Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
599,994 | 18,287,494,385 | IssuesEvent | 2021-10-05 12:00:29 | opensrp/opensrp-server-web | https://api.github.com/repos/opensrp/opensrp-server-web | closed | Deletion of duplicate Plans on Thailand Production OpenSRP DB | Priority: High | There seems to have been duplication of plans in the last attempt to reprocess case triggered plans manually. The plans below have all been found to be duplicate and should be deleted from the Thailand OpenSRP Production DB.
1. 5fc245ff-ad55-48be-8f44-c0a4f8f78b23
2. 976ff742-f314-409d-a969-34963ca8fe0d
3. 3d8c70a2-81f1-4860-b121-e6b7fe0776a2
4. 42dce986-58f0-40cc-a73e-89004cc182a9
5. 45dff529-a004-482a-8a7b-b826b0fea68b
6. d6a2ae2f-c283-4559-a35d-a998981012f5
7. 6789bd52-8aae-474f-b70e-d5d59ba1f376
8. ed2445ec-242a-440e-83c5-6d1562b38ab2
9. de5bc516-93d3-44d6-9e8e-9711db7d5eac
10. bb4afa58-956c-478f-b393-6ffb2c178052
11. 0807cc42-06d6-4239-9c0e-14013a0c4f43
12. 506bcd7e-9fbc-40da-a00f-eff7219393c6
13. f17c3cae-7542-4da6-a0c4-02262f018e03
14. 8e987751-0506-487f-a773-fc9f1b68763b
15. 71debf0c-d7fa-4144-ad84-ff156d136701
16. 16d65570-7796-4f95-ab0c-66b1df861fdb
17. 15c9448a-71aa-4350-b873-97e750cd7118
18. abb28557-64bf-4024-8137-5468dbae1f52
19. 60b3853e-998b-4274-aaf3-625a26eb080e
20. ef9475d4-872c-4794-a891-da284a3c52d3
21. ccec8ff9-694c-49b7-b5a8-712b1a98b8be
22. 46e77801-0aa3-43b4-b378-84c130c6dd30
23. 1400b3ef-af0f-43e4-bd15-f0ab020970e4
24. 2e9425a2-1ac7-43b7-9875-8af66d563610
25. e52f524b-4bdf-4d70-b8c4-4e71a1235863
26. b2168be2-f6d2-436c-99d5-ea7d49f3d840
27. 5a49e21f-7605-4776-8d4c-b785be7f6d78
28. 7a6188a2-23eb-444f-bd06-e6bae33b70dc
29. c88a11a1-61fe-40b9-8431-883e9b599883
30. 390ab86e-90e6-4994-b825-4fc84bf0740e
31. ed8bdf6f-71ca-4e02-afed-45c41501b46a
32. 76a49a0e-3026-466e-98a3-1270fa58b8b8
33. 5a2b2062-71c7-4951-849f-6eb57055130f
34. 297b17f4-7544-4044-9e29-221412732852
35. 2e07b117-7dfc-4222-ba0a-7182f66f9a1f
36. cc480a3f-1def-4e19-aa6d-af2ad6c320c1
37. 75eb0bba-7137-46f9-816a-53e97e499b38
38. bacf8094-3158-4db6-843e-b41a2c5202de
39. af7a1584-61db-4ad0-b3c9-7be5d6756626
40. afcc89d4-f26c-412b-bfe7-95300957096d
41. 2c42e609-5c7a-481f-952c-eedaf4a4cdba
42. 359f8f39-7943-4c26-8081-5c6518c46af3
43. 8f0c4627-a0db-438e-aa9d-98d81f7de625
44. e295a8ac-5b0b-4c89-9a96-7d4289260fd3
45. f0fe32f7-91aa-4eef-a2a2-3a0fdfc262f0
46. 9c11f84e-ee55-4bc0-a802-55d307f3390f
47. 35918c2c-3d95-4dcf-a07f-c00042ebac06
48. 63657a9c-3af6-48ba-8401-481a043bb125
49. 40f76456-74d1-4348-af72-ad810403ba11 | 1.0 | Deletion of duplicate Plans on Thailand Production OpenSRP DB - There seems to have been duplication of plans in the last attempt to reprocess case triggered plans manually. The plans below have all been found to be duplicate and should be deleted from the Thailand OpenSRP Production DB.
1. 5fc245ff-ad55-48be-8f44-c0a4f8f78b23
2. 976ff742-f314-409d-a969-34963ca8fe0d
3. 3d8c70a2-81f1-4860-b121-e6b7fe0776a2
4. 42dce986-58f0-40cc-a73e-89004cc182a9
5. 45dff529-a004-482a-8a7b-b826b0fea68b
6. d6a2ae2f-c283-4559-a35d-a998981012f5
7. 6789bd52-8aae-474f-b70e-d5d59ba1f376
8. ed2445ec-242a-440e-83c5-6d1562b38ab2
9. de5bc516-93d3-44d6-9e8e-9711db7d5eac
10. bb4afa58-956c-478f-b393-6ffb2c178052
11. 0807cc42-06d6-4239-9c0e-14013a0c4f43
12. 506bcd7e-9fbc-40da-a00f-eff7219393c6
13. f17c3cae-7542-4da6-a0c4-02262f018e03
14. 8e987751-0506-487f-a773-fc9f1b68763b
15. 71debf0c-d7fa-4144-ad84-ff156d136701
16. 16d65570-7796-4f95-ab0c-66b1df861fdb
17. 15c9448a-71aa-4350-b873-97e750cd7118
18. abb28557-64bf-4024-8137-5468dbae1f52
19. 60b3853e-998b-4274-aaf3-625a26eb080e
20. ef9475d4-872c-4794-a891-da284a3c52d3
21. ccec8ff9-694c-49b7-b5a8-712b1a98b8be
22. 46e77801-0aa3-43b4-b378-84c130c6dd30
23. 1400b3ef-af0f-43e4-bd15-f0ab020970e4
24. 2e9425a2-1ac7-43b7-9875-8af66d563610
25. e52f524b-4bdf-4d70-b8c4-4e71a1235863
26. b2168be2-f6d2-436c-99d5-ea7d49f3d840
27. 5a49e21f-7605-4776-8d4c-b785be7f6d78
28. 7a6188a2-23eb-444f-bd06-e6bae33b70dc
29. c88a11a1-61fe-40b9-8431-883e9b599883
30. 390ab86e-90e6-4994-b825-4fc84bf0740e
31. ed8bdf6f-71ca-4e02-afed-45c41501b46a
32. 76a49a0e-3026-466e-98a3-1270fa58b8b8
33. 5a2b2062-71c7-4951-849f-6eb57055130f
34. 297b17f4-7544-4044-9e29-221412732852
35. 2e07b117-7dfc-4222-ba0a-7182f66f9a1f
36. cc480a3f-1def-4e19-aa6d-af2ad6c320c1
37. 75eb0bba-7137-46f9-816a-53e97e499b38
38. bacf8094-3158-4db6-843e-b41a2c5202de
39. af7a1584-61db-4ad0-b3c9-7be5d6756626
40. afcc89d4-f26c-412b-bfe7-95300957096d
41. 2c42e609-5c7a-481f-952c-eedaf4a4cdba
42. 359f8f39-7943-4c26-8081-5c6518c46af3
43. 8f0c4627-a0db-438e-aa9d-98d81f7de625
44. e295a8ac-5b0b-4c89-9a96-7d4289260fd3
45. f0fe32f7-91aa-4eef-a2a2-3a0fdfc262f0
46. 9c11f84e-ee55-4bc0-a802-55d307f3390f
47. 35918c2c-3d95-4dcf-a07f-c00042ebac06
48. 63657a9c-3af6-48ba-8401-481a043bb125
49. 40f76456-74d1-4348-af72-ad810403ba11 | priority | deletion of duplicate plans on thailand production opensrp db there seems to have been duplication of plans in the last attempt to reprocess case triggered plans manually the plans below have all been found to be duplicate and should be deleted from the thailand opensrp production db afed | 1 |
176,753 | 6,564,685,953 | IssuesEvent | 2017-09-08 03:26:09 | scitran/core | https://api.github.com/repos/scitran/core | closed | Download endpoint creating new subject folder for each acquisition | Bug High Priority | From @lmperry:
Selecting a single session with many acquisitions:

Leads to many subject folders, each with one of the acquisitions:

It is likely the path logic that finds an existing folder for the file before creating a new one is broken. | 1.0 | Download endpoint creating new subject folder for each acquisition - From @lmperry:
Selecting a single session with many acquisitions:

Leads to many subject folders, each with one of the acquisitions:

It is likely the path logic that finds an existing folder for the file before creating a new one is broken. | priority | download endpoint creating new subject folder for each acquisition from lmperry selecting a single session with many acquisitions leads to many subject folders each with one of the acquisitions it is likely the path logic that finds an existing folder for the file before creating a new one is broken | 1 |
782,189 | 27,489,665,857 | IssuesEvent | 2023-03-04 13:08:44 | storybookjs/react-native | https://api.github.com/repos/storybookjs/react-native | closed | storybook does not handle react-native@0.61.0 new fast refresh mode | bug high priority in progress has PR | **Describe the bug**
Following the release of react-native@0.61.0-rc.0, legacy "hot module reloading" and "live reloading" have been merged in a single new "fast refresh" mode to hot reload components (more details in this [blog post](https://medium.com/@jonnyburger/first-look-react-native-0-61-with-fast-reloading-ad387e502e6f)).
Unfortunately, this new mode does not work with storybook as I get ` WARN Story with id button--default-view already exists in the store!` errors when fast refresh kicks in.
Since livereload isn't an option anymore, it makes developing components with storybook highly impractical as you don't have any refresh anymore.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new react-native project with `react-native init AwesomeProject --version react-native@next`
2. Install storybook, and save/update a story
3. See warning, no refresh
**Expected behavior**
Working fast refresh while editing a story
**System:**
```
Environment Info:
System:
OS: macOS 10.14.5
CPU: (12) x64 Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz
Binaries:
Node: 10.16.0 - ~/.nvm/versions/node/v10.16.0/bin/node
Yarn: 1.17.3 - /usr/local/bin/yarn
npm: 6.9.0 - ~/.nvm/versions/node/v10.16.0/bin/npm
Browsers:
Chrome: 76.0.3809.132
Firefox: 68.0.1
Safari: 12.1.1
npmPackages:
@storybook/react-native: ^5.1.11 => 5.1.11
```
Note that I'm using the following hack to have working hooks:
```
addDecorator(Story => <Story />);
```
| 1.0 | storybook does not handle react-native@0.61.0 new fast refresh mode - **Describe the bug**
Following the release of react-native@0.61.0-rc.0, legacy "hot module reloading" and "live reloading" have been merged in a single new "fast refresh" mode to hot reload components (more details in this [blog post](https://medium.com/@jonnyburger/first-look-react-native-0-61-with-fast-reloading-ad387e502e6f)).
Unfortunately, this new mode does not work with storybook as I get ` WARN Story with id button--default-view already exists in the store!` errors when fast refresh kicks in.
Since livereload isn't an option anymore, it makes developing components with storybook highly impractical as you don't have any refresh anymore.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new react-native project with `react-native init AwesomeProject --version react-native@next`
2. Install storybook, and save/update a story
3. See warning, no refresh
**Expected behavior**
Working fast refresh while editing a story
**System:**
```
Environment Info:
System:
OS: macOS 10.14.5
CPU: (12) x64 Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz
Binaries:
Node: 10.16.0 - ~/.nvm/versions/node/v10.16.0/bin/node
Yarn: 1.17.3 - /usr/local/bin/yarn
npm: 6.9.0 - ~/.nvm/versions/node/v10.16.0/bin/npm
Browsers:
Chrome: 76.0.3809.132
Firefox: 68.0.1
Safari: 12.1.1
npmPackages:
@storybook/react-native: ^5.1.11 => 5.1.11
```
Note that I'm using the following hack to have working hooks:
```
addDecorator(Story => <Story />);
```
| priority | storybook does not handle react native new fast refresh mode describe the bug following the release of react native rc legacy hot module reloading and live reloading have been merged in a single new fast refresh mode to hot reload components more details in this unfortunately this new mode does not work with storybook as i get warn story with id button default view already exists in the store errors when fast refresh kicks in since livereload isn t an option anymore it makes developing components with storybook highly impractical as you don t have any refresh anymore to reproduce steps to reproduce the behavior create a new react native project with react native init awesomeproject version react native next install storybook and save update a story see warning no refresh expected behavior working fast refresh while editing a story system environment info system os macos cpu intel r core tm cpu binaries node nvm versions node bin node yarn usr local bin yarn npm nvm versions node bin npm browsers chrome firefox safari npmpackages storybook react native note that i m using the following hack to have working hooks adddecorator story | 1 |
582,391 | 17,360,675,789 | IssuesEvent | 2021-07-29 20:08:09 | factly/kavach | https://api.github.com/repos/factly/kavach | closed | Applications UI is confusing, make it consistent to the rest of the entities | priority:high studio | Applications entity is not consistent with the rest of the UI in Dega or other applications. Make the following changes:
- [x] Clicking on Edit should take to the application page where `Tokens` should show up as well. Currently, you click on the name of the application to get to the tokens
- [x] Breadcrumb is just showing `applications` regardless of the application it is in. When you click on `Edit`, the breadcrumb should be `Applications/<application-name>`
- [x] Change the label from `New api key` to `New API Token`
- [x] Change the dialog title from `Generate Api key` to `Create New API Token`
- [x] Delete the `return` button in the API creation dialog. It is redundant. Users can close the dialog by `X` on the top.
- [x] Change the button label from `Submit` to `Create API Token` and move the button to the bottom of the dialog.
- [x] Change the description to multiline input | 1.0 | Applications UI is confusing, make it consistent to the rest of the entities - Applications entity is not consistent with the rest of the UI in Dega or other applications. Make the following changes:
- [x] Clicking on Edit should take to the application page where `Tokens` should show up as well. Currently, you click on the name of the application to get to the tokens
- [x] Breadcrumb is just showing `applications` regardless of the application it is in. When you click on `Edit`, the breadcrumb should be `Applications/<application-name>`
- [x] Change the label from `New api key` to `New API Token`
- [x] Change the dialog title from `Generate Api key` to `Create New API Token`
- [x] Delete the `return` button in the API creation dialog. It is redundant. Users can close the dialog by `X` on the top.
- [x] Change the button label from `Submit` to `Create API Token` and move the button to the bottom of the dialog.
- [x] Change the description to multiline input | priority | applications ui is confusing make it consistent to the rest of the entities applications entity is not consistent with the rest of the ui in dega or other applications make the following changes clicking on edit should take to the application page where tokens should show up as well currently you click on the name of the application to get to the tokens breadcrumb is just showing applications regardless of the application it is in when you click on edit the breadcrumb should be applications change the label from new api key to new api token change the dialog title from generate api key to create new api token delete the return button in the api creation dialog it is redundant users can close the dialog by x on the top change the button label from submit to create api token and move the button to the bottom of the dialog change the description to multiline input | 1 |
617,264 | 19,346,529,755 | IssuesEvent | 2021-12-15 11:24:15 | numba/numba | https://api.github.com/repos/numba/numba | closed | Generalize type inference to more kinds of analysis | Numba1.0 highpriority feature_request | Need way to store and query more kinds of code analyses beyond type inference.
Examples: Does this function ever acquire the GIL?
| 1.0 | Generalize type inference to more kinds of analysis - Need way to store and query more kinds of code analyses beyond type inference.
Examples: Does this function ever acquire the GIL?
| priority | generalize type inference to more kinds of analysis need way to store and query more kinds of code analyses beyond type inference examples does this function ever acquire the gil | 1 |
121,789 | 4,821,341,831 | IssuesEvent | 2016-11-05 08:55:04 | CS2103AUG2016-W15-C4/main | https://api.github.com/repos/CS2103AUG2016-W15-C4/main | closed | Bug in redo command | priority.high type.bug | When there is nothing to redo and a redo command is entered, it deletes a task away.
| 1.0 | Bug in redo command - When there is nothing to redo and a redo command is entered, it deletes a task away.
| priority | bug in redo command when there is nothing to redo and a redo command is entered it deletes a task away | 1 |
604,198 | 18,679,272,667 | IssuesEvent | 2021-11-01 01:51:58 | AY2122S1-CS2103T-W08-1/tp | https://api.github.com/repos/AY2122S1-CS2103T-W08-1/tp | closed | [PE-D] People in a subgroup is not counted as member of the group? | bug priority.high | No details provided by bug reporter.
<!--session: 1635494321030-00732807-a6cf-4dff-abad-287417811d6a-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.Medium` `type.FeatureFlaw`
original: bernarduskrishna/ped#14 | 1.0 | [PE-D] People in a subgroup is not counted as member of the group? - No details provided by bug reporter.
<!--session: 1635494321030-00732807-a6cf-4dff-abad-287417811d6a-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.Medium` `type.FeatureFlaw`
original: bernarduskrishna/ped#14 | priority | people in a subgroup is not counted as member of the group no details provided by bug reporter labels severity medium type featureflaw original bernarduskrishna ped | 1 |
772,025 | 27,102,047,192 | IssuesEvent | 2023-02-15 09:23:35 | NewcastleRSE/orqa_image_grading_client | https://api.github.com/repos/NewcastleRSE/orqa_image_grading_client | closed | Question labels not showing up on mobile | high priority | **Describe the bug**
On the organ marking pages, the sliders and buttons and drop-downs show up but the labels explaining what these represent are not visible.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to the app on a mobile browser
2. Log in
**Expected behavior**
Labels should be visible.
**Smartphone (please complete the following information):**
- Device: Moto G7 Power
- OS: Android
- Browser DuckDuckGo
**Additional context**
Add any other context about the problem here.
| 1.0 | Question labels not showing up on mobile - **Describe the bug**
On the organ marking pages, the sliders and buttons and drop-downs show up but the labels explaining what these represent are not visible.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to the app on a mobile browser
2. Log in
**Expected behavior**
Labels should be visible.
**Smartphone (please complete the following information):**
- Device: Moto G7 Power
- OS: Android
- Browser DuckDuckGo
**Additional context**
Add any other context about the problem here.
| priority | question labels not showing up on mobile describe the bug on the organ marking pages the sliders and buttons and drop downs show up but the labels explaining what these represent are not visible to reproduce steps to reproduce the behavior go to the app on a mobile browser log in expected behavior labels should be visible smartphone please complete the following information device moto power os android browser duckduckgo additional context add any other context about the problem here | 1 |
692,289 | 23,728,461,535 | IssuesEvent | 2022-08-30 22:14:00 | curiouslearning/FeedTheMonsterH5P | https://api.github.com/repos/curiouslearning/FeedTheMonsterH5P | closed | The game pauses when user tries to replay the level after the stones appear. | High Priority | **Describe the bug**
When the user tries to replay the level after the stones appear automatically the game pauses.
**To Reproduce**
Steps to reproduce the behavior:
1. Start the Apache and MySQL server and open the localhost.
2. In the Curious Learning content go to click on play button and select a mode.
3. Select a level and wait till the stones appear then click on pause button and replay.
**Expected behavior**
The game should not pause when the user tries to replay the level.
**Screenshots**

**Desktop (please complete the following information):**
• OS: Windows 10
• Browser: Chrome
• Version: N/A
**Additional context**
N/A (edited) | 1.0 | The game pauses when user tries to replay the level after the stones appear. - **Describe the bug**
When the user tries to replay the level after the stones appear automatically the game pauses.
**To Reproduce**
Steps to reproduce the behavior:
1. Start the Apache and MySQL server and open the localhost.
2. In the Curious Learning content go to click on play button and select a mode.
3. Select a level and wait till the stones appear then click on pause button and replay.
**Expected behavior**
The game should not pause when the user tries to replay the level.
**Screenshots**

**Desktop (please complete the following information):**
• OS: Windows 10
• Browser: Chrome
• Version: N/A
**Additional context**
N/A (edited) | priority | the game pauses when user tries to replay the level after the stones appear describe the bug when the user tries to replay the level after the stones appear automatically the game pauses to reproduce steps to reproduce the behavior start the apache and mysql server and open the localhost in the curious learning content go to click on play button and select a mode select a level and wait till the stones appear then click on pause button and replay expected behavior the game should not pause when the user tries to replay the level screenshots desktop please complete the following information • os windows • browser chrome • version n a additional context n a edited | 1 |
751,055 | 26,229,010,850 | IssuesEvent | 2023-01-04 21:42:13 | hoffstadt/DearPyGui | https://api.github.com/repos/hoffstadt/DearPyGui | closed | Save style settings in show_style_editor | type: feature priority: high | DPG 1.3.1
`Show_style_editor` used to have functionality to copy the style settings to clipboard. This functionality was removed as part of the theme system overhaul towards version 1.0. Could this functionality be put back in, either as a copy to clipboard and/or save to theme file? | 1.0 | Save style settings in show_style_editor - DPG 1.3.1
`Show_style_editor` used to have functionality to copy the style settings to clipboard. This functionality was removed as part of the theme system overhaul towards version 1.0. Could this functionality be put back in, either as a copy to clipboard and/or save to theme file? | priority | save style settings in show style editor dpg show style editor used to have functionality to copy the style settings to clipboard this functionality was removed as part of the theme system overhaul towards version could this functionality be put back in either as a copy to clipboard and or save to theme file | 1 |
495,660 | 14,285,922,745 | IssuesEvent | 2020-11-23 14:32:21 | Humorloos/semantic_web_project | https://api.github.com/repos/Humorloos/semantic_web_project | opened | App does not update the suggestions when adding a new one | bug priority high | When the user adds a proposed topic to their list of accepted topics, the app does not reload the suggestions. | 1.0 | App does not update the suggestions when adding a new one - When the user adds a proposed topic to their list of accepted topics, the app does not reload the suggestions. | priority | app does not update the suggestions when adding a new one when the user adds a proposed topic to their list of accepted topics the app does not reload the suggestions | 1 |
596,497 | 18,104,760,122 | IssuesEvent | 2021-09-22 17:56:28 | os-climate/os_c_data_commons | https://api.github.com/repos/os-climate/os_c_data_commons | closed | OSC-Platform-013: Running GPU workloads | high priority | As a developer on the corporate data pipeline, I need to access GPU to run the NLP code | 1.0 | OSC-Platform-013: Running GPU workloads - As a developer on the corporate data pipeline, I need to access GPU to run the NLP code | priority | osc platform running gpu workloads as a developer on the corporate data pipeline i need to access gpu to run the nlp code | 1 |
133,275 | 5,200,085,964 | IssuesEvent | 2017-01-23 22:42:24 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | closed | amp-form submission breaks in several browsers, including Edge 13, 14, IE 11, Safari 10 | Priority: High Type: Production Bug | amp-form submission breaks in several browsers.
## How do we reproduce the issue?
Problem noticed on a private website, nevertheless it also exists on other amp-form powered websites, most prominently in the examples provided by the Project itself:
https://ampbyexample.com/components/amp-form/preview/
1. Submit the form in the bottom of the page.
2. Observe an error being thrown in the console:
```
InvalidStateError
v0.js (314,433)
```
## What browsers are affected?
At least Edge 13, 14, IE 11, Safari 10.
Latest Chrome, Firefox, Opera are not affected.
## Which AMP version is affected?
Never tested before, can't say if it's a new issue.
Version 1484769855829 | 1.0 | amp-form submission breaks in several browsers, including Edge 13, 14, IE 11, Safari 10 - amp-form submission breaks in several browsers.
## How do we reproduce the issue?
Problem noticed on a private website, nevertheless it also exists on other amp-form powered websites, most prominently in the examples provided by the Project itself:
https://ampbyexample.com/components/amp-form/preview/
1. Submit the form in the bottom of the page.
2. Observe an error being thrown in the console:
```
InvalidStateError
v0.js (314,433)
```
## What browsers are affected?
At least Edge 13, 14, IE 11, Safari 10.
Latest Chrome, Firefox, Opera are not affected.
## Which AMP version is affected?
Never tested before, can't say if it's a new issue.
Version 1484769855829 | priority | amp form submission breaks in several browsers including edge ie safari amp form submission breaks in several browsers how do we reproduce the issue problem noticed on a private website nevertheless it also exists on other amp form powered websites most prominently in the examples provided by the project itself submit the form in the bottom of the page observe an error being thrown in the console invalidstateerror js what browsers are affected at least edge ie safari latest chrome firefox opera are not affected which amp version is affected never tested before can t say if it s a new issue version | 1 |
646,607 | 21,053,897,364 | IssuesEvent | 2022-03-31 23:45:12 | canonical-web-and-design/maas-ui | https://api.github.com/repos/canonical-web-and-design/maas-ui | closed | Commission and Test forms unable to submit with empty tag selector field | Priority: High Bug 🐛 | You can no longer submit the machine commission or test forms without entering a value into the tag selector component, even though this value is only used to filter the list and is not part of the form values itself.

| 1.0 | Commission and Test forms unable to submit with empty tag selector field - You can no longer submit the machine commission or test forms without entering a value into the tag selector component, even though this value is only used to filter the list and is not part of the form values itself.

| priority | commission and test forms unable to submit with empty tag selector field you can no longer submit the machine commission or test forms without entering a value into the tag selector component even though this value is only used to filter the list and is not part of the form values itself | 1 |
343,041 | 10,324,743,513 | IssuesEvent | 2019-09-01 11:55:53 | OpenSRP/opensrp-client-chw-anc | https://api.github.com/repos/OpenSRP/opensrp-client-chw-anc | closed | Fix ANC register list page after registering a new pregnant woman | High Priority bug | When registering a pregnant woman, the ANC register list changes, i.e. a new QR code button appears in the bottom navigation menu, all demographic info disappears for all women, the due filter button is replaced with another button. Have shown this to Pauline and Ronald.
When you revisit the ANC register page, it appears normally again.
@paulinembabu Assigning to you, since you indicated you might know what's happening here.
This is the 23 August Togo release.
| 1.0 | Fix ANC register list page after registering a new pregnant woman - When registering a pregnant woman, the ANC register list changes, i.e. a new QR code button appears in the bottom navigation menu, all demographic info disappears for all women, the due filter button is replaced with another button. Have shown this to Pauline and Ronald.
When you revisit the ANC register page, it appears normally again.
@paulinembabu Assigning to you, since you indicated you might know what's happening here.
This is the 23 August Togo release.
| priority | fix anc register list page after registering a new pregnant woman when registering a pregnant woman the anc register list changes i e a new qr code button appears in the bottom navigation menu all demographic info disappears for all women the due filter button is replaced with another button have shown this to pauline and ronald when you revisit the anc register page it appears normally again paulinembabu assigning to you since you indicated you might know what s happening here this is the august togo release | 1 |
726,428 | 24,999,203,231 | IssuesEvent | 2022-11-03 05:41:23 | AY2223S1-CS2103T-W15-1/tp | https://api.github.com/repos/AY2223S1-CS2103T-W15-1/tp | closed | [PE-D][Tester E] Inconsistency in command format in the UG and the application | bug priority.High | 

I propose that only one is followed.
<!--session: 1666943795956-267eabbd-e37c-456e-a489-403eb3db760b--><!--Version: Web v3.4.4-->
-------------
Labels: `severity.Low` `type.DocumentationBug`
original: yixiann/ped#4 | 1.0 | [PE-D][Tester E] Inconsistency in command format in the UG and the application - 

I propose that only one is followed.
<!--session: 1666943795956-267eabbd-e37c-456e-a489-403eb3db760b--><!--Version: Web v3.4.4-->
-------------
Labels: `severity.Low` `type.DocumentationBug`
original: yixiann/ped#4 | priority | inconsistency in command format in the ug and the application i propose that only one is followed labels severity low type documentationbug original yixiann ped | 1 |
101,710 | 4,128,820,940 | IssuesEvent | 2016-06-10 08:28:17 | navacohen90/Click-a-Table | https://api.github.com/repos/navacohen90/Click-a-Table | closed | Iter 1 - menu view and courses order | 3 - Done general task priority1 - HIGH | ## Iter 1
#### Iteration page: [here](https://github.com/navacohen90/Click-a-Table/wiki/Iter-1---MVP)
##### Related User Stories:
US4
#### Checklist, e.g.,:
- [x] Feature scenarios/tests passing
- [x] Iteration page updated, including:
- [x] Iteration retrospective
- [x] Client review
- [x] Issues updates
- [x] Section on application of course materials
- [x] git tag
- [x] Next iteration:
- [x] Open page
- [x] Select stories and plan issues
- [x] Test scenarios
- [x] All engineers filled peer-review
- [x] Submitted
- [x] Announcement in chat room
- [x] Assign this issue to checker
- [x] Register for a review meeting
<!---
@huboard:{"order":0.0009765625,"milestone_order":7,"custom_state":""}
-->
| 1.0 | Iter 1 - menu view and courses order - ## Iter 1
#### Iteration page: [here](https://github.com/navacohen90/Click-a-Table/wiki/Iter-1---MVP)
##### Related User Stories:
US4
#### Checklist, e.g.,:
- [x] Feature scenarios/tests passing
- [x] Iteration page updated, including:
- [x] Iteration retrospective
- [x] Client review
- [x] Issues updates
- [x] Section on application of course materials
- [x] git tag
- [x] Next iteration:
- [x] Open page
- [x] Select stories and plan issues
- [x] Test scenarios
- [x] All engineers filled peer-review
- [x] Submitted
- [x] Announcement in chat room
- [x] Assign this issue to checker
- [x] Register for a review meeting
<!---
@huboard:{"order":0.0009765625,"milestone_order":7,"custom_state":""}
-->
| priority | iter menu view and courses order iter iteration page related user stories checklist e g feature scenarios tests passing iteration page updated including iteration retrospective client review issues updates section on application of course materials git tag next iteration open page select stories and plan issues test scenarios all engineers filled peer review submitted announcement in chat room assign this issue to checker register for a review meeting huboard order milestone order custom state | 1 |
772,308 | 27,115,867,001 | IssuesEvent | 2023-02-15 18:31:09 | DSpace/dspace-angular | https://api.github.com/repos/DSpace/dspace-angular | closed | Edit bitstream in submission requires from/until date at all times | bug component: submission high priority | **Describe the bug**
When editing a bitstream in the submission UI, the "From" and "Until" date fields are flagged as required. It's impossible to submit the form without filling them out.

This means its not currently possible to edit just the "Description". It's also not possible to set the "Access condition Type" without filling out **both** date fields.
**To Reproduce**
Steps to reproduce the behavior:
1. Edit Bitstream in a submission.
2. Note that it is impossible to submit the form until you fill out both Date fields
**Expected behavior**
* "Grant access from" should only be required for embargo condition type. Otherwise it should be grayed out or not required.
* "Grant access until" should only be required for lease condition type. Otherwise, it should be grayed out or not required.
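The expected behavior amounts to conditional validation: each date field is required only when the chosen access-condition type actually uses it. A minimal sketch of that rule (field and type names are hypothetical, not DSpace's actual model):

```python
def validate_access_condition(cond_type, start_date, end_date):
    """Return a list of validation errors for one access condition.

    Dates are only required when the condition type uses them:
    "embargo" needs a start date, "lease" needs an end date.
    For every other type, both date fields are optional.
    """
    errors = []
    if cond_type == "embargo" and start_date is None:
        errors.append('"Grant access from" is required for an embargo')
    if cond_type == "lease" and end_date is None:
        errors.append('"Grant access until" is required for a lease')
    return errors
```

With a rule like this, editing only the description of an open-access bitstream produces no errors, while an embargo without a start date is still rejected.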
690,351 | 23,654,746,412 | IssuesEvent | 2022-08-26 10:05:21 | projectdiscovery/nuclei | https://api.github.com/repos/projectdiscovery/nuclei | closed | panic: runtime error: invalid memory address or nil pointer dereference | Priority: High Status: Completed Type: Bug |
### Error starting Nuclei:
I am getting an error when starting Nuclei.
I am using Nuclei version 2.7.6 and golang version 1.17.
```
[ERR] Could not update templates: could not read configuration file: EOF
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x10a5335]
goroutine 1 [running]:
github.com/projectdiscovery/nuclei/v2/pkg/catalog/loader.NewConfig(...)
/root/go/pkg/mod/github.com/projectdiscovery/nuclei/v2@v2.7.6/pkg/catalog/loader/loader.go:79
github.com/projectdiscovery/nuclei/v2/internal/runner.(*Runner).RunEnumeration(0xc00067d810)
/root/go/pkg/mod/github.com/projectdiscovery/nuclei/v2@v2.7.6/internal/runner/runner.go:380 +0xe75
main.main()
/root/go/pkg/mod/github.com/projectdiscovery/nuclei/v2@v2.7.6/cmd/nuclei/main.go:90 +0x75d
```
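The trace shows the sequence: the template config read fails with EOF, and `NewConfig` is then handed a nil configuration and dereferences it. The defensive pattern is to treat an empty or unreadable file as defaults rather than nil; a rough illustration in Python (not nuclei's actual Go code, and the key name is a placeholder):

```python
import json

def load_config(path, defaults):
    """Read a JSON config file, falling back to defaults on a missing
    or empty file instead of returning None (the nil-config case that
    crashes the caller in this report)."""
    try:
        with open(path) as fh:
            text = fh.read().strip()
    except FileNotFoundError:
        return dict(defaults)
    if not text:  # empty file: the "could not read configuration file: EOF" case
        return dict(defaults)
    return {**defaults, **json.loads(text)}
```

The caller then always receives a usable mapping, so a corrupt or zero-byte config degrades to a warning instead of a segfault-style panic.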
```console
__ _
____ __ _______/ /__ (_)
/ __ \/ / / / ___/ / _ \/ /
/ / / / /_/ / /__/ / __/ /
/_/ /_/\__,_/\___/_/\___/_/ 2.7.6
projectdiscovery.io
[WRN] Use with caution. You are responsible for your actions.
[WRN] Developers assume no liability and are not responsible for any misuse or damage.
Version: 2.7.6
Operative System: linux
Architecture: amd64
Go Version: go1.18.5
Compiler: gc
File "/home/user/.config/nuclei/config.yaml" Read => Ok
File "/home/user/.config/nuclei/config.yaml" Write => Ok
File "/home/user/.config/nuclei/.nuclei-ignore" Read => Ok
File "/home/user/.config/nuclei/.nuclei-ignore" Write => Ok
File "/home/user/nuclei-templates/.checksum" Read => Ko (open /home/user/nuclei-templates/.checksum: no such file or directory)
File "/home/user/nuclei-templates/.checksum" Write => Ko (open /home/user/nuclei-templates/.checksum: no such file or directory)
IPv4 connectivity to scanme.sh:80 => Ok
IPv6 connectivity to scanme.sh:80 => Ko (dial tcp6 [2400:6180:0:d0::91:1001]:80: connect: network is unreachable)
IPv4 UDP connectivity to scanme.sh:53 => Ok
```
554,346 | 16,418,464,890 | IssuesEvent | 2021-05-19 09:37:49 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Lightbox Issue: The main tag for the "amp-lightbox-gallery extension script" tag is "body" | Urgent [Priority: HIGH] bug |
A user is having an issue where the main tag for the "amp-lightbox-gallery extension script" tag is "body".
Website: https://canovaonline.com/listing/immobile-rif-2272-due-monolocali-in-vendita-a-borgomanero-no/?amp
Reference: https://secure.helpscout.net/conversation/1506233655/195164?folderId=4195936
762,645 | 26,725,995,762 | IssuesEvent | 2023-01-29 18:16:05 | scprogramming/Olive | https://api.github.com/repos/scprogramming/Olive | closed | [Courses] Configuration of drip and access to content | Analysis High priority Courses |
Allow for drip of content as well as possible access to content
479,665 | 13,804,365,409 | IssuesEvent | 2020-10-11 08:36:13 | AY2021S1-CS2103T-W15-2/tp | https://api.github.com/repos/AY2021S1-CS2103T-W15-2/tp | closed | As a user, I can add my students' details | priority.High type.Story |
... So that I can store them and retrieve them whenever I need.
372,232 | 11,011,299,399 | IssuesEvent | 2019-12-04 16:03:34 | wherebyus/general-tasks | https://api.github.com/repos/wherebyus/general-tasks | opened | When a client books a topline promo, they should only be allowed to book one per day | Priority: High Product: Promos UX: Validated |
## Feature or problem
In the promo app for Pittsburgh, Idelic was able to book three topline promos (the system should only allow them to book one per day) on 12/6, 12/10, and 12/20. This is the same issue we had last month, only this time all the promo spots are from the same buyer.
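The intended constraint is one topline slot per calendar day across all buyers. A toy sketch of that guard (class and method names are made up, not the promo app's real code):

```python
from datetime import date

class ToplineCalendar:
    """At most one topline promo booking per calendar day."""

    def __init__(self):
        self._by_day = {}

    def book(self, day, buyer):
        # Refuse the slot if the day is already taken, even when the
        # same buyer (as with Idelic here) asks for it again.
        if day in self._by_day:
            return False
        self._by_day[day] = buyer
        return True
```

The check has to run server-side at booking time, not only in the booking form, so that repeated requests from one buyer cannot claim a day twice.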
## UX Validation
Validated
### Suggested priority
High
### Stakeholders
*Submitted:* kevin
### Definition of done
How will we know when this feature is complete?
### Subtasks
A detailed list of changes that need to be made or subtasks. One checkbox per.
- [ ] Brew the coffee
## Developer estimate
To help the team accurately estimate the complexity of this task,
take a moment to walk through this list and estimate each item. At the end, you can total
the estimates and round to the nearest prime number.
If any of these are at a `5` or higher, or if the total is above a `5`, consider breaking
this issue into multiple smaller issues.
- [ ] Changes to the database ()
- [ ] Changes to the API ()
- [ ] Testing Changes to the API ()
- [ ] Changes to Application Code ()
- [ ] Adding or updating unit tests ()
- [ ] Local developer testing ()
### Total developer estimate: 0
## Additional estimate
- [ ] Code review ()
- [ ] QA Testing ()
- [ ] Stakeholder Sign-off ()
- [ ] Deploy to Production ()
### Total additional estimate: 1
## QA Notes
Detailed instructions for testing, one checkbox per test to be completed.
### Contextual tests
- [ ] Accessibility check
- [ ] Cross-browser check (Edge, Chrome, Firefox)
- [ ] Responsive check
496,427 | 14,346,959,770 | IssuesEvent | 2020-11-29 04:02:47 | okTurtles/group-income-simple | https://api.github.com/repos/okTurtles/group-income-simple | opened | Mobile UI issue: can't see bottom bar in navigation sidebar | App:Frontend Kind:Bug Note:UI/UX Note:Up-for-grabs Priority:High |
### Problem
In Mobile Safari I wasn't able to get it to scroll down enough so that I could see my user icon and the user settings wheel button:

### Solution
Fix.
11,311 | 2,610,111,292 | IssuesEvent | 2015-02-26 18:34:26 | chrsmith/scribefire-chrome | https://api.github.com/repos/chrsmith/scribefire-chrome | closed | Changing Post Time | auto-migrated Milestone-1.0b1 Priority-High Type-Enhancement |
```
What new feature do you want?
As summary, it's a simple but useful function.
```
-----
Original issue reported on code.google.com by `darren8...@gmail.com` on 14 Apr 2010 at 5:21
638,823 | 20,739,258,087 | IssuesEvent | 2022-03-14 16:12:23 | Azure/static-web-apps-cli | https://api.github.com/repos/Azure/static-web-apps-cli | closed | staticwebapp.config.json including in-the-middle wildcard * works despite such wildcard placement not being allowed on Azure Static Web Apps | type: bug good first issue priority: high (P0) scope: rules engine |
Before filing this issue, please ensure you're using the latest CLI by running `swa --version` and comparing to the
latest version on [npm](https://www.npmjs.com/package/@azure/static-web-apps-cli).
```
$: swa --version
0.8.2
```
**Are you accessing the CLI from the default port `:4280` ?**
- [ ] No, I am using a different port number (--port) and accessing the CLI from that port
- [x] Yes, I am accessing the CLI from port `:4280`
> Make sure you are accessing the URL printed in the console when running!
**Describe the bug**
The CLI respects wildcards in `staticwebapp.config.json` even if such a configuration would not be allowed in Azure Static
Web Apps.
The only two cases of `*` that are allowed
[in the docs](https://docs.microsoft.com/en-us/azure/static-web-apps/configuration#wildcards)
are:
- end of route (e.g. `/something*` or `/something/*`)
- file extension (e.g. `/something/*.png` or `/something/*.{png,jpg,tiff}`)
What is **not allowed** is placing a wildcard in the middle like so:
- `/this_is_public/*/secret_subresource`
However, when running in `@azure/static-web-apps-cli` it is perfectly valid and behaves correctly.
e.g. restricting access to a certain role for every subresource of given pattern works. Having a routes section like
this:
```json
{
//...
"routes": [
{
"route": "/products/*/administer",
"allowedRoles": [
"productsadministrator"
]
}
]
//...
}
```
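The routes example above uses exactly the in-the-middle wildcard the docs disallow. A local check mirroring the documented rules could look like the sketch below (a hypothetical helper, not the CLI's actual validator):

```python
import re

# Allowed wildcard placements per the Static Web Apps docs:
#   trailing:  /something*  or  /something/*
#   extension: /something/*.png  or  /something/*.{png,jpg,tiff}
_TRAILING = re.compile(r"^/[^*]*\*$")
_EXTENSION = re.compile(r"^/[^*]*\*\.(\w+|\{\w+(,\w+)*\})$")

def is_valid_swa_route(route):
    """Accept wildcard-free routes and the two documented '*' forms;
    reject any route with '*' in the middle of the path."""
    if "*" not in route:
        return True
    return bool(_TRAILING.match(route) or _EXTENSION.match(route))
```

Running such a check at startup would make the CLI fail fast on `/products/*/administer`, matching the error the hosted service returns at deploy time.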
will allow the anonymous user to access:
- `/products/bikes`
- `/products/bikes/top`
- `/products/cars/by-brand`
and **will prevent such user** from accessing:
- `/products/bikes/administer`
- `/products/cars/administer`
logging in with mocked role `productsadministrator` will again enable visiting those routes.
Publishing the app with such config to actual Azure Static Web Apps fails with the following issue:
```
Encountered an issue while validating staticwebapp.config.json: Found a route not denoting file extensions correctly in the 'route' property. A '.' must be included after the '*' when listing allowed file extensions. Visit https://aka.ms/static-web-apps-configuration for more information.
```
**To Reproduce**
Steps to reproduce the behavior:
1. Clone [this demonstration repository](https://github.com/piotr-lasota/azure-cli-route-wildcards-issue)
2. run `yarn` to install dependencies
3. run `yarn demonstrate` to launch `serve` with HTML files accessible on `http://localhost:3000` and `swa` on
   port `4280` targeting `serve` content
4. go to `http://localhost:4280`
5. Use the links to navigate to `First Resource` or `Second Resource` and notice the content is available
6. Navigate to the Restricted Subresource of any one of the two
7. Notice you will be redirected to `Unauthorized`
8. Navigate back to `Home`
9. Navigate to `Log In` and make sure you type in role `secret`
10. Try navigating to the `Restricted Subresource` again and notice you will be successfully presented with its
    content.
This is all achieved via the following `staticwebapp.config.json` file:
```json
{
"routes": [
{
"route": "/*/restricted",
"allowedRoles": [
"secret"
]
},
{
"route": "/login",
"redirect": "/.auth/login/github"
}
],
"responseOverrides": {
"401": {
"redirect": "/401"
},
"404": {
"redirect": "/404"
}
}
}
```
**Expected behavior**
An exception should be thrown when launching `swa` indicating that the config provided is invalid.
**Desktop (please complete the following information):**
- OS: `ArchLinux` running `5.16.5-arch1-1` kernel
- Browser: `Mozilla Firefox`
- Version: `96.0.3`
298,893 | 9,202,153,902 | IssuesEvent | 2019-03-07 21:36:10 | alakajam-team/alakajam | https://api.github.com/repos/alakajam-team/alakajam | closed | Rethink "Divisions" selector in games search form | high priority |
Currently it's confusing more than anything, especially since we tried to take into account the alternate "ranked"/"unranked" divisions of the Kajams.
319,497 | 9,745,107,865 | IssuesEvent | 2019-06-03 08:50:25 | RoboTeamTwente/roboteam_ai | https://api.github.com/repos/RoboTeamTwente/roboteam_ai | closed | Pass needs to be faster in a dynamic situation | enhancement high priority |
**Is your feature request related to a problem? Please describe.**
Currently the passCoach waits for the receiver to be ready in its position. This could - and probably will - take too much time in a dynamic situation.
**Describe the solution you'd like**
My solution would be to add a variable in the passCoach, which will determine if the pass should be executed whenever (forcedPass = true for example), or that the passer should wait explicitly till the receiver is really ready (forcedPass = false?)
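The proposed variable amounts to one extra condition in the pass decision; a tiny sketch under the naming used in the request (illustrative only, not roboteam_ai's actual code):

```python
class PassCoach:
    def __init__(self, forced_pass=False):
        # forced_pass=True : kick as soon as the pass is on, without
        #                    waiting for the receiver (dynamic play).
        # forced_pass=False: wait until the receiver reports ready.
        self.forced_pass = forced_pass

    def should_execute_pass(self, receiver_ready):
        return self.forced_pass or receiver_ready
```

Keeping the flag on the coach rather than on the passer means a single strategy-level switch can trade pass accuracy for speed in dynamic situations.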
765,250 | 26,838,992,085 | IssuesEvent | 2023-02-02 22:08:58 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | closed | Ring of Frost causing Impact chain-procs | Class: Mage Spell Priority: High Status: Confirmed |
https://www.youtube.com/watch?v=Wyg0KoXH7-w
Not sure what triggers this, but Ring of Frost is not a damaging spell.
 | 1.0 | Ring of Frost causing Impact chain-procs - https://www.youtube.com/watch?v=Wyg0KoXH7-w
Not sure what triggers this, but Ring of Frost is not a damaging spell.
 | priority | ring of frost causing impact chain procs not sure what triggers this but ring of frost is not a damaging spell | 1 |
413,246 | 12,061,744,504 | IssuesEvent | 2020-04-16 00:48:13 | Lev-Echad/levechad-backend | https://api.github.com/repos/Lev-Echad/levechad-backend | opened | Add relation between cities and regions | High Priority enhancement requires-migration |
**Related component**: client
## Description
Add region key to cities.
Use the dataset from the national data site:
https://data.gov.il/dataset/citiesandsettelments/resource/d4901968-dad3-4845-a9b0-a57d027f11ab?inner_span=True
First change regions in the file to Northern, Central and Southern
----
### Prerequisites
_[Make sure you've done the following before posting your issue:]_
* [ ] I have filled in the correct labels (fill at least one of bug/enhancement)
* [ ] I have searched for duplicate issues and found none
* [ ] My issue is phrased as a task - it's clear what needs to be done here.
113,507 | 4,560,878,710 | IssuesEvent | 2016-09-14 09:37:22 | wp-property/wp-avalon | https://api.github.com/repos/wp-property/wp-avalon | closed | when 2 advanced range dropdown used styling issue | bug High priority | 
and the same on default horizontal search - https://squarefeet.ca/properties/search/ | 1.0 | when 2 advanced range dropdown used styling issue - 
and the same on default horizontal search - https://squarefeet.ca/properties/search/ | priority | when advanced range dropdown used styling issue and the same on default horizontal search | 1 |
79,565 | 3,536,304,740 | IssuesEvent | 2016-01-17 05:50:14 | gophish/gophish | https://api.github.com/repos/gophish/gophish | closed | Can't Delete Campaign | bug high-priority | Looks like we forgot to add the logic to delete campaigns. Currently, we only throw a [bogus alert.](https://github.com/gophish/gophish/blob/master/static/js/app/campaigns.js#L107) | 1.0 | Can't Delete Campaign - Looks like we forgot to add the logic to delete campaigns. Currently, we only throw a [bogus alert.](https://github.com/gophish/gophish/blob/master/static/js/app/campaigns.js#L107) | priority | can t delete campaign looks like we forgot to add the logic to delete campaigns currently we only throw a | 1 |
184,072 | 6,700,957,429 | IssuesEvent | 2017-10-11 07:48:43 | Haivision/srt | https://api.github.com/repos/Haivision/srt | closed | TRAVIS: Mac machine is broken | Area: CI Priority: High Status: Pending Type: Bug | Travis now reports errors ALWAYS for Mac, before even cloning the SRT repository. It reports some error around a probably outdated brew framework. | 1.0 | TRAVIS: Mac machine is broken - Travis now reports errors ALWAYS for Mac, before even cloning the SRT repository. It reports some error around a probably outdated brew framework. | priority | travis mac machine is broken travis now reports errors always for mac before even cloning the srt repository it reports some error around a probably outdated brew framework | 1 |
707,425 | 24,306,529,762 | IssuesEvent | 2022-09-29 17:56:35 | cds-snc/notification-planning | https://api.github.com/repos/cds-snc/notification-planning | closed | Properly report multiple SMS sent for billing purposes following a long notification | High Priority | Haute priorité UX Policy l Politique Dev | - [x] Assess our code base.
- [x] Investigate what GDS did.
- [x] Make a decision on how to solve and the effort.
# Story
As a user, I want to know how many SMS messages I sent especially when sending long notifications that can lead to multiple sending of SMS messages.
As a Notify system admin, I want to properly bill clients sending long notifications that result in multiple SMS sending so that I stay within my department's budget.
# What?
When a user sends a notification that is longer than 140 single-byte encoded characters, it is sent as multiple SMS messages in blocks of the previously mentioned limit. Notify does not count these properly at the current time, so there is currently no way to properly bill our clients for their usage.
# Why?
Because we are supposed to bill our clients that get past their SMS limit and Notify could not know if they do when they send long notifications.
# Done When
When we can count SMS messages properly for each long notification that Notify sends.
# Tips
Look into the GDS code base to see if they already addressed this issue and see if we should import their code. | 1.0 | Properly report multiple SMS sent for billing purposes following a long notification - - [x] Assess our code base.
- [x] Investigate what GDS did.
- [x] Make a decision on how to solve and the effort.
# Story
As a user, I want to know how many SMS messages I sent especially when sending long notifications that can lead to multiple sending of SMS messages.
As a Notify system admin, I want to properly bill clients sending long notifications that result in multiple SMS sending so that I stay within my department's budget.
# What?
When a user sends a notification that is longer than 140 single-byte encoded characters, it is sent as multiple SMS messages in blocks of the previously mentioned limit. Notify does not count these properly at the current time, so there is currently no way to properly bill our clients for their usage.
# Why?
Because we are supposed to bill our clients that get past their SMS limit and Notify could not know if they do when they send long notifications.
# Done When
When we can count SMS messages properly for each long notification that Notify sends.
# Tips
Look into the GDS code base to see if they already addressed this issue and see if we should import their code. | priority | properly report multiple sms sent for billing purposes following a long notification assess our code base investigate what gds did make a decision on how to solve and the effort story as a user i want to know how many sms messages i sent especially when sending long notifications that can lead to multiple sending of sms messages as a notify system admin i want to properly bill clients sending long notifications that result in multiple sms sending so that i stay within my department s budget what when a user sends a notification get is long than single byte encoded char this will send multiple sms in blocks of the previously mentioned limit notify does not count these properly at the current time and there could be no way to properly bill our clients with their usage why because we are supposed to bill our clients that get past their sms limit and notify could not know if they do when they send long notifications done when when we can count sms messages properly for each long notification that notify sends tips look into the gds code base to see if they already addressed this issue and see if we should import their code | 1 |
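The segment counting described above can be sketched as follows. This is a hypothetical helper, not Notify's (or GDS's) actual implementation; it simply mirrors the issue's stated limit of 140 single-byte encoded characters per SMS. (Real GSM segmentation uses 160 seven-bit characters for a single SMS and 153 per concatenated part, so a production fix would need the encoding-aware rules.)

```python
import math

def sms_segment_count(message: str) -> int:
    """Hypothetical sketch: count SMS parts when a message is split into
    blocks of 140 single-byte encoded characters, per the issue text."""
    LIMIT = 140  # single-byte encoded characters per SMS, as stated above
    if not message:
        return 0
    return math.ceil(len(message.encode("utf-8")) / LIMIT)
```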
598,518 | 18,246,784,123 | IssuesEvent | 2021-10-01 19:35:08 | tgstation/TerraGov-Marine-Corps | https://api.github.com/repos/tgstation/TerraGov-Marine-Corps | closed | You can fit any grenade in ugl | Bug Priority: High | ### Description
read title
### Test Merges
doesn't matter
### Reproduction Steps
1. get ugl
2. get any grenade, even custom nades
3. put inside ugl
4. why take one-handed gl anymore
### Screenshots

| 1.0 | You can fit any grenade in ugl - ### Description
read title
### Test Merges
doesn't matter
### Reproduction Steps
1. get ugl
2. get any grenade, even custom nades
3. put inside ugl
4. why take one-handed gl anymore
### Screenshots

| priority | you can fit any grenade in ugl description read title test merges doesn t matter reproduction steps get ugl get any grenade even custom nades put inside ugl why take one handed gl anymore screenshots | 1 |
248,947 | 7,947,268,262 | IssuesEvent | 2018-07-11 01:43:14 | OperationCode/operationcode_backend | https://api.github.com/repos/OperationCode/operationcode_backend | closed | Remove verification restriction for users requesting mentors via API | Priority: High | <!-- Please fill out one of the sections below based on the type of issue you're creating -->
# Feature
## Why is this feature being added?
<!-- What problem is it solving? What value does it add? -->
Issue #259 and PR #344 added an endpoint where mentor requests can be sent to the Rails API (which is then forwarding those requests to Airtable).
Currently there is a restriction to only allow verified Id.me users to make these requests, but per @ashtemp and staff, we now want to lift this restriction. Instead of checking Id.me verification, we will simply have a checkbox on the front-end form and take the user's word in good faith that they are a military veteran, active-duty, or milspouse. See [comment](https://github.com/OperationCode/operationcode_frontend/issues/971#issuecomment-398601370) in front-end issue # 971.
## What should your feature do?
Remove verification restriction for users making mentor requests via API. This is currently [being checked](https://github.com/OperationCode/operationcode_backend/blob/5df843a43619738df7900969b59c6edb009966bd/app/controllers/api/v1/airtable/mentorships_controller.rb#L16) in the /airtable/mentorships_controller, and possibly elsewhere.
| 1.0 | Remove verification restriction for users requesting mentors via API - <!-- Please fill out one of the sections below based on the type of issue you're creating -->
# Feature
## Why is this feature being added?
<!-- What problem is it solving? What value does it add? -->
Issue #259 and PR #344 added an endpoint where mentor requests can be sent to the Rails API (which is then forwarding those requests to Airtable).
Currently there is a restriction to only allow verified Id.me users to make these requests, but per @ashtemp and staff, we now want to lift this restriction. Instead of checking Id.me verification, we will simply have a checkbox on the front-end form and take the user's word in good faith that they are a military veteran, active-duty, or milspouse. See [comment](https://github.com/OperationCode/operationcode_frontend/issues/971#issuecomment-398601370) in front-end issue # 971.
## What should your feature do?
Remove verification restriction for users making mentor requests via API. This is currently [being checked](https://github.com/OperationCode/operationcode_backend/blob/5df843a43619738df7900969b59c6edb009966bd/app/controllers/api/v1/airtable/mentorships_controller.rb#L16) in the /airtable/mentorships_controller, and possibly elsewhere.
| priority | remove verification restriction for users requesting mentors via api feature why is this feature being added issue and pr added an endpoint where mentor requests can be sent to the rails api which is then forwarding those requests to airtable currently there is a restriction to only allow verified id me users to make these requests but per ashtemp and staff we now want to lift this restriction instead of checking id me verification we will simply have a checkbox on the front end form and take the user s word in good faith that they are a military veteran active duty or milspouse see in front end issue what should your feature do remove verification restriction for users making mentor requests via api this is currently in the airtable mentorships controller and possibly elsewhere | 1 |
28,734 | 2,711,145,112 | IssuesEvent | 2015-04-09 02:25:18 | cs2103jan2015-t10-2j/main | https://api.github.com/repos/cs2103jan2015-t10-2j/main | opened | make a default view | priority.high type.story | Show the current day,
The deadlines in the coming week,
And a few floating tasks
When the program starts | 1.0 | make a default view - Show the current day,
The deadlines in the coming week,
And a few floating tasks
When the program starts | priority | make a default view show the current day the deadlines in the coming week and a few floating tasks when the program starts | 1 |
28,106 | 2,699,656,428 | IssuesEvent | 2015-04-03 18:47:23 | TrinityCore/TrinityCore | https://api.github.com/repos/TrinityCore/TrinityCore | closed | [434] Phasing system is broken [$20] | bounty Branch-4.3.4 Comp-Core Priority-High | Remember Subv to write here what's broken to fix worgen start zone.
Subv: keep this in mind please, phases without spells, multiple terrain swaps per phase, multiple worldmapswaps with / without spells, conditions for phases
Long talk about this:
http://pastebin.com/twFWuum1
<bountysource-plugin>
---
There is a **[$20 open bounty](https://www.bountysource.com/issues/4737693-434-phasing-system-is-broken-20?utm_campaign=plugin&utm_content=tracker%2F1310&utm_medium=issues&utm_source=github)** on this issue. Add to the bounty at [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F1310&utm_medium=issues&utm_source=github).
</bountysource-plugin> | 1.0 | [434] Phasing system is broken [$20] - Remember Subv to write here what's broken to fix worgen start zone.
Subv: keep this in mind please, phases without spells, multiple terrain swaps per phase, multiple worldmapswaps with / without spells, conditions for phases
Long talk about this:
http://pastebin.com/twFWuum1
<bountysource-plugin>
---
There is a **[$20 open bounty](https://www.bountysource.com/issues/4737693-434-phasing-system-is-broken-20?utm_campaign=plugin&utm_content=tracker%2F1310&utm_medium=issues&utm_source=github)** on this issue. Add to the bounty at [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F1310&utm_medium=issues&utm_source=github).
</bountysource-plugin> | priority | phasing system is broken remember subv to write here what s broken to fix worgen start zone subv keep this in mind please phases without spells multiple terrain swaps per phase multiple worldmapswaps with withouts spells conditions for phases long talk about this there is a on this issue add to the bounty at | 1 |
285,516 | 8,761,620,578 | IssuesEvent | 2018-12-16 19:16:18 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Let users know offscreen messages got sent. | area: compose help wanted in progress priority: high | If users are scrolled up in the feed and send a message, they don't actually see the message, since it's "below" the bottom of the window. We should put something above the compose box that tells users the message was sent and is not onscreen. Discussion here:
https://chat.zulip.org/#narrow/stream/101-design/subject/indication.20that.20message.20sent/near/659668
For implementing this, one can probably borrow a lot of logic from how we display similar messages above the compose box, such as in situations where you sent a message outside your narrow, or you change something with a slash command, etc. | 1.0 | Let users know offscreen messages got sent. - If users are scrolled up in the feed and send a message, they don't actually see the message, since it's "below" the bottom of the window. We should put something above the compose box that tells users the message was sent and is not onscreen. Discussion here:
https://chat.zulip.org/#narrow/stream/101-design/subject/indication.20that.20message.20sent/near/659668
For implementing this, one can probably borrow a lot of logic from how we display similar messages above the compose box, such as in situations where you sent a message outside your narrow, or you change something with a slash command, etc. | priority | let users know offscreen messages got sent if users are scrolled up in the feed and send a message they don t actually see the message since it s below the bottom of the window we should put something above the compose box that tells users the message was sent and is not onscreen discussion here for implementing this one can probably borrow a lot of logic from how we display similar messages above the compose box such as in situations where you sent a message outside your narrow or you change something with a slash command etc | 1 |
508,011 | 14,688,288,399 | IssuesEvent | 2021-01-02 01:39:59 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [v1.14] Bluetooth: controller: Fix uninit conn context after invalid channel map | Stale area: Bluetooth bug priority: high | When a connect indication contains a channel map of all zeros, the adv->conn is left NULL'ed after connect establishment is abborted and next connect attempt leads to a crash.
This was identified via the Sweyntooth test suite - executing the script re. issue 6.14 towards an Oticon target.
https://asset-group.github.io/disclosures/sweyntooth/
Zephyr v1.14 needs to be analysed.
See #27507
| 1.0 | [v1.14] Bluetooth: controller: Fix uninit conn context after invalid channel map - When a connect indication contains a channel map of all zeros, the adv->conn is left NULL'ed after connection establishment is aborted, and the next connect attempt leads to a crash.
This was identified via the Sweyntooth test suite - executing the script re. issue 6.14 towards an Oticon target.
https://asset-group.github.io/disclosures/sweyntooth/
Zephyr v1.14 needs to be analysed.
See #27507
| priority | bluetooth controller fix uninit conn context after invalid channel map when a connect indication contains a channel map of all zeros the adv conn is left null ed after connect establishment is abborted and next connect attempt leads to a crash this was identified via the sweyntooth test suite executing the script re issue towards an oticon target zephyr needs to be analysed see | 1 |
350,848 | 10,509,644,915 | IssuesEvent | 2019-09-27 11:33:05 | wso2/micro-integrator | https://api.github.com/repos/wso2/micro-integrator | closed | [ DSS] Support for Registry Resources. | 1.0.0 DSS Priority/High Severity/Trivial commitment | **Description:**
$subject.
We are not able to define result mapping with xslt, since DS doesn't support file based registry.
```
<result element="Products" rowName="Product" xsltPath="conf:/automation/resources/excel/transform.xslt" defaultNamespace="http://ws.wso2.org/dataservice/samples/excel_sample_service">
<element name="Name" column="Model" xsdType="xs:string" />
<element name="Classification" column="Classification" xsdType="xs:string" />
</result>
```
**Related Issues:**
https://github.com/wso2/micro-integrator/issues/256
https://github.com/wso2/micro-integrator/issues/255
Some test methods in **ExcelDataServiceTestCase** disabled due to this. Enable them when this is fixed. | 1.0 | [ DSS] Support for Registry Resources. - **Description:**
$subject.
We are not able to define result mapping with xslt, since DS doesn't support file based registry.
```
<result element="Products" rowName="Product" xsltPath="conf:/automation/resources/excel/transform.xslt" defaultNamespace="http://ws.wso2.org/dataservice/samples/excel_sample_service">
<element name="Name" column="Model" xsdType="xs:string" />
<element name="Classification" column="Classification" xsdType="xs:string" />
</result>
```
**Related Issues:**
https://github.com/wso2/micro-integrator/issues/256
https://github.com/wso2/micro-integrator/issues/255
Some test methods in **ExcelDataServiceTestCase** disabled due to this. Enable them when this is fixed. | priority | support for registry resources description subject we are not able to define result mapping with xslt since ds doesn t support file based registry result element products rowname product xsltpath conf automation resources excel transform xslt defaultnamespace related issues some test methods in exceldataservicetestcase disabled due to this enable them when this is fixed | 1 |
74,816 | 3,448,831,205 | IssuesEvent | 2015-12-16 10:31:16 | emoncms/MyHomeEnergyPlanner | https://api.github.com/repos/emoncms/MyHomeEnergyPlanner | opened | draught-proofing measures | High priority query | Need to be able to apply particular draught-proofing measures
DECISION: CREATE NEW LIBRARY FOR THIS
AP! CARLOS - WHEN 'CALCULATE BASED ON AIR TIGHTNESS TEST' IS TICKED LIBRARY CAN BE APPLIED TO - AIR PERMEABILITY VALUE | 1.0 | draught-proofing measures - Need to be able to apply particular draught-proofing measures
DECISION: CREATE NEW LIBRARY FOR THIS
AP! CARLOS - WHEN 'CALCULATE BASED ON AIR TIGHTNESS TEST' IS TICKED LIBRARY CAN BE APPLIED TO - AIR PERMEABILITY VALUE | priority | draught proofing measures need to be able to apply particular draught proofing measures decision create new library for this ap carlos when calculate based on air tightness test is ticked library can be applied to air permeability value | 1 |
261,007 | 8,222,587,403 | IssuesEvent | 2018-09-06 07:59:52 | AndrewRedican/react-json-editor-ajrm | https://api.github.com/repos/AndrewRedican/react-json-editor-ajrm | closed | Error line incorrectly reported | bug high priority | 1. version 2.5.3
2. OS: ubuntu
I suppose it wrongly reports line of error. e.g.

You can see above it reports error is on line 17, whereas in reality the error is on line 13, where I typed symbol "r". (If I remove that "r" from line 13 and its corresponding comma then it works fine). | 1.0 | Error line incorrectly reported - 1. version 2.5.3
2. OS: ubuntu
I suppose it wrongly reports line of error. e.g.

You can see above it reports error is on line 17, whereas in reality the error is on line 13, where I typed symbol "r". (If I remove that "r" from line 13 and its corresponding comma then it works fine). | priority | error line incorrectly reported version os ubuntu i suppose it wrongly reports line of error e g you can see above it reports error is on line whereas in reality the error is on line where i typed symbol r if i remove that r from line and its corresponding comma then it works fine | 1 |
473,376 | 13,641,493,531 | IssuesEvent | 2020-09-25 14:14:29 | workcraft/workcraft | https://api.github.com/repos/workcraft/workcraft | opened | Fix binate consensus command | bug priority:high status:confirmed tag:model:circuit | The following issues need to be fixed:
- [ ] Prevent bad symbols in the name of Reach assertion temporary files
- [ ] Handle vacuous case (circuit without binate functions)
- [ ] Report in which variable the function has consensus
| 1.0 | Fix binate consensus command - The following issues need to be fixed:
- [ ] Prevent bad symbols in the name of Reach assertion temporary files
- [ ] Handle vacuous case (circuit without binate functions)
- [ ] Report in which variable the function has consensus
| priority | fix binate consensus command the following issues need to be fixed prevent bad symbols in the name of reach assertion temporary files handle vacuous case circuit without binate functions report in which variable the function has consensus | 1 |
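For context on the third checklist item above: a function has a nontrivial consensus only in variables it is binate in, i.e. variables where it is neither positive nor negative unate. A small illustrative sketch of that check — a hypothetical helper, not Workcraft's implementation:

```python
from itertools import product

def is_binate(f, n, i):
    """f is binate in variable i when it is neither positive nor negative
    unate there, i.e. both cofactors genuinely matter (sketch only)."""
    pos = lambda *xs: f(*xs[:i], True, *xs[i:])   # cofactor f | x_i = 1
    neg = lambda *xs: f(*xs[:i], False, *xs[i:])  # cofactor f | x_i = 0
    pts = list(product([False, True], repeat=n - 1))
    pos_unate = all(neg(*p) <= pos(*p) for p in pts)
    neg_unate = all(pos(*p) <= neg(*p) for p in pts)
    return not (pos_unate or neg_unate)
```

For example, XOR is binate in both variables, while AND and OR are unate in all of theirs; a circuit with no binate functions is the vacuous case mentioned above.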
528,524 | 15,368,721,317 | IssuesEvent | 2021-03-02 06:07:57 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | The date format is not correct in the French format | [Priority: HIGH] bug | Ref: https://secure.helpscout.net/conversation/1436471808/182496?folderId=2770545
The % days ago is not converting into the correct French format. In the French, it should be in the format "Il y a % jours". When the user is adding it in the "Text Date Format" section (https://prnt.sc/1086nuu), he is getting the output like this:
https://prnt.sc/1086p4x
The issue is also recreated on the local.
Steps to recreate:
1. In the WordPress general settings, Choose the default language as French.
Screenshot: https://prnt.sc/1086r5v
2. Then goto AMP > Design > Date > Text Date Format > Enter "Il y a % jours" > Save Changes.
Screenshot: https://prnt.sc/1086u74
| 1.0 | The date format is not correct in the French format - Ref: https://secure.helpscout.net/conversation/1436471808/182496?folderId=2770545
The % days ago is not converting into the correct French format. In the French, it should be in the format "Il y a % jours". When the user is adding it in the "Text Date Format" section (https://prnt.sc/1086nuu), he is getting the output like this:
https://prnt.sc/1086p4x
The issue is also recreated on the local.
Steps to recreate:
1. In the WordPress general settings, Choose the default language as French.
Screenshot: https://prnt.sc/1086r5v
2. Then goto AMP > Design > Date > Text Date Format > Enter "Il y a % jours" > Save Changes.
Screenshot: https://prnt.sc/1086u74
| priority | the date format is not correct in the french format ref the days ago is not converting into the correct french format in the french it should be in the format il y a jours when the user is adding it in the text date format section he is getting the output like this the issue is also recreated on the local steps to recreate in the wordpress general settings choose the default language as french screenshot then goto amp design date text date format enter il y a jours save changes screenshot | 1 |
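The expected substitution can be sketched as a hypothetical helper (the plugin's actual rendering code is not shown in the issue): the `%` placeholder in the configured "Text Date Format" stands for the day count, so the French format should render as "Il y a 3 jours" rather than the broken output in the screenshots.

```python
def render_relative_date(fmt: str, days: int) -> str:
    # '%' is the day-count placeholder in the plugin's "Text Date Format";
    # for French, "Il y a % jours" should become e.g. "Il y a 3 jours".
    return fmt.replace("%", str(days), 1)
```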
459,431 | 13,193,046,224 | IssuesEvent | 2020-08-13 14:39:52 | mypyc/mypyc | https://api.github.com/repos/mypyc/mypyc | closed | Reference counting issue with GetElementPtr | bug priority-0-high | Consider this code:
```py
from typing import List
def f() -> int:
a: List[int] = []
return len(a)
```
It gets compiled into this (incorrect) code:
```
def f():
r0, a :: list
r1 :: ptr
r2 :: int64
r3 :: short_int
r4 :: int
L0:
r0 = []
if is_error(r0) goto L2 (error at f:4) else goto L1
L1:
a = r0
r1 = get_element_ptr a ob_size :: PyVarObject
dec_ref a # <<---- Free here
r2 = load_mem r1 :: int64* # <<---- Use after free
r3 = r2 << 1
return r3
L2:
r4 = <error> :: int
return r4
```
This results in segfaults in some cases, but simple examples may work correctly most of the time. | 1.0 | Reference counting issue with GetElementPtr - Consider this code:
```py
from typing import List
def f() -> int:
a: List[int] = []
return len(a)
```
It gets compiled into this (incorrect) code:
```
def f():
r0, a :: list
r1 :: ptr
r2 :: int64
r3 :: short_int
r4 :: int
L0:
r0 = []
if is_error(r0) goto L2 (error at f:4) else goto L1
L1:
a = r0
r1 = get_element_ptr a ob_size :: PyVarObject
dec_ref a # <<---- Free here
r2 = load_mem r1 :: int64* # <<---- Use after free
r3 = r2 << 1
return r3
L2:
r4 = <error> :: int
return r4
```
This results in segfaults in some cases, but simple examples may work correctly most of the time. | priority | reference counting issue with getelementptr consider this code py from typing import list def f int a list return len a it gets compiled into this incorrect code def f a list ptr short int int if is error goto error at f else goto a get element ptr a ob size pyvarobject dec ref a free here load mem use after free return int return this results in segfaults in some cases but simple examples may work correctly most of the time | 1 |
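As a reference point, the interpreter semantics the fix has to match can be pinned down with the issue's own snippet — the expected result is simply 0. (The presumed fix, not stated explicitly above, is to keep the list reference alive until after the `load_mem` of `ob_size`, i.e. emit `dec_ref a` only after the size load.)

```python
from typing import List

def f() -> int:
    # Interpreted CPython keeps `a` alive until len() has read ob_size,
    # so this reliably returns 0; compiled code must preserve that.
    a: List[int] = []
    return len(a)
```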
225,285 | 7,480,354,410 | IssuesEvent | 2018-04-04 17:08:22 | knowmetools/km-api | https://api.github.com/repos/knowmetools/km-api | closed | How to handle data migration | Priority: High Status: Review Needed Type: Discussion | As an addition to #15, there are other types of data that we need to worry about migrating.
### Old Row Types
With the removal of some existing row types in #66, we run into the issue of migrating over rows of those types.
#### Paged Rows
The paged row should be pretty simple to move over as a text row since the rows contain the same information.
#### Grouped Rows
Grouped rows are more complicated since each item has an associated `category`. Our best option is probably creating a separate text row for each category in the grouped row. The row would be named `<grouped row name> - <category>` and would contain all the items in the category. | 1.0 | How to handle data migration - As an addition to #15, there are other types of data that we need to worry about migrating.
### Old Row Types
With the removal of some existing row types in #66, we run into the issue of migrating over rows of those types.
#### Paged Rows
The paged row should be pretty simple to move over as a text row since the rows contain the same information.
#### Grouped Rows
Grouped rows are more complicated since each item has an associated `category`. Our best option is probably creating a separate text row for each category in the grouped row. The row would be named `<grouped row name> - <category>` and would contain all the items in the category. | priority | how to handle data migration as an addition to there are other types of data that we need to worry about migrating old row types with the removal of some existing row types in we run into the issue of migrating over rows of those types paged rows the paged row should be pretty simple to move over as a text row since the rows contain the same information grouped rows grouped rows are more complicated since each item has an associated category our best option is probably creating a separate text row for each category in the grouped row the row would be named and would contain all the items in the category | 1 |
241,036 | 7,808,321,331 | IssuesEvent | 2018-06-11 19:47:24 | otrv4/libotr-ng | https://api.github.com/repos/otrv4/libotr-ng | closed | Generalize the RSig function | OTRv4-basics discuss high-priority | ## Why
As we are currently implementing the revision number 2 of the OTRv4 specification, we need to include a consistent way of using the RSig function.
## Reference
Please, refer to the "Ring Signature Authentication" section of the OTRv4 spec and issue 99 of it.
## Tasks
- [x] Check how RSig and RVerf are working on the auth.c file.
- [x] Check if those functions can be generalized and do so if needed.
- [x] Check if this can be part of otrv4 toolkit.
- [x] Generalize it for the Prekey Server as well.
## Open questions
- Is this part of the otrv4 toolkit? | 1.0 | Generalize the RSig function - ## Why
As we are currently implementing the revision number 2 of the OTRv4 specification, we need to include a consistent way of using the RSig function.
## Reference
Please, refer to the "Ring Signature Authentication" section of the OTRv4 spec and issue 99 of it.
## Tasks
- [x] Check how RSig and RVerf are working on the auth.c file.
- [x] Check if those functions can be generalized and do so if needed.
- [x] Check if this can be part of otrv4 toolkit.
- [x] Generalize it for the Prekey Server as well.
## Open questions
- Is this part of the otrv4 toolkit? | priority | generalize the rsig function why as we are currently implementing the revision number of the specification we need to include a consistent way of using the rsig function reference please refer to the ring signature authentication section of the spec and issue of it tasks check how rsig and rverf are working on the auth c file check if those functions can be generalized and do so if needed check if this can be part of toolkit generalize it for the prekey server as well open questions is this part of the toolkit | 1 |
622,300 | 19,620,555,489 | IssuesEvent | 2022-01-07 05:38:44 | woocommerce/google-listings-and-ads | https://api.github.com/repos/woocommerce/google-listings-and-ads | opened | Uptick in API errors (for products.delete and products.insert) | type: bug priority: high | ### Describe the bug:
We've started seeing an uptick in errors since the end of 2021 in products.delete and products.insert.
As of this week we're now also seeing errors with productstatuses.get and accounts.claimwebsite
The Google team have done some digging and believe it to be a _timing_ or retry issue - it looks to them like duplicate requests are being fired off.
### Expected behavior:
Reduced errors / no duplicate requests.
## Additional Information
* Error rates can be monitored from within our MC account UI
* A list of MC IDs and a slide deck outlining some screenshots is available within our call notes (Search for "Uptick in API errors")
| 1.0 | Uptick in API errors (for products.delete and products.insert) - ### Describe the bug:
We've started seeing an uptick in errors since the end of 2021 in products.delete and products.insert.
As of this week we're now also seeing errors with productstatuses.get and accounts.claimwebsite
The Google team have done some digging and believe it to be a _timing_ or retry issue - it looks to them like duplicate requests are being fired off.
### Expected behavior:
Reduced errors / no duplicate requests.
## Additional Information
* Error rates can be monitored from within our MC account UI
* A list of MC IDs and a slide deck outlining some screenshots is available within our call notes (Search for "Uptick in API errors")
| priority | uptick in api errors for products delete and products insert describe the bug we ve started seeing an uptick in errors since the end of in products delete and products insert as of this week we re now also seeing errors with productstatuses get and accounts claimwebsite the google team have done some digging and believe it to be a timing or retry issue it looks to them like duplicate requests are being fired off expected behavior reduced errors no duplicate requests additional information error rates can be monitored from within our mc account ui a list of mc ids and a slide deck outlining some screenshots is available within our call notes search for uptick in api errors | 1 |
464,329 | 13,310,808,600 | IssuesEvent | 2020-08-26 07:14:21 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | Update Dynamic Plans Action templates | Priority: High | The current expressions for various templates are wrong. Update the plan templates used to generate Plans as per below
IRS
```json
"action": [
{
"identifier": "b996adc3-574d-5bc0-903d-d00d38252a9f",
"prefix": 1,
"title": "Spray Structures",
"description": "Visit each structure in the operational area and attempt to spray",
"code": "IRS",
"timingPeriod": {
"start": "2020-08-17",
"end": "2020-08-24"
},
"reason": "Routine",
"goalId": "IRS",
"subjectCodableConcept": {
"text": "Location"
},
"trigger": [
{
"type": "named-event",
"name": "plan-activation"
},
{
"type": "named-event",
"name": "event-submission",
"expression": {
"description": "Trigger when a Register_Structure event is submitted",
"expression": "questionnaire = 'Register_Structure'"
}
}
],
"condition": [
{
"kind": "applicability",
"expression": {
"description": "Structure is residential or type does not exist",
"expression": "$this.is(FHIR.QuestionnaireResponse) or $this.type.where(id='locationType').exists().not() or $this.type.where(id='locationType').text = 'Residential Structure'"
}
},
{
"kind": "applicability",
"expression": {
"description": "Apply to residential structures in Register_Structure questionnaires",
"expression": "$this.is(FHIR.Location) or (questionnaire = 'Register_Structure' and $this.item.where(linkId='structureType').answer.value ='Residential Structure')"
}
}
],
"definitionUri": "spray_form.json",
"type": "create"
}
]
```
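Read together, the two IRS applicability conditions split evaluation by subject type: the first short-circuits for QuestionnaireResponse inputs and checks the Location type, while the second short-circuits for Location inputs and checks the questionnaire answer. A rough Python sketch of that gating logic, with plain dicts and flattened field names standing in for FHIR resources (this is not the FHIRPath engine used in production):

```python
def irs_task_applies(resource):
    """Mirror the two IRS applicability conditions: a spray task applies to
    residential structures (or structures with no recorded type), whether the
    input is a Location or a Register_Structure questionnaire response."""
    if resource["resourceType"] == "Location":
        loc_type = resource.get("locationType")  # None: type not recorded
        return loc_type is None or loc_type == "Residential Structure"
    if resource["resourceType"] == "QuestionnaireResponse":
        return (resource.get("questionnaire") == "Register_Structure"
                and resource.get("structureType") == "Residential Structure")
    return False
```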
MDA
```json
"action": [
{
"identifier": "79e9ad73-45a0-52b9-82e9-50d21c29bb78",
"prefix": 1,
"title": "Family Registration",
"description": "Register all families & family members in all residential structures enumerated (100%) within the operational area",
"code": "RACD Register Family",
"timingPeriod": {
"start": "2020-08-18",
"end": "2020-08-25"
},
"reason": "Routine",
"goalId": "RACD_register_families",
"subjectCodableConcept": {
"text": "Location"
},
"trigger": [
{
"type": "named-event",
"name": "plan-activation"
},
{
"type": "named-event",
"name": "event-submission",
"expression": {
"description": "Trigger when a Register_Structure event is submitted",
"expression": "questionnaire = 'Register_Structure'"
}
}
],
"condition": [
{
"kind": "applicability",
"subjectCodableConcept": {
"text": "Family"
},
"expression": {
"description": "Structure is residential or type does not exist and family does not exist",
"expression": "$this.is(FHIR.QuestionnaireResponse) or (($this.type.where(id='locationType').exists().not() or $this.type.where(id='locationType').text = 'Residential Structure') and $this.contained.exists().not())"
}
},
{
"kind": "applicability",
"expression": {
"description": "Apply to residential structures in Register_Structure questionnaires",
"expression": "$this.is(FHIR.Location) or (questionnaire = 'Register_Structure' and $this.item.where(linkId='structureType').answer.value ='Residential Structure')"
}
}
],
"definitionUri": "family_register.json",
"type": "create"
},
{
"identifier": "69dd5115-a8fc-5b06-b3a1-75b7e303a7f9",
"prefix": 2,
"title": "Distribute Drugs",
"description": "Visit all residential structures (100%) and dispense prophylaxis to each registered person",
"code": "MDA Dispense",
"timingPeriod": {
"start": "2020-08-18",
"end": "2020-08-25"
},
"reason": "Routine",
"goalId": "MDA_Dispense",
"subjectCodableConcept": {
"text": "Person"
},
"trigger": [
{
"type": "named-event",
"name": "plan-activation"
},
{
"type": "named-event",
"name": "event-submission",
"expression": {
"description": "Trigger when a Family Registration or Family Member Registration event is submitted",
"expression": "questionnaire = 'Family_Registration' or questionnaire = 'Family_Member_Registration'"
}
}
],
"condition": [
{
"kind": "applicability",
"expression": {
"description": "Person or person associated with questionaire response is older than 5 years and younger than 15 years",
"expression": "($this.is(FHIR.Patient) and $this.birthDate <= today() - 5 'years' and $this.birthDate > today() - 15 'years') or ($this.contained.where(Patient.birthDate <= today() - 5 'years' and $this.birthDate > today() - 15 'years').exists())"
}
}
],
"definitionUri": "mda_dispense.json",
"type": "create"
},
{
"identifier": "f4d7243b-4b6d-532c-89dc-6d593ee46988",
"prefix": 3,
"title": "MDA Adherence",
"description": "Visit all residential structures (100%) and confirm adherence of each registered person",
"code": "MDA Adherence",
"timingPeriod": {
"start": "2020-08-18",
"end": "2020-08-25"
},
"reason": "Routine",
"goalId": "MDA_Adherence",
"subjectCodableConcept": {
"text": "Person"
},
"trigger": [
{
"type": "named-event",
"name": "event-submission",
"expression": {
"description": "Trigger when a MDA Dispense event is submitted",
"expression": "questionnaire = 'mda_dispense'"
}
}
],
"condition": [
{
"kind": "applicability",
"expression": {
"description": "The person fully received the dispense activity",
"expression": "$this.item.where(linkId='business_status').answer.value = 'Fully Received'"
}
}
],
"definitionUri": "mda_adherence.json",
"type": "create"
}
]
```
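The MDA Dispense condition encodes an age window: an eligible person was born at least 5 and less than 15 years before today. A sketch of that window check in date arithmetic; the function and parameter names are illustrative, not part of the template:

```python
from datetime import date

def years_ago(n, on):
    """Date exactly n years before `on` (Feb 29 clamps to Feb 28)."""
    try:
        return on.replace(year=on.year - n)
    except ValueError:
        return on.replace(year=on.year - n, day=28)

def in_mda_age_window(birth_date, on):
    """birthDate <= today() - 5 'years' and birthDate > today() - 15 'years'."""
    return years_ago(15, on) < birth_date <= years_ago(5, on)
```

Note the boundaries follow the FHIRPath expression: a person who turns 5 today is included, and a person who turns 15 today is excluded.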
FI
```json
"action": [
{
"identifier": "5bc2b863-9127-5b5b-a7ee-7a52775576e7",
"prefix": 1,
"title": "Family Registration",
"description": "Register all families & family members in all residential structures enumerated (100%) within the operational area",
"code": "RACD Register Family",
"timingPeriod": {
"start": "2020-08-13",
"end": "2020-08-20"
},
"reason": "Routine",
"goalId": "RACD_register_families",
"subjectCodableConcept": {
"text": "Location"
},
"trigger": [
{
"type": "named-event",
"name": "plan-activation"
},
{
"type": "named-event",
"name": "event-submission",
"expression": {
"description": "Trigger when a Register_Structure event is submitted",
"expression": "questionnaire = 'Register_Structure'"
}
}
],
"condition": [
{
"kind": "applicability",
"subjectCodableConcept": {
"text": "Family"
},
"expression": {
"description": "Structure is residential or type does not exist and Family does not exist",
"expression": "$this.is(FHIR.QuestionnaireResponse) or (($this.type.where(id='locationType').exists().not() or $this.type.where(id='locationType').text = 'Residential Structure') and $this.contained.exists().not())"
}
},
{
"kind": "applicability",
"expression": {
"description": "Apply to residential structures in Register_Structure questionnaires",
"expression": "$this.is(FHIR.Location) or (questionnaire = 'Register_Structure' and $this.item.where(linkId='structureType').answer.value ='Residential Structure')"
}
}
],
"definitionUri": "family_register.json",
"type": "create"
},
{
"identifier": "28e7da08-aad7-5c4a-87ad-e955e54bb324",
"prefix": 2,
"title": "Blood Screening",
"description": "Visit all residential structures (100%) within a 1 km radius of a confirmed index case and test each registered person",
"code": "Blood Screening",
"timingPeriod": {
"start": "2020-08-13",
"end": "2020-08-20"
},
"reason": "Investigation",
"goalId": "RACD_Blood_Screening",
"subjectCodableConcept": {
"text": "Person"
},
"trigger": [
{
"type": "named-event",
"name": "plan-activation"
},
{
"type": "named-event",
"name": "event-submission",
"expression": {
"description": "Trigger when a Family Registration or Family Member Registration event is submitted",
"expression": "questionnaire = 'Family_Registration' or questionnaire = 'Family_Member_Registration'"
}
}
],
"condition": [
{
"kind": "applicability",
"expression": {
"description": "Person is older than 5 years or person associated with questionnaire response if older than 5 years",
"expression": "($this.is(FHIR.Patient) and $this.birthDate <= today() - 5 'years') or ($this.contained.where(Patient.birthDate <= today() - 5 'years').exists())"
}
}
],
"definitionUri": "blood_screening.json",
"type": "create"
},
{
"identifier": "2796fba9-1a0a-54a0-8bc4-31f654094614",
"prefix": 3,
"title": "Bednet Distribution",
"description": "Visit 100% of residential structures in the operational area and provide nets",
"code": "Bednet Distribution",
"timingPeriod": {
"start": "2020-08-13",
"end": "2020-08-20"
},
"reason": "Routine",
"goalId": "RACD_bednet_distribution",
"subjectCodableConcept": {
"text": "Location"
},
"trigger": [
{
"type": "named-event",
"name": "plan-activation"
},
{
"type": "named-event",
"name": "event-submission",
"expression": {
"description": "Trigger when a Family Registration event is submitted",
"expression": "questionnaire = 'Family_Registration'"
}
}
],
"condition": [
{
"kind": "applicability",
"subjectCodableConcept": {
"text": "Family"
},
"expression": {
"description": "Structure is residential or type does not exist and Family exists",
"expression": "$this.is(FHIR.QuestionnaireResponse) or (($this.type.where(id='locationType').exists().not() or $this.type.where(id='locationType').text = 'Residential Structure') and $this.contained.exists())"
}
},
{
"kind": "applicability",
"expression": {
"description": "Register structure event submitted for a residential structure",
"expression": "$this.is(FHIR.Location) or (questionnaire = 'Register_Structure' and $this.item.where(linkId='structureType').answer.value ='Residential Structure')"
}
}
],
"definitionUri": "bednet_distribution.json",
"type": "create"
},
{
"identifier": "14e7ba9b-76d0-504a-b098-effe6c146f95",
"prefix": 4,
"title": "Larval Dipping",
"description": "Perform a minimum of three larval dipping activities in the operational area",
"code": "Larval Dipping",
"timingPeriod": {
"start": "2020-08-13",
"end": "2020-08-20"
},
"reason": "Investigation",
"goalId": "Larval_Dipping",
"subjectCodableConcept": {
"text": "Location"
},
"trigger": [
{
"type": "named-event",
"name": "plan-activation"
},
{
"type": "named-event",
"name": "event-submission",
"expression": {
"description": "Trigger when a Register_Structure event is submitted",
"expression": "questionnaire = 'Register_Structure'"
}
}
],
"condition": [
{
"kind": "applicability",
"expression": {
"description": "Structure is a larval breeding site",
"expression": "$this.is(FHIR.QuestionnaireResponse) or $this.type.where(id='locationType').text = 'Larval Breeding Site'"
}
},
{
"kind": "applicability",
"expression": {
"description": "Apply to larval breeding sites in Register_Structure questionnaires",
"expression": "$this.is(FHIR.Location) or (questionnaire = 'Register_Structure' and $this.item.where(linkId='structureType').answer.value ='Larval Breeding Site')"
}
}
],
"definitionUri": "larval_dipping_form.json",
"type": "create"
},
{
"identifier": "183a1a40-e517-5e0b-ade1-c4a540056706",
"prefix": 5,
"title": "Mosquito Collection",
"description": "Set a minimum of three mosquito collection traps and complete the mosquito collection process",
"code": "Mosquito Collection",
"timingPeriod": {
"start": "2020-08-13",
"end": "2020-08-20"
},
"reason": "Investigation",
"goalId": "Mosquito_Collection",
"subjectCodableConcept": {
"text": "Location"
},
"trigger": [
{
"type": "named-event",
"name": "plan-activation"
},
{
"type": "named-event",
"name": "event-submission",
"expression": {
"description": "Trigger when a Register_Structure event is submitted",
"expression": "questionnaire = 'Register_Structure'"
}
}
],
"condition": [
{
"kind": "applicability",
"expression": {
"description": "Structure is a mosquito collection point",
"expression": "$this.is(FHIR.QuestionnaireResponse) or $this.type.where(id='locationType').text = 'Mosquito Collection Point'"
}
},
{
"kind": "applicability",
"expression": {
"description": "Apply to mosquito collection point in Register_Structure questionnaires",
"expression": "$this.is(FHIR.Location) or (questionnaire = 'Register_Structure' and $this.item.where(linkId='structureType').answer.value ='Mosquito Collection Point')"
}
}
],
"definitionUri": "mosquito_collection_form.json",
"type": "create"
},
{
"identifier": "302b00cc-7817-5663-b7a5-fb053fecc698",
"prefix": 6,
"title": "Behaviour Change Communication",
"description": "Conduct BCC activity",
"code": "BCC",
"timingPeriod": {
"start": "2020-08-13",
"end": "2020-08-20"
},
"reason": "Routine",
"goalId": "BCC_Focus",
"subjectCodableConcept": {
"text": "Jurisdiction"
},
"trigger": [
{
"type": "named-event",
"name": "plan-activation"
}
],
"condition": [
{
"kind": "applicability",
"expression": {
"description": "Jurisdiction type location",
"expression": "Location.physicalType.text = 'jdn'"
}
}
],
"definitionUri": "behaviour_change_communication.json",
"type": "create"
}
]
```
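Across the FI actions, the event-submission triggers form a dispatch table from submitted questionnaire to the actions whose conditions should be re-evaluated. A sketch of that routing, with action codes and questionnaire names taken from the template above (the table itself is an illustration, not a data structure in the codebase):

```python
# questionnaire name -> FI action codes whose event-submission trigger fires
FI_EVENT_TRIGGERS = {
    "Register_Structure": ["RACD Register Family", "Larval Dipping",
                           "Mosquito Collection"],
    "Family_Registration": ["Blood Screening", "Bednet Distribution"],
    "Family_Member_Registration": ["Blood Screening"],
}

def actions_for_event(questionnaire):
    """Actions to re-evaluate when this questionnaire event is submitted;
    BCC is absent because it only has a plan-activation trigger."""
    return FI_EVENT_TRIGGERS.get(questionnaire, [])
```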
301,186 | 9,217,140,793 | IssuesEvent | 2019-03-11 10:01:19 | tantivy-search/tantivy | https://api.github.com/repos/tantivy-search/tantivy | closed | UnorderedTermOrdinal system bugs when the term hashmap gets resized. | bug high priority | (found by @jannickj)
**Describe the bug**
When indexing a large number of facets, one gets a panic upon the serialization of the segment.
The error message says tantivy could not find a term id associated with the given unorderedtermid.
**Which version of tantivy are you using?**
0.8.*
| 1.0 | UnorderedTermOrdinal system bugs when the term hashmap gets resized. - (found by @jannickj)
**Describe the bug**
When indexing a large number of facets, one gets a panic upon the serialization of the segment.
The error message says tantivy could not find a term id associated with the given unorderedtermid.
**Which version of tantivy are you using?**
0.8.*
| priority | unorderedtermordinal system bugs when the term hashmap gets resized found by jannickj describe the bug when indexing a large number of facets one gets a panic upon the serialization of the segment the error message says tantivy could not find a term id associated with the given unorderedtermid which version of tantivy are you using | 1 |
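The record above describes unordered term ids becoming invalid after the term hashmap resizes. A minimal Python sketch, purely illustrative and not tantivy's actual Rust implementation, of the invariant a fix must preserve: ids assigned from an append-only table stay valid no matter how often the underlying map grows.

```python
# Hypothetical sketch (not tantivy's code): an unordered term id must survive
# hash-table resizes. Deriving the id from a bucket position breaks on resize;
# handing out ids from an append-only list keeps them stable.

class TermDict:
    def __init__(self):
        self._ids = {}      # term -> unordered term id
        self._terms = []    # unordered term id -> term (append-only)

    def get_or_create(self, term: bytes) -> int:
        """Return the stable unordered id for `term`, assigning one if new."""
        if term not in self._ids:
            self._ids[term] = len(self._terms)
            self._terms.append(term)
        return self._ids[term]

    def term(self, unordered_id: int) -> bytes:
        # Stays correct no matter how many times the dict has grown.
        return self._terms[unordered_id]
```

Inserting many terms forces several internal resizes of the Python dict, yet every previously returned id still resolves to its term.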
440,995 | 12,707,014,570 | IssuesEvent | 2020-06-23 08:13:04 | luna/luna | https://api.github.com/repos/luna/luna | closed | System Library Management | Category: External Category: Tooling Change: Non-Breaking Difficulty: Core Contributor Priority: High Type: Enhancement | ### Summary
Currently when installing Luna Packages, either as part of Luna Studio or via Luna Build in the future, there is no way for Luna to manage system dependencies that these packages might depend on. As a result, the `luna-package` library should provide API to automatically install system libraries locally when required.
Current work towards implementing this task can be found on the [package-deps](https://github.com/luna/luna/tree/package-deps) branch.
### Value
Currently, users of Luna's libraries find themselves needing to install significant numbers of system-dependencies using their system package manager in order to have the Luna libraries work. This isn't a brilliant state of affairs, and so adding this functionality would both:
- Allow Luna Studio to ensure that all system dependencies are installed for the bundled libraries.
- Allow future uses of `luna install ...` to have any system dependencies installed automatically.
### Specification
- [ ] Uses [Nix](https://github.com/NixOS/nix) to install and manage system libraries on *nix and MacOS (`Luna.Package.Dependency.System.Unix`).
- [ ] Has a design for a solution for system library management in the source. (`Luna.Package.Dependency.System.Windows`).
- [ ] Provides a _transparent_ API to users based on [hnix](https://github.com/haskell-nix/hnix) or calling out to the Nix executable.
- [ ] This API takes just a list of package names for the _system dependencies_.
- [x] The package configuration contains a list of system _direct_ dependencies.
- [ ] These system dependencies can be gathered from package configuration `.luna-package/config.yaml` (e.g. `installPackageDeps :: MonadIO m => Local.Config -> m Bool`). This machinery can later be used by Luna Build to install packages (`luna install ...`).
- [ ] Provides a method for bundling nix for use by Luna (for both Luna and Luna Studio).
- [ ] Nix can be installed without elevated privileges.
- [ ] The use of Nix is hidden from the users as much as possible.
- [ ] Provides libraries only in a sandbox.
- [ ] New libraries should be available without process restart.
- [ ] Should handle the transitive closure of system dependencies when given a list (auto via nix).
- [ ] Should handle the transitive closure of system dependencies when given a project.
### Acceptance Criteria & Test Cases
- [ ] Nix can be successfully installed, on-demand, without the need for elevated privileges.
- [ ] System libraries can be read from a package's configuration and installed.
- [ ] The API supports installing a list of provided packages.
- [ ] This functionality is exercised in the HSpec test suite.
- [ ] New libraries are available without restarting the luna process. | 1.0 | System Library Management - ### Summary
Currently when installing Luna Packages, either as part of Luna Studio or via Luna Build in the future, there is no way for Luna to manage system dependencies that these packages might depend on. As a result, the `luna-package` library should provide API to automatically install system libraries locally when required.
Current work towards implementing this task can be found on the [package-deps](https://github.com/luna/luna/tree/package-deps) branch.
### Value
Currently, users of Luna's libraries find themselves needing to install significant numbers of system-dependencies using their system package manager in order to have the Luna libraries work. This isn't a brilliant state of affairs, and so adding this functionality would both:
- Allow Luna Studio to ensure that all system dependencies are installed for the bundled libraries.
- Allow future uses of `luna install ...` to have any system dependencies installed automatically.
### Specification
- [ ] Uses [Nix](https://github.com/NixOS/nix) to install and manage system libraries on *nix and MacOS (`Luna.Package.Dependency.System.Unix`).
- [ ] Has a design for a solution for system library management in the source. (`Luna.Package.Dependency.System.Windows`).
- [ ] Provides a _transparent_ API to users based on [hnix](https://github.com/haskell-nix/hnix) or calling out to the Nix executable.
- [ ] This API takes just a list of package names for the _system dependencies_.
- [x] The package configuration contains a list of system _direct_ dependencies.
- [ ] These system dependencies can be gathered from package configuration `.luna-package/config.yaml` (e.g. `installPackageDeps :: MonadIO m => Local.Config -> m Bool`). This machinery can later be used by Luna Build to install packages (`luna install ...`).
- [ ] Provides a method for bundling nix for use by Luna (for both Luna and Luna Studio).
- [ ] Nix can be installed without elevated privileges.
- [ ] The use of Nix is hidden from the users as much as possible.
- [ ] Provides libraries only in a sandbox.
- [ ] New libraries should be available without process restart.
- [ ] Should handle the transitive closure of system dependencies when given a list (auto via nix).
- [ ] Should handle the transitive closure of system dependencies when given a project.
### Acceptance Criteria & Test Cases
- [ ] Nix can be successfully installed, on-demand, without the need for elevated privileges.
- [ ] System libraries can be read from a package's configuration and installed.
- [ ] The API supports installing a list of provided packages.
- [ ] This functionality is exercised in the HSpec test suite.
- [ ] New libraries are available without restarting the luna process. | priority | system library management summary currently when installing luna packages either as part of luna studio or via luna build in the future there is no way for luna to manage system dependencies that these packages might depend on as a result the luna package library should provide api to automatically install system libraries locally when required current work towards implementing this task can be found on the branch value currently users of luna s libraries find themselves needing to install significant numbers of system dependencies using their system package manager in order to have the luna libraries work this isn t a brilliant state of affairs and so adding this functionality would both allow luna studio to ensure that all system dependencies are installed for the bundled libraries allow future uses of luna install to have any system dependencies installed automatically specification uses to install and manage system libraries on nix and macos luna package dependency system unix has a design for a solution for system library management in the source luna package dependency system windows provides a transparent api to users based on or calling out to the nix executable this api takes just a list of package names for the system dependencies the package configuration contains a list of system direct dependencies these system dependencies can be gathered from package configuration luna package config yaml e g installpackagedeps monadio m local config m bool this machinery can later be used by luna build to install packages luna install provides a method for bundling nix for use by luna for both luna and luna studio nix can be installed without elevated privileges the use of nix is hidden from the users as much as possible provides libraries only in a sandbox new libraries should be available without process restart should handle the transitive closure of system dependencies when given 
a list auto via nix should handle the transitive closure of system dependencies when given a project acceptance criteria test cases nix can be successfully installed on demand without the need for elevated privileges system libraries can be read from a package s configuration and installed the api supports installing a list of provided packages this functionality is exercised in the hspec test suite new libraries are available without restarting the luna process | 1 |
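The specification above twice calls for handling the transitive closure of system dependencies. A hedged sketch of that closure step in Python; the dependency-graph shape is an illustrative assumption, and the real `.luna-package/config.yaml` schema and Nix integration may differ.

```python
# Hedged sketch of the "transitive closure of system dependencies" requirement.
# `dep_graph` maps a system package to its direct system dependencies; the
# shape is an assumption for illustration only.

def transitive_system_deps(direct_deps, dep_graph):
    """Return every system package reachable from the direct dependencies."""
    seen = set()
    stack = list(direct_deps)
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        stack.extend(dep_graph.get(pkg, []))
    return sorted(seen)
```

For example, with `{"cairo": ["libpng", "pixman"], "libpng": ["zlib"]}`, a package that directly depends on `cairo` transitively needs `libpng`, `pixman`, and `zlib` as well; in practice Nix computes this closure itself, so the installer would only pass the direct list along.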
353,160 | 10,549,501,397 | IssuesEvent | 2019-10-03 08:53:12 | SUSE/cap-terraform | https://api.github.com/repos/SUSE/cap-terraform | opened | deploy SCF on EKS using Terraform scripts | priority: high status: accepted type: enhancement | deploy SCF on EKS automatically after spinning up an EKS cluster (which is working already) with DNS and other infra components required.
| 1.0 | deploy SCF on EKS using Terraform scripts - deploy SCF on EKS automatically after spinning up an EKS cluster (which is already working), with DNS and the other required infra components.
| priority | deploy scf on eks using terraform scripts deploy scf on eks automatically after spinning up an eks cluster which is working already with dns and other infra components required | 1 |
186,023 | 6,732,858,830 | IssuesEvent | 2017-10-18 13:05:59 | salesagility/SuiteCRM | https://api.github.com/repos/salesagility/SuiteCRM | closed | Module Builder editing module layout views dont work in SuiteCRM 7.9.4 | bug Fix Proposed High Priority Resolved: Next Release | <!--- Provide a general summary of the issue in the **Title** above -->
<!--- Before you open an issue, please check if a similar issue already exists or has been closed before. --->
#### Issue
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
In Module Builder, when clicking any module view (EditView, DetailView, etc.), nothing happens.
Browser give these errors
Failed to load resource: the server responded with a status of 404 (Not Found)
/SuiteCRM/themes/SuiteP/css/studio.css.map
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
.../index.php?to_pdf=1&sugar_body_only=1&module=ModuleBuilder&MB=true&action=editLayout&view=editview&view_module=test&view_package=test
Don't know what the problem is, but copying the old version 7.9.2 file from
SuiteCRM-7.9.2/modules/ModuleBuilder/parsers/views/AbstractMetaDataImplementation.php
and using it seems to work.
Well, this error still remains
Failed to load resource: the server responded with a status of 404 (Not Found)
/SuiteCRM/themes/SuiteP/css/studio.css.map
#### Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* SuiteCRM Version used: 7.9.4
| 1.0 | Module Builder editing module layout views dont work in SuiteCRM 7.9.4 - <!--- Provide a general summary of the issue in the **Title** above -->
<!--- Before you open an issue, please check if a similar issue already exists or has been closed before. --->
#### Issue
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
In Module Builder, when clicking any module view (EditView, DetailView, etc.), nothing happens.
Browser give these errors
Failed to load resource: the server responded with a status of 404 (Not Found)
/SuiteCRM/themes/SuiteP/css/studio.css.map
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
.../index.php?to_pdf=1&sugar_body_only=1&module=ModuleBuilder&MB=true&action=editLayout&view=editview&view_module=test&view_package=test
Don't know what the problem is, but copying the old version 7.9.2 file from
SuiteCRM-7.9.2/modules/ModuleBuilder/parsers/views/AbstractMetaDataImplementation.php
and using it seems to work.
Well, this error still remains
Failed to load resource: the server responded with a status of 404 (Not Found)
/SuiteCRM/themes/SuiteP/css/studio.css.map
#### Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* SuiteCRM Version used: 7.9.4
| priority | module builder editing module layout views dont work in suitecrm issue in module builder when click any module views editview detailview etc nothing happens browser give these errors failed to load resource the server responded with a status of not found suitecrm themes suitep css studio css map failed to load resource the server responded with a status of internal server error index php to pdf sugar body only module modulebuilder mb true action editlayout view editview view module test view package test dont know what is the problem but when copy old version file from suitecrm modules modulebuilder parsers views abstractmetadataimplementation php and use it then it seems to work well this error still remains failed to load resource the server responded with a status of not found suitecrm themes suitep css studio css map your environment suitecrm version used | 1 |
686,828 | 23,505,956,622 | IssuesEvent | 2022-08-18 12:39:25 | kubermatic/dashboard | https://api.github.com/repos/kubermatic/dashboard | closed | Not able to fetch old metering reports | kind/bug priority/high | ### What happened?
I was trying to download reports created before 2.21 migration. It is not possible from the dashboard.
### Expected behavior
Available list of reports to download or remove.
### How to reproduce the issue?
Go to `https://<dashboard_url>/settings/metering`.
### How is your environment configured?
- KKP version: 2.21
### Additional information
Old reports are stored in the root directory of the S3 bucket, and we don't download them from there while traversing schedule configurations. Old reports are available to fetch under GET `/api/v1/admin/metering/reports`.
| 1.0 | Not able to fetch old metering reports - ### What happened?
I was trying to download reports created before the 2.21 migration. It is not possible from the dashboard.
### Expected behavior
Available list of reports to download or remove.
### How to reproduce the issue?
Go to `https://<dashboard_url>/settings/metering`.
### How is your environment configured?
- KKP version: 2.21
### Additional information
Old reports are stored in the root directory of the S3 bucket, and we don't download them from there while traversing schedule configurations. Old reports are available to fetch under GET `/api/v1/admin/metering/reports`.
| priority | not able to fetch old metering reports what happened i was trying to download reports created before migration it is not possible from the dashboard expected behavior available list of reports to download or remove how to reproduce the issue go to how is your environment configured kkp version additional information old reports are stored in the root directory of bucket and we don t download them from there while traversing schedule configurations old reports are available to fetch under get api admin metering reports | 1 |
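Per the additional information above, pre-2.21 reports sit directly in the S3 bucket root, so a listing that only walks schedule prefixes misses them. A minimal sketch of the missing selection step; the bucket name and client wiring in the comment are assumptions, not KKP's actual code.

```python
# Hedged sketch: old reports live in the bucket root, so they are exactly the
# keys containing no '/' separator.

def root_level_keys(keys):
    """Return the keys stored directly in the bucket root (no prefix)."""
    return [k for k in keys if "/" not in k]

# With boto3 this could look like (untested sketch; bucket name is assumed):
#   s3 = boto3.client("s3")
#   resp = s3.list_objects_v2(Bucket="metering-bucket")
#   old_reports = root_level_keys(obj["Key"] for obj in resp.get("Contents", []))
```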
528,945 | 15,377,625,044 | IssuesEvent | 2021-03-02 17:18:16 | Conjurinc-workato-dev/ldap-sync | https://api.github.com/repos/Conjurinc-workato-dev/ldap-sync | closed | This is summary in Jira issue | ONYX-6642 Severity/High bugtype/Integration kind/bug priority/Default team/Core | ##description
Steps to reproduce: this is bug description in JIRA
Current Results: this is bug description in JIRA
Expected Results: this is bug description in JIRA
Error Messages: this is bug description in JIRA
Logs: this is bug description in JIRA
Other Symptoms: this is bug description in JIRA
Tenant ID / Pod Number: this is bug description in JIRA
##Found in version
10.5
##Workaround Complexity
NA
##Workaround Description
##Affects Version/s
##Link to JIRA bug
https://ca-il-jira-test.il.cyber-ark.com/browse/ONYX-6642
##Bug description
Steps to reproduce: this is bug description in JIRA
Current Results: this is bug description in JIRA
Expected Results: this is bug description in JIRA
Error Messages: this is bug description in JIRA
Logs: this is bug description in JIRA
Other Symptoms: this is bug description in JIRA
Tenant ID / Pod Number: this is bug description in JIRA
##description
Steps to reproduce: this is bug description in JIRA
Current Results: this is bug description in JIRA
Expected Results: this is bug description in JIRA
Error Messages: this is bug description in JIRA
Logs: this is bug description in JIRA
Other Symptoms: this is bug description in JIRA
Tenant ID / Pod Number: this is bug description in JIRA
##Found in version
10.5
##Workaround Complexity
NA
##Workaround Description
##Affects Version/s
##Link to JIRA bug
https://ca-il-jira-test.il.cyber-ark.com/browse/ONYX-6642 | 1.0 | This is summary in Jira issue - ##description
Steps to reproduce: this is bug description in JIRA
Current Results: this is bug description in JIRA
Expected Results: this is bug description in JIRA
Error Messages: this is bug description in JIRA
Logs: this is bug description in JIRA
Other Symptoms: this is bug description in JIRA
Tenant ID / Pod Number: this is bug description in JIRA
##Found in version
10.5
##Workaround Complexity
NA
##Workaround Description
##Affects Version/s
##Link to JIRA bug
https://ca-il-jira-test.il.cyber-ark.com/browse/ONYX-6642
##Bug description
Steps to reproduce: this is bug description in JIRA
Current Results: this is bug description in JIRA
Expected Results: this is bug description in JIRA
Error Messages: this is bug description in JIRA
Logs: this is bug description in JIRA
Other Symptoms: this is bug description in JIRA
Tenant ID / Pod Number: this is bug description in JIRA
##description
Steps to reproduce: this is bug description in JIRA
Current Results: this is bug description in JIRA
Expected Results: this is bug description in JIRA
Error Messages: this is bug description in JIRA
Logs: this is bug description in JIRA
Other Symptoms: this is bug description in JIRA
Tenant ID / Pod Number: this is bug description in JIRA
##Found in version
10.5
##Workaround Complexity
NA
##Workaround Description
##Affects Version/s
##Link to JIRA bug
https://ca-il-jira-test.il.cyber-ark.com/browse/ONYX-6642 | priority | this is summary in jira issue description steps to reproduce this is bug description in jira current results this is bug description in jira expected results this is bug description in jira error messages this is bug description in jira logs this is bug description in jira other symptoms this is bug description in jira tenant id pod number this is bug description in jira found in version workaround complexity na workaround description affects version s link to jira bug bug description steps to reproduce this is bug description in jira current results this is bug description in jira expected results this is bug description in jira error messages this is bug description in jira logs this is bug description in jira other symptoms this is bug description in jira tenant id pod number this is bug description in jira description steps to reproduce this is bug description in jira current results this is bug description in jira expected results this is bug description in jira error messages this is bug description in jira logs this is bug description in jira other symptoms this is bug description in jira tenant id pod number this is bug description in jira found in version workaround complexity na workaround description affects version s link to jira bug | 1 |
714,225 | 24,554,901,873 | IssuesEvent | 2022-10-12 15:09:31 | eclipse/lsp4jakarta | https://api.github.com/repos/eclipse/lsp4jakarta | closed | Remove hover code | high priority 1 | Remove code relating to "jakarta/java/hover" since we are not delivering any hover items currently: https://github.com/eclipse/lsp4jakarta/blob/c906437fb3919eb20cf8a564a1dbb0401c22496c/jakarta.ls/src/main/java/org/eclipse/lsp4jakarta/api/JakartaLanguageClientAPI.java#L35-L38 | 1.0 | Remove hover code - Remove code relating to "jakarta/java/hover" since we are not delivering any hover items currently: https://github.com/eclipse/lsp4jakarta/blob/c906437fb3919eb20cf8a564a1dbb0401c22496c/jakarta.ls/src/main/java/org/eclipse/lsp4jakarta/api/JakartaLanguageClientAPI.java#L35-L38 | priority | remove hover code remove code relating to jakarta java hover since we are not delivering any hover items currently | 1 |
658,011 | 21,875,639,363 | IssuesEvent | 2022-05-19 09:52:35 | hpi-swa-teaching/Actions | https://api.github.com/repos/hpi-swa-teaching/Actions | closed | Help window/popup | enhancement High Priority 5 | As a beginner level user I want to be able to open a help window which explains how to configure shortcuts in order to understand all functionalities and start using the tool efficiently.
Time: 5
Acceptance:
- separate window or tooltip
- max 1 click away
- explain all fundamental functions
- understandable for new users | 1.0 | Help window/popup - As a beginner level user I want to be able to open a help window which explains how to configure shortcuts in order to understand all functionalities and start using the tool efficiently.
Time: 5
Acceptance:
- separate window or tooltip
- max 1 click away
- explain all fundamental functions
- understandable for new users | priority | help window popup as a beginner level user i want to be able to open a help window which explains how to configure shortcuts in order to understand all functionalities and start using the tool efficiently time acceptance separate window or tooltip max click away explain all fundamental functions understandable for new users | 1 |
575,889 | 17,065,082,131 | IssuesEvent | 2021-07-07 06:03:36 | BadWolf1023/MKW-Table-Bot | https://api.github.com/repos/BadWolf1023/MKW-Table-Bot | closed | HIGH PRIORITY: No errors detected in room that had errors | HIGH PRIORITY bug enhancement help wanted | 
No errors were detected in this room. TableBot does not currently look for this type of error. However, TableBot **should** detect this error and notify the user that an error occurred for Race 8.
Notable observations about this room:
- All players had delta values.
- All delta values were extremely high.
- All players had the exact same race times as the previous race.
The HTML for this room has been saved and can be downloaded here:
[Room HTML.zip](https://github.com/BadWolf1023/MKW-Table-Bot/files/6757569/Room.HTML.zip)
TableBot should then implement the following 2 error codes and checks:
- Check if a given race's players and times are a **subset** of any other race in the room's players and times. If a race's players and times are a subset of another race's players and times, TableBot should notify the user that the race most likely has an error. (Think about it, even if ONE player had the same race time as a previous race, that is highly unlikely. It is practically impossible that even two players have the same exact race times on two different races in a given room.) A subset is necessary to catch races that have same times but had disconnections. There have been no reports of this happening, but it is prudent nonetheless since such a case would be an error, but would slip through a set equivalence check.
- Check if any player's deltas are outside of a certain range. Generally, valid deltas are between -1.0 and 6.0. If any player in the room has a delta outside of this range, TableBot should notify the user that that race may have an error.
Relevant file: https://github.com/BadWolf1023/MKW-Table-Bot/blob/main/ErrorChecker.py | 1.0 | HIGH PRIORITY: No errors detected in room that had errors - 
No errors were detected in this room. TableBot does not currently look for this type of error. However, TableBot **should** detect this error and notify the user that an error occurred for Race 8.
Notable observations about this room:
- All players had delta values.
- All delta values were extremely high.
- All players had the exact same race times as the previous race.
The HTML for this room has been saved and can be downloaded here:
[Room HTML.zip](https://github.com/BadWolf1023/MKW-Table-Bot/files/6757569/Room.HTML.zip)
TableBot should then implement the following 2 error codes and checks:
- Check if a given race's players and times are a **subset** of any other race in the room's players and times. If a race's players and times are a subset of another race's players and times, TableBot should notify the user that the race most likely has an error. (Think about it, even if ONE player had the same race time as a previous race, that is highly unlikely. It is practically impossible that even two players have the same exact race times on two different races in a given room.) A subset is necessary to catch races that have same times but had disconnections. There have been no reports of this happening, but it is prudent nonetheless since such a case would be an error, but would slip through a set equivalence check.
- Check if any player's deltas are outside of a certain range. Generally, valid deltas are between -1.0 and 6.0. If any player in the room has a delta outside of this range, TableBot should notify the user that that race may have an error.
Relevant file: https://github.com/BadWolf1023/MKW-Table-Bot/blob/main/ErrorChecker.py | priority | high priority no errors detected in room that had errors no errors were detected in this room tablebot does not currently look for this type of error however tablebot should detect this error and notify the user that an error occurred for race notable observations about this room all players had delta values all delta values were extremely high all players had the exact same race times as the previous race the html for this room has been saved and can be downloaded here tablebot should then implement the following error codes and checks check if a given race s players and times are a subset of any other race in the room s players and times if a race s players and times are a subset of another race s players and times tablebot should notify the user that the race most likely has an error think about it even if one player had the same race time as a previous race that is highly unlikely it is practically impossible that even two players have the same exact race times on two different races in a given room a subset is necessary to catch races that have same times but had disconnections there have been no reports of this happening but it is prudent nonetheless since such a case would be an error but would slip through a set equivalence check check if any player s deltas are outside of a certain range generally valid deltas are between and if any player in the room has a delta outside of this range tablebot should notify the user that that race may have an error relevant file | 1 |
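The two proposed error codes above can be sketched as follows; the race and delta data shapes are assumptions for illustration, not TableBot's actual structures.

```python
# Sketch of the two proposed checks. Assumed shapes: a race is a dict mapping
# player -> finish time; deltas is a list of per-player delta values.

def times_subset_of_earlier_race(race, earlier_races):
    """True if every (player, time) pair of `race` also appears in some
    earlier race. A subset (not equality) check also catches duplicated
    races where some players disconnected."""
    pairs = set(race.items())
    return any(pairs <= set(earlier.items()) for earlier in earlier_races)

def deltas_out_of_range(deltas, low=-1.0, high=6.0):
    """True if any delta falls outside the generally valid [-1.0, 6.0] range."""
    return any(not (low <= d <= high) for d in deltas)
```

Either function returning True for a race would be the trigger for notifying the user that the race likely contains an error.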
346,796 | 10,419,923,191 | IssuesEvent | 2019-09-15 20:08:16 | localstack/localstack | https://api.github.com/repos/localstack/localstack | closed | AWS SQS Lambda event source does not work because the Lambda ARN and the event source ARN are different (when using Serverless and Cloud Formation) | bug priority-high | The SQS queue ARN and the event source ARN are different. This causes the Lambda to not get called for SQS events.
## Steps to reproduce
1. Use my example project: https://github.com/ashlux/serverless-localstack-sqs-lambda-event.git
1. Using my fix to Moto so the Serverless deploy will work: https://github.com/spulec/moto/issues/2045#issuecomment-489759139. (This fixes https://github.com/localstack/localstack/issues/1288).
1. Run the example project: `npm i && serverless deploy --stage local`
1. Send a message to SQS: `awslocal sqs send-message --queue-url http://localhost:4576/queue/my-sqs-dev --message-body rawr`
## Expected result
The SQS message causes the Lambda to execute and output `SQS Event` along with the event. The SQS queue ARN should match the event source ARN from the event source mapping.
## What actually happens
The Lambda is not called after the SQS message is sent.
The SQS queue ARN does not match the event source ARN from the event source mapping so the SQS event does not cause the Lambda to be executed.
The queue ARN has the region `elasticmq` and the account `000000000000`, whereas the event source ARN has the region `us-east-1` and the account `123456789012`.
```
╰─➤ awslocal sqs get-queue-attributes --queue-url http://localhost:4576/queue/my-sqs-dev --attribute-name QueueArn
{
"Attributes": {
"QueueArn": "arn:aws:sqs:elasticmq:000000000000:my-sqs-dev"
}
}
╰─➤ awslocal lambda list-event-source-mappings
{
"EventSourceMappings": [
{
"UUID": "f31b9c96-6601-4e2e-aa59-c950a75acd24",
"BatchSize": 100,
"EventSourceArn": "arn:aws:sqs:us-east-1:123456789012:my-sqs-dev",
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:my-lambda-dev",
"LastModified": 1557254699.0,
"LastProcessingResult": "OK",
"State": "Enabled",
"StateTransitionReason": "User action"
}
]
}
```
I've tracked down the problem in two places, one in Moto and the other in LocalStack.
1. https://github.com/spulec/moto/blob/master/moto/sqs/models.py#L188
1. https://github.com/localstack/localstack/blob/b51c2c4414a1a412cf57533a2b7c1b984647d2b5/localstack/utils/aws/aws_stack.py#L353
| 1.0 | AWS SQS Lambda event source does not work because the Lambda ARN and the event source ARN are different (when using Serverless and Cloud Formation) - The SQS queue ARN and the event source ARN are different. This causes the Lambda to not get called for SQS events.
## Steps to reproduce
1. Use my example project: https://github.com/ashlux/serverless-localstack-sqs-lambda-event.git
1. Using my fix to Moto so the Serverless deploy will work: https://github.com/spulec/moto/issues/2045#issuecomment-489759139. (This fixes https://github.com/localstack/localstack/issues/1288).
1. Run the example project: `npm i && serverless deploy --stage local`
1. Send a message to SQS: `awslocal sqs send-message --queue-url http://localhost:4576/queue/my-sqs-dev --message-body rawr`
## Expected result
The SQS message causes the Lambda to execute and output `SQS Event` along with the event. The SQS queue ARN should match the event source ARN from the event source mapping.
## What actually happens
The Lambda is not called after the SQS message is sent.
The SQS queue ARN does not match the event source ARN from the event source mapping so the SQS event does not cause the Lambda to be executed.
The queue ARN has the region `elasticmq` and the account ID `000000000000`, whereas the event source ARN has the region `us-east-1` and the account ID `123456789012`.
```
╰─➤ awslocal sqs get-queue-attributes --queue-url http://localhost:4576/queue/my-sqs-dev --attribute-name QueueArn
{
"Attributes": {
"QueueArn": "arn:aws:sqs:elasticmq:000000000000:my-sqs-dev"
}
}
╰─➤ awslocal lambda list-event-source-mappings
{
"EventSourceMappings": [
{
"UUID": "f31b9c96-6601-4e2e-aa59-c950a75acd24",
"BatchSize": 100,
"EventSourceArn": "arn:aws:sqs:us-east-1:123456789012:my-sqs-dev",
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:my-lambda-dev",
"LastModified": 1557254699.0,
"LastProcessingResult": "OK",
"State": "Enabled",
"StateTransitionReason": "User action"
}
]
}
```
I've tracked down the problem in two places, one in Moto and the other in LocalStack.
1. https://github.com/spulec/moto/blob/master/moto/sqs/models.py#L188
1. https://github.com/localstack/localstack/blob/b51c2c4414a1a412cf57533a2b7c1b984647d2b5/localstack/utils/aws/aws_stack.py#L353
| priority | aws sqs lambda event source does not work because the lambda arn and the event source arn are different when using serverless and cloud formation the arn between the sqs arn and the event source arn are different this causes the lambda to not get called when for sqs events steps to reproduce use my example project using my fix to moto so the serverless deploy will work this fixes run the example project npm i serverless deploy stage local send a message to sqs awslocal sqs send message queue url message body rawr expected result the sqs message causes the lambda to execute and output sqs event along with the event the sqs queue arn should match the event source arn from the event source mapping what actually happens the lambda is not called after the sqs message is sent the sqs queue arn does not match the event source arn from the event source mapping so the sqs event does not cause the lambda to be executed the queue arn has the region elasticmq and the region whereas the event source arn has the region us east and the region ╰─➤ awslocal sqs get queue attributes queue url attribute name queuearn attributes queuearn arn aws sqs elasticmq my sqs dev ╰─➤ awslocal lambda list event source mappings eventsourcemappings uuid batchsize eventsourcearn arn aws sqs us east my sqs dev functionarn arn aws lambda us east function my lambda dev lastmodified lastprocessingresult ok state enabled statetransitionreason user action i ve tracked down the problem in two places one in moto and the other in localstack | 1 |
83,909 | 3,644,802,385 | IssuesEvent | 2016-02-15 11:39:04 | outbrain/Leonardo | https://api.github.com/repos/outbrain/Leonardo | closed | Add remove "option" | enhancement High priority | Perhaps make the options drop down a "select2" like drop down and add an x for removing the option. | 1.0 | Add remove "option" - Perhaps make the options drop down a "select2" like drop down and add an x for removing the option. | priority | add remove option perhaps make the options drop down a like drop down and add an x for removing the option | 1 |
165,355 | 6,275,025,477 | IssuesEvent | 2017-07-18 05:04:50 | Burke-Lauenroth-Lab/SOILWAT2 | https://api.github.com/repos/Burke-Lauenroth-Lab/SOILWAT2 | closed | Add support for non-RCP scenarios | enhancement high priority | Currently, there is a file carbon.in that holds the PPM for a set amount of years, grouped by RCP number. When generating the multipliers, year 0 is looked for, and then the integer in the "value" column is grabbed, which represents the RCP number. This should be changed to expect a string that represents the scenario. For instance, instead of the integer 85, the string "RCP85" should be used.
This is a high priority because this change must concurrently be implemented across rSFSW2, rSOILWAT2, and SOILWAT2. | 1.0 | Add support for non-RCP scenarios - Currently, there is a file carbon.in that holds the PPM for a set amount of years, grouped by RCP number. When generating the multipliers, year 0 is looked for, and then the integer in the "value" column is grabbed, which represents the RCP number. This should be changed to expect a string that represents the scenario. For instance, instead of the integer 85, the string "RCP85" should be used.
This is a high priority because this change must concurrently be implemented across rSFSW2, rSOILWAT2, and SOILWAT2. | priority | add support for non rcp scenarios currently there is a file carbon in that holds the ppm for a set amount of years grouped by rcp number when generating the multipliers year is looked for and then the integer in the value column is grabbed which represents the rcp number this should be changed to expect a string that represents the scenario for instance instead of the integer the string should be used this is a high priority because this change must concurrently be implemented across and | 1 |
606,892 | 18,769,958,870 | IssuesEvent | 2021-11-06 16:57:55 | Dicey-Tech/frontend-app-classroom | https://api.github.com/repos/Dicey-Tech/frontend-app-classroom | closed | Create classroom as a pop up | type:enhancement priority:high | <!--- Provide a general summary of the issue in the Title above -->
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
Users have to navigate to <MFE_HOST>/classroom/create to create a new classroom. They can do it from the manage page. The navigation is unnecessary and if the user wants to cancel the process they then have to be redirected back to the manage page.
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
The "New Classroom" button should trigger a pop up where the user can finish the classroom creation process or cancel the process which closes the pop up window. | 1.0 | Create classroom as a pop up - <!--- Provide a general summary of the issue in the Title above -->
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
Users have to navigate to <MFE_HOST>/classroom/create to create a new classroom. They can do it from the manage page. The navigation is unnecessary and if the user wants to cancel the process they then have to be redirected back to the manage page.
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
The "New Classroom" button should trigger a pop up where the user can finish the classroom creation process or cancel the process which closes the pop up window. | priority | create classroom as a pop up current behavior users have to navigate to classroom create to create a new classroom they can do it from the manage page the navigation is unnecessary and if the user wants to cancel the process they then have to be redirected back to the manage page possible solution the new classroom button should trigger a pop up where the user can finish the classroom creation process or cancel the process which closes the pop up window | 1 |
354,658 | 10,571,004,324 | IssuesEvent | 2019-10-07 05:29:51 | HackGT/bolt | https://api.github.com/repos/HackGT/bolt | opened | Show user's Slack username on hardware cards | component / hardware desk good first issue priority / high type / enhancement | In the absence of automated notifications, show a user's Slack username in hardware desk so it's easy to contact them | 1.0 | Show user's Slack username on hardware cards - In the absence of automated notifications, show a user's Slack username in hardware desk so it's easy to contact them | priority | show user s slack username on hardware cards in the absence of automated notifications show a user s slack username in hardware desk so it s easy to contact them | 1 |
316,092 | 9,636,289,820 | IssuesEvent | 2019-05-16 05:18:53 | medic/medic | https://api.github.com/repos/medic/medic | opened | The accept_patient_reports throws an error | Priority: 1 - High Sentinel Type: Bug | **Describe the bug**
When searching for a `validRegistration`, `accept_patient_reports` assumes that if the registration has a `scheduled_tasks` property, it must also be an array with at least one element.
If the array is empty, accessing the 1st element of the array throws an error:
https://github.com/medic/medic/blob/master/shared-libs/transitions/src/transitions/accept_patient_reports.js#L134
```
if (registration.scheduled_tasks) {
var scheduledTasks = _.sortBy(registration.scheduled_tasks, 'due');
if (visitReportedDate < moment(scheduledTasks[0].due)) {
......
```
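A minimal guard for the empty-array case, sketched here in Python purely for illustration (the project code above is JavaScript), would check the list length before indexing:

```python
def earliest_due(registration):
    """Return the earliest scheduled-task due date, or None if none exist."""
    tasks = registration.get("scheduled_tasks") or []
    if not tasks:  # empty list (or missing key): nothing to compare against
        return None
    return min(tasks, key=lambda t: t["due"])["due"]

# The empty-array registration that currently crashes the transition:
print(earliest_due({"scheduled_tasks": []}))  # None
print(earliest_due({"scheduled_tasks": [{"due": 7}, {"due": 3}]}))  # 3
```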
**To Reproduce**
Steps to reproduce the behavior:
1. Install 3.4 or 3.5
2. Configure accept_patient_reports to run on your desired form (`THE_FORM`)
3. Add a registration for a patient that has a `scheduled_tasks` equal to an empty array
4. Submit `THE_FORM` for the same patient
**Expected behavior**
`accept_patient_reports` should run successfully on your `THE_FORM` report
**What actually happens**
`accept_patient_reports` does not run successfully on your report and depending on your configuration, an error might be added to the doc:

**Logs**
```
2019-05-14 14:20:59 ERROR: transition accept_patient_reports errored on doc 7412174f-407f-4aec-8fa1-6a246f2b437a seq 142909-g1AAAAJ7eJzLYWBg4MhgTmEQTM4vTc5ISXIwNDLXMwBCwxygFFMiQ5L8____szKYkxgYXLbkAsXYk8yTU41TTLDpwWNSkgKQTLKHG-YaCDYs2djY1Cw1jVTDHECGxSNcFgI2zDTVKDXZ0JJUwxJAhtXDDfN8BTbM0MTY3MDImETD8liAJEMDkAKaNx9ioDs3xKsWQM-S7DqIgQsgBu6HevcL2MDEFANjExPyXHgAYuB9aGRMARtokGpqnkRyzEIMfAAxEBYh6ZAwtDSwTDU3x6Y1CwAGVKTE: {"message":"Cannot read property 'due' of undefined","stack":"TypeError: Cannot read property 'due' of undefined\n at findValidRegistration (/home/diana/projects/medic-340/sentinel/src/transitions/accept_patient_reports.js:141:58)\n at addReportUUIDToRegistration (/home/diana/projects/medic-340/sentinel/src/transitions/accept_patient_reports.js:183:55)\n at module.exports.silenceRegistrations.err (/home/diana/projects/medic-340/sentinel/src/transitions/accept_patient_reports.js:250:9)\n at /home/diana/projects/medic-340/sentinel/src/transitions/accept_patient_reports.js:205:7\n at /home/diana/projects/medic-340/sentinel/node_modules/async/dist/async.js:473:16\n at iteratorCallback (/home/diana/projects/medic-340/sentinel/node_modules/async/dist/async.js:1064:13)\n at /home/diana/projects/medic-340/sentinel/node_modules/async/dist/async.js:969:16\n at Object._silenceReminders (/home/diana/projects/medic-340/sentinel/src/transitions/accept_patient_reports.js:99:12)\n at /home/diana/projects/medic-340/sentinel/src/transitions/accept_patient_reports.js:202:22\n at /home/diana/projects/medic-340/sentinel/node_modules/async/dist/async.js:3110:16"}
```
**Environment**
- Instance: Live 3.4 instance (https://github.com/medic/medic-projects/issues/6298), localhost
- Client platform: Linux
- App: Sentinel
- Version: 3.4, 3.5
| 1.0 | The accept_patient_reports throws an error - **Describe the bug**
When searching for a `validRegistration`, `accept_patient_reports` assumes that if the registration has a `scheduled_tasks` property, it must also be an array with at least one element.
If the array is empty, accessing the 1st element of the array throws an error:
https://github.com/medic/medic/blob/master/shared-libs/transitions/src/transitions/accept_patient_reports.js#L134
```
if (registration.scheduled_tasks) {
var scheduledTasks = _.sortBy(registration.scheduled_tasks, 'due');
if (visitReportedDate < moment(scheduledTasks[0].due)) {
......
```
**To Reproduce**
Steps to reproduce the behavior:
1. Install 3.4 or 3.5
2. Configure accept_patient_reports to run on your desired form (`THE_FORM`)
3. Add a registration for a patient that has a `scheduled_tasks` equal to an empty array
4. Submit `THE_FORM` for the same patient
**Expected behavior**
`accept_patient_reports` should run successfully on your `THE_FORM` report
**What actually happens**
`accept_patient_reports` does not run successfully on your report and depending on your configuration, an error might be added to the doc:

**Logs**
```
2019-05-14 14:20:59 ERROR: transition accept_patient_reports errored on doc 7412174f-407f-4aec-8fa1-6a246f2b437a seq 142909-g1AAAAJ7eJzLYWBg4MhgTmEQTM4vTc5ISXIwNDLXMwBCwxygFFMiQ5L8____szKYkxgYXLbkAsXYk8yTU41TTLDpwWNSkgKQTLKHG-YaCDYs2djY1Cw1jVTDHECGxSNcFgI2zDTVKDXZ0JJUwxJAhtXDDfN8BTbM0MTY3MDImETD8liAJEMDkAKaNx9ioDs3xKsWQM-S7DqIgQsgBu6HevcL2MDEFANjExPyXHgAYuB9aGRMARtokGpqnkRyzEIMfAAxEBYh6ZAwtDSwTDU3x6Y1CwAGVKTE: {"message":"Cannot read property 'due' of undefined","stack":"TypeError: Cannot read property 'due' of undefined\n at findValidRegistration (/home/diana/projects/medic-340/sentinel/src/transitions/accept_patient_reports.js:141:58)\n at addReportUUIDToRegistration (/home/diana/projects/medic-340/sentinel/src/transitions/accept_patient_reports.js:183:55)\n at module.exports.silenceRegistrations.err (/home/diana/projects/medic-340/sentinel/src/transitions/accept_patient_reports.js:250:9)\n at /home/diana/projects/medic-340/sentinel/src/transitions/accept_patient_reports.js:205:7\n at /home/diana/projects/medic-340/sentinel/node_modules/async/dist/async.js:473:16\n at iteratorCallback (/home/diana/projects/medic-340/sentinel/node_modules/async/dist/async.js:1064:13)\n at /home/diana/projects/medic-340/sentinel/node_modules/async/dist/async.js:969:16\n at Object._silenceReminders (/home/diana/projects/medic-340/sentinel/src/transitions/accept_patient_reports.js:99:12)\n at /home/diana/projects/medic-340/sentinel/src/transitions/accept_patient_reports.js:202:22\n at /home/diana/projects/medic-340/sentinel/node_modules/async/dist/async.js:3110:16"}
```
**Environment**
- Instance: Live 3.4 instance (https://github.com/medic/medic-projects/issues/6298), localhost
- Client platform: Linux
- App: Sentinel
- Version: 3.4, 3.5
| priority | the accept patient reports throws an error describe the bug when searching for a validregistration accept patient reports assumes that if the registration has a scheduled tasks property it must also be an array with at least one element if the array is empty accessing the element of the array throws an error if registration scheduled tasks var scheduledtasks sortby registration scheduled tasks due if visitreporteddate moment scheduledtasks due to reproduce steps to reproduce the behavior install or configure accept patient reports to run on your desired form the form add a registration for a patient that has a scheduled tasks equal to an empty array submit the form for the same patient expected behavior accept patient reports should run successfully on your the form report what actually happens accept patient reports does not run successfully on your report and depending on your configuration an error might be added to the doc logs error transition accept patient reports errored on doc seq message cannot read property due of undefined stack typeerror cannot read property due of undefined n at findvalidregistration home diana projects medic sentinel src transitions accept patient reports js n at addreportuuidtoregistration home diana projects medic sentinel src transitions accept patient reports js n at module exports silenceregistrations err home diana projects medic sentinel src transitions accept patient reports js n at home diana projects medic sentinel src transitions accept patient reports js n at home diana projects medic sentinel node modules async dist async js n at iteratorcallback home diana projects medic sentinel node modules async dist async js n at home diana projects medic sentinel node modules async dist async js n at object silencereminders home diana projects medic sentinel src transitions accept patient reports js n at home diana projects medic sentinel src transitions accept patient reports js n at home diana projects medic sentinel 
node modules async dist async js environment instance live instance localhost client platform linux app sentinel version | 1 |
301,471 | 9,220,776,192 | IssuesEvent | 2019-03-11 18:18:05 | E3SM-Project/ParallelIO | https://api.github.com/repos/E3SM-Project/ParallelIO | opened | Change ADIOS conversion tool name | High Priority | Changing the name of the ADIOS conversion tool from "adios2pio-nm" to "adios2pio-nm.exe" (add exe as suffix) would help CIME scripts to detect/process the conversion tool easily. The convention in E3SM/CIME is to name executables with an "exe" suffix (e.g. e3sm.exe). | 1.0 | Change ADIOS conversion tool name - Changing the name of the ADIOS conversion tool from "adios2pio-nm" to "adios2pio-nm.exe" (add exe as suffix) would help CIME scripts to detect/process the conversion tool easily. The convention in E3SM/CIME is to name executables with an "exe" suffix (e.g. e3sm.exe). | priority | change adios conversion tool name changing the name of the adios conversion tool from nm to nm exe add exe as suffix would help cime scripts to detect process the conversion tool easily the convention in cime is to name executables with an exe suffix e g exe | 1
153,454 | 5,892,567,584 | IssuesEvent | 2017-05-17 19:47:45 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | Harmonize advanced plotting options | Component: GUI Priority: High | ### Current behavior
We have a few plotting options that are "advanced"
1. waterfall plots - option on plot spectrum dialog
2. tiled plots - option on plot spectrum dialog
3. Plot Surface from group - option on workspace group context menu if 3 or more members
4. Plot Contour from group - option on workspace group context menu if 3 or more members

As you can see the approaches are a bit varied as to how you get to them (and the last two can be hard to find), but there is a lot of similarity in the dialog box they create.


### Actual behavior
We want to harmonize all of this without disrupting the well known and used plot Spectrum (+- errors) usage.
We want to replace the context menu options with
1. Plot Spectrum ...
1. Plot Spectrum with errors ...
1. Plot Special ...
1. Color Fill plot (leave as before)
The two plot spectrum options had special behaviour if the workspace contained only one spectrum in that it would not show the dialog box and just plot immediately. We want to retain that.
We want to harmonize the dialog box that the other options use, and plot special should be able to be used for waterfall and tiled plots as well as the surfaces and contours, and there is no reason it cannot also be used for normal plots. Here is a suggested mockup.

The same dialog could be used for Plot Spectrum, just with the group box hidden and the form shrunk appropriately.
You might want to have a second level of context menus under plot special, they should just fill the plot type out in the dialog box for you.
All of the underlying plotting for this should already be in place. The only new aspect is allowing waterfall plots to be labelled using a log value or custom labels. The default label approach of [workspace]-sp[spectrum no] should be maintained as an option.
### Platforms affected
All | 1.0 | Harmonize advanced plotting options - ### Current behavior
We have a few plotting options that are "advanced"
1. waterfall plots - option on plot spectrum dialog
2. tiled plots - option on plot spectrum dialog
3. Plot Surface from group - option on workspace group context menu if 3 or more members
4. Plot Contour from group - option on workspace group context menu if 3 or more members

As you can see the approaches are a bit varied as to how you get to them (and the last two can be hard to find), but there is a lot of similarity in the dialog box they create.


### Desired behavior
We want to harmonize all of this without disrupting the well known and used plot Spectrum (+- errors) usage.
We want to replace the context menu options with
1. Plot Spectrum ...
1. Plot Spectrum with errors ...
1. Plot Special ...
1. Color Fill plot (leave as before)
The two plot spectrum options had special behaviour if the workspace contained only one spectrum in that it would not show the dialog box and just plot immediately. We want to retain that.
We want to harmonize the dialog box that the other options use, and plot special should be able to be used for waterfall and tiled plots as well as the surfaces and contours, and there is no reason it cannot also be used for normal plots. Here is a suggested mockup.

The same dialog could be used for Plot Spectrum, just with the group box hidden and the form shrunk appropriately.
You might want to have a second level of context menus under plot special, they should just fill the plot type out in the dialog box for you.
All of the underlying plotting for this should already be in place. The only new aspect is allowing waterfall plots to be labelled using a log value or custom labels. The default label approach of [workspace]-sp[spectrum no] should be maintained as an option.
### Platforms affected
All | priority | harmonize advanced plotting options current behavior we have a few plotting options that are advanced waterfall plots option on plot spectrum dialog tiled plots option on plot spectrum dialog plot surface from group option on workspace group context menu if or more members plot contour from group option on workspace group context menu if or more members as you can see the approaches are a bit varied as to how you get to them and the last two can be hard to find but there is a lot of similarity in the dialog box they create actual behavior we want to harmonize all of this without disrupting the well known and used plot spectrum errors usage we want to replace the context menu options with plot spectrum plot spectrum with errors plot special color fill plot leave as before the two plot spectrum options had special behaviour if the workspace contained only one spectrum in that it would not show the dialog box and just plot immediately we want to retain that we want to harmonize the dialog box that the other options use and plot special should be able to be used for waterfall and tiled plots as well as the surfaces and contours and there is no reason it cannot also be used for normal plots here is a suggested mockup the same dialog could be used for plot spectrum just with the group box hidden and the form shrunk appropriately you might want to have a second level of context menus under plot special they should just fill the plot type out in the dialog box for you all of the underlying plotting for this should already be in place the only new aspect is allowing waterfall plots to be labelled using a log value or custom labels the default label approach or sp should be maintained as an option platforms affected all | 1 |
266,626 | 8,372,836,243 | IssuesEvent | 2018-10-05 08:28:50 | GreenDelta/Sophena | https://api.github.com/repos/GreenDelta/Sophena | closed | 51: Economic efficiency: connection costs in the project cost settings are not taken into account | bug high priority hours_4 | However, when calculating the economic efficiency, they would have to be deducted from the underlying investment costs before the capital-bound costs (row 2, with and without subsidy) are computed. The displayed investment costs remain unchanged.
These are given as a sum over all consumers. | 1.0 | 51: Economic efficiency: connection costs in the project cost settings are not taken into account - However, when calculating the economic efficiency, they would have to be deducted from the underlying investment costs before the capital-bound costs (row 2, with and without subsidy) are computed. The displayed investment costs remain unchanged.
These are given as a sum over all consumers. | priority | economic efficiency connection costs in the project cost settings are not taken into account however when calculating the economic efficiency they would have to be deducted from the underlying investment costs before the capital bound costs row with and without subsidy are computed the displayed investment costs remain unchanged these are given as a sum over all consumers | 1
559,265 | 16,554,126,041 | IssuesEvent | 2021-05-28 12:04:22 | sopra-fs21-group-10/td-server | https://api.github.com/repos/sopra-fs21-group-10/td-server | closed | Add a wider array of minions and towers to the game | high priority task | - for every minion and tower in the front-end there should be a representation with the cost in the backend
Estimated time: 1h | 1.0 | Add a wider array of minions and towers to the game - - for every minion and tower in the front-end there should be a representation with the cost in the backend
Estimated time: 1h | priority | add a wider array of minions and towers to the game for every minion and tower in the front end there should be a representation with the cost in the backend estimated time | 1 |
594,573 | 18,048,495,884 | IssuesEvent | 2021-09-19 10:14:04 | ita-social-projects/TeachUA | https://api.github.com/repos/ita-social-projects/TeachUA | closed | [API. Forgot password] No security on back-end for restoring password via Postman | bug Backend Priority: High | **Environment:** Windows 10, Google Chrome Version 92.0.4515.107 (Official Build) (64-bit)
**Reproducible:** always
**Build found:** last commit
**Preconditions**
1. Run Postman
2. Have an already registered user
**Steps to reproduce**
1. Select the method POST in Postman
2. Fill in URL field https://speak-ukrainian.org.ua/dev/api/resetpassword
3. Click on the Body tab and select Raw-> JSON format
4. Enter in Body field data
{"email":"<youremail@yourdomain.com>"}
**Actual result**
Status: 200. User receives verification code in Response Body and is able to restore password with this code on /api/verifyreset

**Expected result**
Status: 200. User does not receive verification code in Response Body
**User story link**
User story #577
[API. Forgot password] No security on back-end for restoring password via Postman - **Environment:** Windows 10, Google Chrome Version 92.0.4515.107 (Official Build) (64-bit)
**Reproducible:** always
**Build found:** last commit
**Preconditions**
1. Run Postman
2. Have an already registered user
**Steps to reproduce**
1. Select the method POST in Postman
2. Fill in URL field https://speak-ukrainian.org.ua/dev/api/resetpassword
3. Click on the Body tab and select Raw-> JSON format
4. Enter in Body field data
{"email":"<youremail@yourdomain.com>"}
**Actual result**
Status: 200. User receives verification code in Response Body and is able to restore password with this code on /api/verifyreset

**Expected result**
Status: 200. User does not receive verification code in Response Body
**User story link**
User story #577
| priority | no security on back end for restoring password via postman environment windows google chrome version official build bit reproducible always build found last commit preconditions run postman have an already registered user steps to reproduce select the method post in postman fill in url field click on the body tab and select raw json format enter in body field data email actual result status user receives verification code in response body and is able to restore password with this code on api verifyreset expected result status user does not receive verification code in response body user story link user story | 1 |
288,117 | 8,826,234,481 | IssuesEvent | 2019-01-03 00:40:03 | Railcraft/Railcraft | https://api.github.com/repos/Railcraft/Railcraft | closed | Issue with first Blast Furnace | bug high priority machines and multiblocks | Using Railcraft 12.0.0 beta-1 for 1.12.2
(Also using Forestry along with Railcraft, Forge 14.23.5.2796)
After putting 1 iron ingot into the blast furnace, and after it makes steel, the progress arrow remains white, and breaking/replacing a block in the multiblock doesn't fix it immediately
After breaking/replacing, the multiblock will not form until repeating the same process a few times
**Expected behavior**
The progress bar should reset
| 1.0 | Issue with first Blast Furnace - Using Railcraft 12.0.0 beta-1 for 1.12.2
(Also using Forestry along with Railcraft, Forge 14.23.5.2796)
After putting 1 iron ingot into the blast furnace, and after it makes steel, the progress arrow remains white, and breaking/replacing a block in the multiblock doesn't fix it immediately
After breaking/replacing, the multiblock will not form until repeating the same process a few times
**Expected behavior**
The progress bar should reset
| priority | issue with first blast furnace using railcraft beta for also using forestry along with railcraft forge after putting iron ingot into the blast furnace and after it makes steel the progress arrow remains white and breaking replacing a block in the multiblock doesn t fix it immediately after breaking replacing the multiblock will not form until repeating the same process a few times expected behavior the progress bar should reset | 1 |
533,178 | 15,585,620,023 | IssuesEvent | 2021-03-18 00:03:58 | codebar/planner | https://api.github.com/repos/codebar/planner | closed | Use Chapter `active` scope to ensure that only active chapters and subscriptions are rendered | Stale high-priority | ## Description of the issue 📄
Places to check and test
- [ ] /subscriptions
- [ ] /admin/members/[id]
- [ ] /profile
If you think of any other locations please add a comment to this issue
## Steps to fix 🛠
Using existing Chapter `active` scope
## To do 📋
* [ ] Claim this issue (comment below, or assign yourself if you are part of the codebar org)
* [ ] Fork and clone the repository
* [ ] Update the relevant files. Follow the steps to fix section in this issue.
* [ ] Commit your changes as one commit. Use the title of this issue as your commit message
* [ ] Submit a pull request
* [ ] Mention this issue in the PR description by including its number
* [ ] Have your pull request reviewed & merged by a codebar team member
Use Chapter `active` scope to ensure that only active chapters and subscriptions are rendered - ## Description of the issue 📄
Places to check and test
- [ ] /subscriptions
- [ ] /admin/members/[id]
- [ ] /profile
If you think of any other locations please add a comment to this issue
## Steps to fix 🛠
Using existing Chapter `active` scope
## To do 📋
* [ ] Claim this issue (comment below, or assign yourself if you are part of the codebar org)
* [ ] Fork and clone the repository
* [ ] Update the relevant files. Follow the steps to fix section in this issue.
* [ ] Commit your changes as one commit. Use the title of this issue as your commit message
* [ ] Submit a pull request
* [ ] Mention this issue in the PR description by including its number
* [ ] Have your pull request reviewed & merged by a codebar team member
| priority | user chapter active scope to ensure that only active chapters and subscriptions are rendered description of the issue 📄 places to check and test subscriptions admin members profile if you think of any other locations please add a comment to this issue steps to fix 🛠 using existing chapter active scope to do 📋 claim this issue comment below or assign yourself if you are part of the codebar org fork and clone the repository update the relevant files follow the steps to fix section in this issue commit your changes as one commit use the title of this issue as your commit message submit a pull request mention this issue in the pr description by including it s number have your pull request reviewed merged by a codebar team member | 1 |
568,199 | 16,962,251,092 | IssuesEvent | 2021-06-29 06:26:50 | InstituteforDiseaseModeling/covasim | https://api.github.com/repos/InstituteforDiseaseModeling/covasim | closed | Add the possibility to explicitly set the number of vaccinations | enhancement highpriority | **Is your feature request related to a problem? Please describe.**
Most countries make available data on the number of vaccinations carried out. Currently, it's hard to calibrate Covasim using those data, because the `vaccinate()` intervention works based on the `prob` parameter.
**Describe the solution you'd like**
It would be very useful to add the ability to explicitly set the number of vaccinations per day, similarly to the `test_num()` intervention. In addition, it would be important to retain the ability to target a specific subpopulation.
**Describe alternatives you've considered**
Use the `subtarget` parameter to explicitly target people, but it's hard to make it work if dynamic rescaling is on.
**Additional context**
An example of vaccinations data available [https://github.com/italia/covid19-opendata-vaccini](https://github.com/italia/covid19-opendata-vaccini).
| 1.0 | Add the possibility to explicitly set the number of vaccinations - **Is your feature request related to a problem? Please describe.**
Most countries make available data on the number of vaccinations carried out. Currently, it's hard to calibrate Covasim using those data, because the `vaccinate()` intervention works based on the `prob` parameter.
**Describe the solution you'd like**
It would be very useful to add the ability to explicitly set the number of vaccinations per day, similarly to the `test_num()` intervention. In addition, it would be important to retain the ability to target a specific subpopulation.
**Describe alternatives you've considered**
Use the `subtarget` parameter to explicitly target people, but it's hard to make it work if dynamic rescaling is on.
**Additional context**
An example of vaccinations data available [https://github.com/italia/covid19-opendata-vaccini](https://github.com/italia/covid19-opendata-vaccini).
| priority | add the possibility to explicitly set the number of vaccinations is your feature request related to a problem please describe most countries make available data on the number of vaccinations carried out currently it s hard to calibrate covasim using those data because the vaccinate intervention works based on the prob parameter describe the solution you d like it would be very useful to add the ability to explicitly set the number of vaccinations per day similarly to the test num intervention in addition it would be important to retain the ability to target a specific subpopulation describe alternatives you ve considered use the subtarget parameter to explicitly target people but it s hard to make it work if dynamic rescaling is on additional context an example of vaccinations data available | 1 |
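The count-based scheme this row asks for — cap the daily number of vaccinations instead of using a per-person probability, while still restricting to a subpopulation — can be sketched independently of Covasim. The function name `vaccinate_num`, the boolean `subtarget` mask, and the fixed RNG seed below are illustrative assumptions, not the Covasim API:

```python
import random

def vaccinate_num(vaccinated, n_doses, subtarget=None, rng=None):
    # Return the indices of people to vaccinate today, capped at n_doses.
    # vaccinated: list[bool] flags; subtarget: optional list[bool] that
    # restricts eligibility to a subpopulation (like Covasim's subtarget idea).
    rng = rng or random.Random(0)
    eligible = [i for i, done in enumerate(vaccinated)
                if not done and (subtarget is None or subtarget[i])]
    if len(eligible) <= n_doses:
        return eligible                 # fewer candidates than doses: take all
    return rng.sample(eligible, n_doses)  # otherwise sample without replacement

# Usage: 10 people, only indices >= 5 targeted, 3 doses per day.
vaccinated = [False] * 10
subtarget = [i >= 5 for i in range(10)]
chosen = vaccinate_num(vaccinated, 3, subtarget)
for i in chosen:
    vaccinated[i] = True
```

A real intervention would additionally read `n_doses` from a per-day data series, which is exactly what makes this analogous to `test_num()`.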
316,739 | 9,654,323,772 | IssuesEvent | 2019-05-19 13:13:32 | skylines-project/skylines | https://api.github.com/repos/skylines-project/skylines | opened | Password recovery email is not being sent | api bug high-priority | for some reason the emails are not being received... 🤔 | 1.0 | Password recovery email is not being sent - for some reason the emails are not being received... 🤔 | priority | password recovery email is not being sent for some reason the emails are not being received 🤔 | 1 |
662,232 | 22,105,221,871 | IssuesEvent | 2022-06-01 16:28:52 | oncokb/oncokb | https://api.github.com/repos/oncokb/oncokb | opened | Support `mut` in point mutation format | high priority | We currently support `mut` for ranges, for mutations on the single position, we should have consequence matched as `any` | 1.0 | Support `mut` in point mutation format - We currently support `mut` for ranges, for mutations on the single position, we should have consequence matched as `any` | priority | support mut in point mutation format we currently support mut for ranges for mutations on the single position we should have consequence matched as any | 1 |
315,465 | 9,621,007,042 | IssuesEvent | 2019-05-14 09:37:55 | sahana/SAMBRO | https://api.github.com/repos/sahana/SAMBRO | closed | Multiple Info Blocks not allowed for same language | High Priority bug | **1. Multiple info blocks with same language codes (e.g. en-US)**
System does not check and warn for duplicate language set in two info blocks belonging to the same alert. It is possible to carry two info blocks (segments) with the same language if the event information refers to different parameters (e.g. a cyclone event carrying an additional info block indicating swell wave or storm surge related event information).
Currently if the user has two info blocks with the same language codes then the information displayed in the email and profile takes only the second info block's values; i.e. the select statement takes either the first or the last info block's data. Instead it should analyze the difference in the content to present all information. | 1.0 | Multiple Info Blocks not allowed for same language - **1. Multiple info blocks with same language codes (e.g. en-US)**
System does not check and warn for duplicate language set in two info blocks belonging to the same alert. It is possible to carry two info blocks (segments) with the same language if the event information refers to different parameters (e.g. a cyclone event carrying an additional info block indicating swell wave or storm surge related event information).
Currently if the user has two info blocks with the same language codes then the information displayed in the email and profile takes only the second info block's values; i.e. the select statement takes either the first or the last info block's data. Instead it should analyze the difference in the content to present all information. | priority | multiple info blocks not allowed for same language multiple info blocks with same language codes e g en us system does not check and warn for duplicate language set in two info blocks belonging to the same alert it is possible to carry two info blocks segments with the same language if the event information refers to different parameters e g a cyclone event carrying an additional info block indicating swell wave or storm surge related event information currently if the user has two info blocks with the same language codes then the information displayed in the email and profile takes only the second info block values i e select statement takes either the first or the last info blocks data instead it should analyze the difference in the content to present all information | 1 |
50,072 | 3,006,169,485 | IssuesEvent | 2015-07-27 08:37:17 | Itseez/opencv | https://api.github.com/repos/Itseez/opencv | opened | add scrollbars to highgui windows | auto-transferred category: highgui-gui feature priority: low | Transferred from http://code.opencv.org/issues/1229
```
|| Vadim Pisarevsky on 2011-07-17 10:23
|| Priority: Low
|| Affected: None
|| Category: highgui-gui
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
add scrollbars to highgui windows
-----------
```
in the case of large-resolution images the highui windows displaying them are getting big and do not fit the screen. adding scrollbars to solve the problem would be a useful addition.
the original request is here: #1225
```
History
-------
##### Alexander Shishkov on 2012-03-21 20:25
```
- Priority changed from Normal to Low
```
##### Andrey Kamaev on 2012-04-10 11:17
```
- Description changed from in the case of large-resolution images the
highui windows displaying them are... to in the case of
large-resolution images the highui windows displaying them are...
More
```
##### Xing Chen on 2012-07-06 00:40
```
I've been suffering from the problem for a long time. A scrollbar is really very very important for inspecting large images.
Currently, when I'm looking at a large image, I have to re-scale the image, which means that some details will be changed. If I don't resize the image, I can't even see some part of the image.
I really believe this should be a basic feature for an image viewer for scientific purposes. If somebody would raise it's priority and work on it, I will be very grateful. Thanks.
```
##### Xing Chen on 2012-07-06 03:59
```
Alternatively, if introducing a scrollbar is too hard, maybe we can add an option to show the status bar on the top-left of the window, so that one can at least know the basic information (position and RGB value) of the visible pixels.
```
##### Andrey Kamaev on 2012-08-16 15:36
```
- Category changed from highgui-images to highgui-gui
``` | 1.0 | add scrollbars to highgui windows - Transferred from http://code.opencv.org/issues/1229
```
|| Vadim Pisarevsky on 2011-07-17 10:23
|| Priority: Low
|| Affected: None
|| Category: highgui-gui
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
add scrollbars to highgui windows
-----------
```
in the case of large-resolution images the highui windows displaying them are getting big and do not fit the screen. adding scrollbars to solve the problem would be a useful addition.
the original request is here: #1225
```
History
-------
##### Alexander Shishkov on 2012-03-21 20:25
```
- Priority changed from Normal to Low
```
##### Andrey Kamaev on 2012-04-10 11:17
```
- Description changed from in the case of large-resolution images the
highui windows displaying them are... to in the case of
large-resolution images the highui windows displaying them are...
More
```
##### Xing Chen on 2012-07-06 00:40
```
I've been suffering from the problem for a long time. A scrollbar is really very very important for inspecting large images.
Currently, when I'm looking at a large image, I have to re-scale the image, which means that some details will be changed. If I don't resize the image, I can't even see some part of the image.
I really believe this should be a basic feature for an image viewer for scientific purposes. If somebody would raise it's priority and work on it, I will be very grateful. Thanks.
```
##### Xing Chen on 2012-07-06 03:59
```
Alternatively, if introducing a scrollbar is too hard, maybe we can add an option to show the status bar on the top-left of the window, so that one can at least know the basic information (position and RGB value) of the visible pixels.
```
##### Andrey Kamaev on 2012-08-16 15:36
```
- Category changed from highgui-images to highgui-gui
``` | priority | add scrollbars to highgui windows transferred from vadim pisarevsky on priority low affected none category highgui gui tracker feature difficulty none pr none platform none none add scrollbars to highgui windows in the case of large resolution images the highui windows displaying them are getting big and do not fit the screen adding scrollbars to solve the problem would be a useful addition the original request is here history alexander shishkov on priority changed from normal to low andrey kamaev on description changed from in the case of large resolution images the highui windows displaying them are to in the case of large resolution images the highui windows displaying them are more xing chen on i ve been suffering from the problem for a long time a scrollbar is really very very important for inspecting large images currently when i m looking at a large image i have to re scale the image which means that some details will be changed if i don t resize the image i can t even see some part of the image i really believe this should be a basic feature for an image viewer for scientific purposes if somebody would raise it s priority and work on it i will be very grateful thanks xing chen on alternatively if introducing a scrollbar is too hard maybe we can add an option to show the status bar on the top left of the window so that one can at least know the basic information position and rgb value of the visible pixels andrey kamaev on category changed from highgui images to highgui gui | 1 |
716,077 | 24,620,401,459 | IssuesEvent | 2022-10-15 21:15:40 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Corpses have read-able thoughts. (allowing dead to communicate with living) | In Game Exploit FUCK Priority: High Integrated Circuits | Reporting client version: 514.1582
<!-- Write **BELOW** The Headers and **ABOVE** The comments else it may not be viewable -->
## Round ID: [192132](https://scrubby.melonmesa.com/round/192132)
<!--- **INCLUDE THE ROUND ID**
If you discovered this issue from playing tgstation hosted servers:
[Round ID]: # (It can be found in the Status panel or retrieved from https://sb.atlantaned.space/rounds ! The round id let's us look up valuable information and logs for the round the bug happened.)-->
## Reproduction:
Set up a thought reader bci circuit that can retrigger itself while the host is dead.
It accepts input from dead players (the only real limitation I think is you need to return to your body before you submit the text)
https://www.youtube.com/watch?v=cKi5FR0OPfU
<!-- **For Admins:** Oddities induced by var-edits and other admin tools are not necessarily bugs. Verify that your issues occur under regular circumstances before reporting them. -->
| 1.0 | Corpses have read-able thoughts. (allowing dead to communicate with living) - Reporting client version: 514.1582
<!-- Write **BELOW** The Headers and **ABOVE** The comments else it may not be viewable -->
## Round ID: [192132](https://scrubby.melonmesa.com/round/192132)
<!--- **INCLUDE THE ROUND ID**
If you discovered this issue from playing tgstation hosted servers:
[Round ID]: # (It can be found in the Status panel or retrieved from https://sb.atlantaned.space/rounds ! The round id let's us look up valuable information and logs for the round the bug happened.)-->
## Reproduction:
Set up a thought reader bci circuit that can retrigger itself while the host is dead.
It accepts input from dead players (the only real limitation I think is you need to return to your body before you submit the text)
https://www.youtube.com/watch?v=cKi5FR0OPfU
<!-- **For Admins:** Oddities induced by var-edits and other admin tools are not necessarily bugs. Verify that your issues occur under regular circumstances before reporting them. -->
| priority | corpses have read able thoughts allowing dead to communicate with living reporting client version round id include the round id if you discovered this issue from playing tgstation hosted servers it can be found in the status panel or retrieved from the round id let s us look up valuable information and logs for the round the bug happened reproduction set up a thought reader bci circuit that can retrigger itself while the host is dead it accepts input from dead players the only real limitation i think is you need to return to your body before you submit the text | 1 |
537,797 | 15,737,342,542 | IssuesEvent | 2021-03-30 02:42:37 | CLIxIndia-Dev/CLIxDashboard | https://api.github.com/repos/CLIxIndia-Dev/CLIxDashboard | opened | Submit Button Design Change | bug frontend high priority low severity | **Change Case in Title**
As a Test Engineer, I want to see the submit button design changed so that it changes the look and feel of the website.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'https://staging-clixdashboard.tiss.edu/home'
2. Scroll to Submit Button
3. See error
**Expected behavior**
Button Shadow should look cleaner/lighter. Feel free to explore other designs too
**Screenshots**

**Desktop (please complete the following information):**
- OS: [Windows]
- Browser [chrome]
- Version [89.0]
| 1.0 | Submit Button Design Change - **Change Case in Title**
As a Test Engineer, I want to see the submit button design changed so that it changes the look and feel of the website.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'https://staging-clixdashboard.tiss.edu/home'
2. Scroll to Submit Button
3. See error
**Expected behavior**
Button Shadow should look cleaner/lighter. Feel free to explore other designs too
**Screenshots**

**Desktop (please complete the following information):**
- OS: [Windows]
- Browser [chrome]
- Version [89.0]
| priority | submit button design change change case in title as a test engineer i want to see the submit button design changed so that it changes the look and feel of website to reproduce steps to reproduce the behavior go to scroll to submit button see error expected behavior button shadow should look cleaner lighter be free to explore other design too screenshots desktop please complete the following information os browser version | 1 |
164,081 | 6,219,218,864 | IssuesEvent | 2017-07-09 11:28:48 | SilphRoad/reports | https://api.github.com/repos/SilphRoad/reports | closed | Blank map, doesn't work iOS 9.3.2 | bug priority:high silph:radio | iOS 9.3.2, iPhone 6S, Safari and Chrome. Testing in fast WiFi. Refreshing doesn't help. There is no zoom UI. Pinch zoom out doesn't work. "Fire a raid beacon" seems to do nothing. Map area is white. | 1.0 | Blank map, doesn't work iOS 9.3.2 - iOS 9.3.2, iPhone 6S, Safari and Chrome. Testing in fast WiFi. Refreshing doesn't help. There is no zoom UI. Pinch zoom out doesn't work. "Fire a raid beacon" seems to do nothing. Map area is white. | priority | blank map doesn t work ios ios iphone safari and chrome testing in fast wifi refreshing doesn t help there is no zoom ui pinch zoom out doesn t work fire a raid beacon seems to do nothing map area is white | 1 |
226,242 | 7,511,176,236 | IssuesEvent | 2018-04-11 05:10:06 | CS2103JAN2018-W14-B1/main | https://api.github.com/repos/CS2103JAN2018-W14-B1/main | closed | As a teacher, I want to create a class | Priority.high type.story | ... so that I can group and manage students who are taking the same class
| 1.0 | As a teacher, I want to create a class - ... so that I can group and manage students who are taking the same class
| priority | as a teacher i want to create a class so that i can group and manage students who are taking the same class | 1 |
314,348 | 9,595,748,981 | IssuesEvent | 2019-05-09 16:48:38 | Signbank/Global-signbank | https://api.github.com/repos/Signbank/Global-signbank | reopened | No phonology info in list view on ASL Signbank | ASL high priority | 
While the phonology info is definitely there:

| 1.0 | No phonology info in list view on ASL Signbank - 
While the phonology info is definitely there:

| priority | no phonology info in list view on asl signbank while the phonology info is definitely there | 1 |
458,794 | 13,181,750,990 | IssuesEvent | 2020-08-12 14:46:43 | CDH-Studio/UpSkill | https://api.github.com/repos/CDH-Studio/UpSkill | closed | remove accents in search for schools | Client Request High priority enhancement | **Is your feature request related to a problem? Please describe.**
Several schools have accents on both the english and french list of schools. Some users forget to use accents when searching for a school and consequently can't find the school they are looking for.
Suggestion
Remove the need to consider accents when searching for schools. | 1.0 | remove accents in search for schools - **Is your feature request related to a problem? Please describe.**
Several schools have accents on both the english and french list of schools. Some users forget to use accents when searching for a school and consequently can't find the school they are looking for.
Suggestion
Remove the need to consider accents when searching for schools. | priority | remove accents in search for schools is your feature request related to a problem please describe several schools have accents on both the english and french list of schools some users forget to use accents when searching for a school and consequently can t find the school they are looking for suggestion remove the need to consider accents when searching for schools | 1 |
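Accent-insensitive search of the kind requested in the row above is commonly implemented by Unicode-decomposing each string (NFD) and dropping the combining marks before comparing. The helper names below (`strip_accents`, `matches`) are hypothetical and not taken from the UpSkill codebase:

```python
import unicodedata

def strip_accents(text: str) -> str:
    # Decompose characters (NFD) so accents become separate combining
    # marks, then drop every combining mark.
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

def matches(query: str, name: str) -> bool:
    # Accent- and case-insensitive substring match for school lookup.
    return strip_accents(query).lower() in strip_accents(name).lower()
```

With this, a user typing "ecole" would still find "École Polytechnique".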
139,451 | 5,375,722,418 | IssuesEvent | 2017-02-23 06:16:36 | aodn/aatams | https://api.github.com/repos/aodn/aatams | closed | 'Custom Validation' ERROR when deploying a receiver | bug high priority in progress | Migrated from https://github.com/aodn/aatams-content/issues/60, originally reported by @fjaine.
@xhoenner I tried to deploy two receivers as part of the AATAMS Glenelg installation:
Receiver 101686: deployed 4/8/16 at GL 7 Station
Receiver 101679: deployed 4/8/16 at GL 8 Station
There is no deployment time available for these deployments (not recorded on the day) so I left it as 00:00 UTC.
When creating the deployment, the following error appeared. Any idea what's preventing me from deploying these receivers?

| 1.0 | 'Custom Validation' ERROR when deploying a receiver - Migrated from https://github.com/aodn/aatams-content/issues/60, originally reported by @fjaine.
@xhoenner I tried to deploy two receivers as part of the AATAMS Glenelg installation:
Receiver 101686: deployed 4/8/16 at GL 7 Station
Receiver 101679: deployed 4/8/16 at GL 8 Station
There is no deployment time available for these deployments (not recorded on the day) so I left it as 00:00 UTC.
When creating the deployment, the following error appeared. Any idea what's preventing me from deploying these receivers?

| priority | custom validation error when deploying a receiver migrated from originally reported by fjaine xhoenner i tried to deploy two receivers as part of the aatams glenelg installation receiver deployed at gl station receiver deployed at gl station there is no deployment time available for these deployments not recorded on the day so i left it as utc when creating the deployment the following error appeared any idea what s preventing me from deploying these receivers | 1 |
243,243 | 7,854,652,583 | IssuesEvent | 2018-06-20 21:30:55 | sul-dlss/preservation_catalog | https://api.github.com/repos/sul-dlss/preservation_catalog | closed | (C2M, M2C) make sure cron jobs ran successfully | catalog to moab high priority in progress moab to catalog | we scheduled the running of checks on a regular basis. we should make sure they ran. | 1.0 | (C2M, M2C) make sure cron jobs ran successfully - we scheduled the running of checks on a regular basis. we should make sure they ran. | priority | make sure cron jobs ran successfully we scheduled the running of checks on a regular basis we should make sure they ran | 1 |
559,069 | 16,549,311,747 | IssuesEvent | 2021-05-28 06:31:50 | bryntum/support | https://api.github.com/repos/bryntum/support | closed | Baseline duration is calculated differently than for regular tasks | bug forum high-priority resolved | Reported here: https://www.bryntum.com/forum/viewtopic.php?f=52&t=17267
Duration for baseline is calculated using different rules than for regular tasks:
1. Calendars are not taken into account when showing baselines duration
2. Baselines duration is calculated w/o using conversion ratios defined for the project. So for them 1 day always means 24hrs while the project could define it as 8hrs.
Can be seen in the baselines demo in the gantt. The demo shows tooltips for tasks and baselines and duration there is not consistent.
| 1.0 | Baseline duration is calculated differently than for regular tasks - Reported here: https://www.bryntum.com/forum/viewtopic.php?f=52&t=17267
Duration for baseline is calculated using different rules than for regular tasks:
1. Calendars are not taken into account when showing baselines duration
2. Baselines duration is calculated w/o using conversion ratios defined for the project. So for them 1 day always means 24hrs while the project could define it as 8hrs.
Can be seen in the baselines demo in the gantt. The demo shows tooltips for tasks and baselines and duration there is not consistent.
| priority | baseline duration is calculated differently than for regular tasks reported here duration for baseline is calculated using different rules than for regular tasks calendars are not taken into account when showing baselines duration baselines duration is calculated w o using conversion ratios defined for the project so for them day always means while the project could define it as can be seen in the baselines demo in the gantt the demo shows tooltips for tasks and baselines and duration there is not consistent | 1 |
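The conversion-ratio inconsistency in the row above (a "day" meaning 8 working hours for the project but 24 clock hours for baselines) can be illustrated with a minimal, framework-agnostic sketch; the function name and the `hours_per_day` default below are assumptions, not the Bryntum API:

```python
MS_PER_HOUR = 3_600_000  # milliseconds in one hour

def duration_in_days(duration_ms: int, hours_per_day: float = 8.0) -> float:
    # A project "day" is hours_per_day working hours, not 24 clock hours.
    return duration_ms / (hours_per_day * MS_PER_HOUR)

# The same elapsed time reads differently under the two conventions,
# which is exactly the tooltip mismatch described in the report.
one_working_day_ms = 8 * MS_PER_HOUR
as_project_days = duration_in_days(one_working_day_ms, hours_per_day=8)   # 1.0
as_clock_days = duration_in_days(one_working_day_ms, hours_per_day=24)    # ~0.33
```

The fix is for baselines to run their durations through the same `hours_per_day` ratio (and calendar) as regular tasks, rather than a raw 24-hour division.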
384,267 | 11,386,394,977 | IssuesEvent | 2020-01-29 13:11:59 | luna/ide | https://api.github.com/repos/luna/ide | closed | Adding lints to the codebase. | Category: Performance Change: Non-Breaking Difficulty: Intermediate Priority: High Type: Enhancement | ### Summary
The following lints should be enabled in all of our libs:
```
#![warn(unsafe_code)]
#![warn(missing_copy_implementations)]
#![warn(missing_debug_implementations)]
```
This task is about adding them to all `lib.rs` and fixing the code where necessary. The code which will fail with `unsafe_code` should be fixed by adding `allow(unsafe_code)` directly on the function which uses unsafe code to indicate the intention.
### Value
- Better codebase.
- Higher-quality code.
### Acceptance Criteria & Test Cases
- Passing CI.
| 1.0 | Adding lints to the codebase. - ### Summary
The following lints should be enabled in all of our libs:
```
#![warn(unsafe_code)]
#![warn(missing_copy_implementations)]
#![warn(missing_debug_implementations)]
```
This task is about adding them to all `lib.rs` and fixing the code where necessary. The code which will fail with `unsafe_code` should be fixed by adding `allow(unsafe_code)` directly on the function which uses unsafe code to indicate the intention.
### Value
- Better codebase.
- Higher-quality code.
### Acceptance Criteria & Test Cases
- Passing CI.
| priority | adding lints to the codebase summary the following lints should be enabled in all of our libs this task is about adding them to all lib rs and fixing the code where necessary the code which will fail with unsafe code should be fixed by adding allow unsafe code directly on the function which uses unsafe code to indicate the intention value better codebase higher quality code acceptance criteria test cases passing ci | 1 |
653,949 | 21,631,618,359 | IssuesEvent | 2022-05-05 10:18:17 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Gradient tests failing for `max_unpool2d` | high priority module: autograd module: nn triaged actionable module: correctness (silent) module: pooling | ## 🐛 Bug
While trying to add OpInfo for `max_unpool2d`, the following errors were raised:
Caution: long snippet alert!!
<details>
```python
======================================================================
ERROR: test_fn_grad_nn_functional_max_unpool2d_cpu_float64 (__main__.TestGradientsCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/krshrimali/pytorch/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
raise rte
File "/home/krshrimali/pytorch/torch/testing/_internal/common_device_type.py", line 371, in instantiated_test
result = test(self, **param_kwargs)
File "/home/krshrimali/pytorch/torch/testing/_internal/common_device_type.py", line 737, in test_wrapper
return test(*args, **kwargs)
File "/home/krshrimali/pytorch/test/test_ops.py", line 603, in test_fn_grad
self._grad_test_helper(device, dtype, op, op.get_op())
File "/home/krshrimali/pytorch/test/test_ops.py", line 586, in _grad_test_helper
return self._check_helper(device, dtype, op, variant, 'gradcheck', check_forward_ad=check_forward_ad,
File "/home/krshrimali/pytorch/test/test_ops.py", line 559, in _check_helper
self.assertTrue(gradcheck(fn, gradcheck_args,
File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 2718, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1273, in gradcheck
return _gradcheck_helper(**args)
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1286, in _gradcheck_helper
_gradcheck_real_imag(gradcheck_fn, func, func_out, tupled_inputs, outputs, eps,
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 947, in _gradcheck_real_imag
gradcheck_fn(func, func_out, tupled_inputs, outputs, eps,
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1168, in _fast_gradcheck
_check_analytical_numerical_equal(analytical_vJu, numerical_vJu, complex_indices,
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1148, in _check_analytical_numerical_equal
raise GradcheckError(_get_notallclose_msg(a, n, j, i, complex_indices, test_imag, is_forward_ad) + jacobians_str)
torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor(0.2397)
analytical:tensor(0.8019)
The above quantities relating the numerical and analytical jacobians are computed
in fast mode. See: https://github.com/pytorch/pytorch/issues/53876 for more background
about fast mode. Below, we recompute numerical and analytical jacobians in slow mode:
Numerical:
tensor([[1.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
...,
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 1.0000]])
Analytical:
tensor([[1., 0., 0., ..., 0., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 1.],
[0., 0., 0., ..., 0., 0., 1.]])
The max per-element difference (slow mode) is: 1.0.
======================================================================
ERROR: test_fn_gradgrad_nn_functional_max_unpool2d_cpu_float64 (__main__.TestGradientsCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/krshrimali/pytorch/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
raise rte
File "/home/krshrimali/pytorch/torch/testing/_internal/common_device_type.py", line 371, in instantiated_test
result = test(self, **param_kwargs)
File "/home/krshrimali/pytorch/torch/testing/_internal/common_device_type.py", line 737, in test_wrapper
return test(*args, **kwargs)
File "/home/krshrimali/pytorch/test/test_ops.py", line 625, in test_fn_gradgrad
self._gradgrad_test_helper(device, dtype, op, op.get_op())
File "/home/krshrimali/pytorch/test/test_ops.py", line 591, in _gradgrad_test_helper
return self._check_helper(device, dtype, op, variant, 'gradgradcheck')
File "/home/krshrimali/pytorch/test/test_ops.py", line 569, in _check_helper
self.assertTrue(gradgradcheck(fn, gradcheck_args,
File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 2738, in gradgradcheck
return torch.autograd.gradgradcheck(fn, inputs, grad_outputs, **kwargs)
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1402, in gradgradcheck
return gradcheck(
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1273, in gradcheck
return _gradcheck_helper(**args)
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1286, in _gradcheck_helper
_gradcheck_real_imag(gradcheck_fn, func, func_out, tupled_inputs, outputs, eps,
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 947, in _gradcheck_real_imag
gradcheck_fn(func, func_out, tupled_inputs, outputs, eps,
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1168, in _fast_gradcheck
_check_analytical_numerical_equal(analytical_vJu, numerical_vJu, complex_indices,
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1148, in _check_analytical_numerical_equal
raise GradcheckError(_get_notallclose_msg(a, n, j, i, complex_indices, test_imag, is_forward_ad) + jacobians_str)
torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 1,
numerical:tensor(0.7871)
analytical:tensor(0.2123)
The above quantities relating the numerical and analytical jacobians are computed
in fast mode. See: https://github.com/pytorch/pytorch/issues/53876 for more background
about fast mode. Below, we recompute numerical and analytical jacobians in slow mode:
Numerical:
tensor([[1.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 1.0000, 1.0000, ..., 0.0000, 0.0000, 0.0000],
...,
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 1.0000, 1.0000]])
Analytical:
tensor([[1., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 1.]])
The max per-element difference (slow mode) is: 1.0000000000000009.
======================================================================
ERROR: test_fn_grad_nn_functional_max_unpool2d_cuda_float64 (__main__.TestGradientsCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 1422, in wrapper
method(*args, **kwargs)
File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 1422, in wrapper
method(*args, **kwargs)
File "/home/krshrimali/pytorch/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
raise rte
File "/home/krshrimali/pytorch/torch/testing/_internal/common_device_type.py", line 371, in instantiated_test
result = test(self, **param_kwargs)
File "/home/krshrimali/pytorch/torch/testing/_internal/common_device_type.py", line 737, in test_wrapper
return test(*args, **kwargs)
File "/home/krshrimali/pytorch/test/test_ops.py", line 603, in test_fn_grad
self._grad_test_helper(device, dtype, op, op.get_op())
File "/home/krshrimali/pytorch/test/test_ops.py", line 586, in _grad_test_helper
return self._check_helper(device, dtype, op, variant, 'gradcheck', check_forward_ad=check_forward_ad,
File "/home/krshrimali/pytorch/test/test_ops.py", line 559, in _check_helper
self.assertTrue(gradcheck(fn, gradcheck_args,
File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 2718, in gradcheck
return torch.autograd.gradcheck(fn, inputs, **kwargs)
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1273, in gradcheck
return _gradcheck_helper(**args)
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1286, in _gradcheck_helper
_gradcheck_real_imag(gradcheck_fn, func, func_out, tupled_inputs, outputs, eps,
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 947, in _gradcheck_real_imag
gradcheck_fn(func, func_out, tupled_inputs, outputs, eps,
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1168, in _fast_gradcheck
_check_analytical_numerical_equal(analytical_vJu, numerical_vJu, complex_indices,
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1148, in _check_analytical_numerical_equal
raise GradcheckError(_get_notallclose_msg(a, n, j, i, complex_indices, test_imag, is_forward_ad) + jacobians_str)
torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor(0.3207, device='cuda:0')
analytical:tensor(0.9777, device='cuda:0')
The above quantities relating the numerical and analytical jacobians are computed
in fast mode. See: https://github.com/pytorch/pytorch/issues/53876 for more background
about fast mode. Below, we recompute numerical and analytical jacobians in slow mode:
Numerical:
tensor([[1.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 1.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
...,
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 1.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
device='cuda:0')
Analytical:
tensor([[1., 0., 0., ..., 0., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.],
[0., 0., 1., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 1., 0.],
[0., 0., 0., ..., 0., 1., 0.]], device='cuda:0')
The max per-element difference (slow mode) is: 1.0.
======================================================================
ERROR: test_fn_gradgrad_nn_functional_max_unpool2d_cuda_float64 (__main__.TestGradientsCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 1422, in wrapper
method(*args, **kwargs)
File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 1422, in wrapper
method(*args, **kwargs)
File "/home/krshrimali/pytorch/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test
raise rte
File "/home/krshrimali/pytorch/torch/testing/_internal/common_device_type.py", line 371, in instantiated_test
result = test(self, **param_kwargs)
File "/home/krshrimali/pytorch/torch/testing/_internal/common_device_type.py", line 737, in test_wrapper
return test(*args, **kwargs)
File "/home/krshrimali/pytorch/test/test_ops.py", line 625, in test_fn_gradgrad
self._gradgrad_test_helper(device, dtype, op, op.get_op())
File "/home/krshrimali/pytorch/test/test_ops.py", line 591, in _gradgrad_test_helper
return self._check_helper(device, dtype, op, variant, 'gradgradcheck')
File "/home/krshrimali/pytorch/test/test_ops.py", line 569, in _check_helper
self.assertTrue(gradgradcheck(fn, gradcheck_args,
File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 2738, in gradgradcheck
return torch.autograd.gradgradcheck(fn, inputs, grad_outputs, **kwargs)
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1402, in gradgradcheck
return gradcheck(
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1273, in gradcheck
return _gradcheck_helper(**args)
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1286, in _gradcheck_helper
_gradcheck_real_imag(gradcheck_fn, func, func_out, tupled_inputs, outputs, eps,
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 947, in _gradcheck_real_imag
gradcheck_fn(func, func_out, tupled_inputs, outputs, eps,
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1168, in _fast_gradcheck
_check_analytical_numerical_equal(analytical_vJu, numerical_vJu, complex_indices,
File "/home/krshrimali/pytorch/torch/autograd/gradcheck.py", line 1148, in _check_analytical_numerical_equal
raise GradcheckError(_get_notallclose_msg(a, n, j, i, complex_indices, test_imag, is_forward_ad) + jacobians_str)
torch.autograd.gradcheck.GradcheckError: Jacobian mismatch for output 0 with respect to input 1,
numerical:tensor(1.0067, device='cuda:0')
analytical:tensor(0.3634, device='cuda:0')
The above quantities relating the numerical and analytical jacobians are computed
in fast mode. See: https://github.com/pytorch/pytorch/issues/53876 for more background
about fast mode. Below, we recompute numerical and analytical jacobians in slow mode:
Numerical:
tensor([[1.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 1.0000, 1.0000, ..., 0.0000, 0.0000, 0.0000],
...,
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 1.0000, 1.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],
device='cuda:0')
Analytical:
tensor([[1., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 1., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 1., 0.],
[0., 0., 0., ..., 0., 0., 0.]], device='cuda:0')
The max per-element difference (slow mode) is: 1.0000000000000009.
```
</details>
## To Reproduce
Steps to reproduce the behavior:
1. Check out the branch: https://github.com/krshrimali/pytorch/tree/origin/max_unpoolNd
2. Run the OpInfo tests after removing the skips for `max_unpool2d`: `python3 test/test_ops.py -k max_unpool2d`
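For readers unfamiliar with what these tests are checking, here is a minimal, torch-free sketch of the comparison gradcheck performs (the function and tolerance below are illustrative assumptions, not PyTorch's actual implementation): an analytical derivative from a hand-written backward formula is compared against a central finite-difference estimate, and a "Jacobian mismatch" error like the ones above means the two disagree at the sampled point.

```python
# Torch-free sketch (illustrative only) of the gradcheck idea: compare the
# "analytical" derivative from a backward formula with a "numerical"
# central finite-difference estimate of the same derivative.

def numerical_grad(f, x, eps=1e-6):
    # The "numerical" Jacobian entry: central finite difference.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def f(x):
    return x * x

def analytical_grad(x):
    # The "analytical" side: what a backward formula would return (d/dx x^2 = 2x).
    return 2 * x

x = 3.0
num = numerical_grad(f, x)
ana = analytical_grad(x)
# gradcheck reports a "Jacobian mismatch" when entries like these disagree
# beyond tolerance; here they agree.
print(abs(num - ana) < 1e-4)  # prints True
```

A non-differentiable point (for example, a tie in a max-style index selection) breaks this agreement, which is one common way pooling/unpooling ops end up failing gradcheck even when the backward formula is "correct" almost everywhere.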
## Expected behavior
The tests should pass.
## Additional context
Please see #67328 and https://github.com/pytorch/pytorch/pull/67328 for the discussion.
cc: @albanD @nikitaved
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @albanD @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @mruberry @walterddr
236,188 | 7,747,304,596 | IssuesEvent | 2018-05-30 02:29:42 | BuiltBrokenModding/Atomic-Science | https://api.github.com/repos/BuiltBrokenModding/Atomic-Science | closed | [1.7] Update processing machines | high priority | Update machines used to turn ore blocks into reactor fuel
- [x] Build flow chart of recipes from old code
- [x] Build designs of usage from old code
- [x] Redesign for usability
- [x] Implement 'Centrifuge' Logic
- [x] Implement 'Nuclear Boiler' Logic
- [x] Implement 'Chemical Extractor' Logic
| 1.0 | [1.7] Update processing machines - Update machines used to turn ore blocks into reactor fuel
- [x] Build flow chart of recipes from old code
- [x] Build designs of usage from old code
- [x] Redesign for usability
- [x] Implement 'Centrifuge' Logic
- [x] Implement 'Nuclear Boiler' Logic
- [x] Implement 'Chemical Extractor' Logic
 | priority | update processing machines update machines used to turn ore blocks into reactor fuel build flow chart of recipes from old code build designs of usage from old code redesign for usability implement centrifuge logic implement nuclear boiler logic implement chemical extractor logic | 1
55,327 | 3,072,826,766 | IssuesEvent | 2015-08-19 18:50:35 | biocore/emperor | https://api.github.com/repos/biocore/emperor | opened | Biplot vectors can't be recolored | bug GUI high-priority | There's no way to change the color of the vector lines when generating a biplot. | 1.0 | Biplot vectors can't be recolored - There's no way to change the color of the vector lines when generating a biplot. | priority | biplot vectors can t be recolored there s no way to change the color of the vector lines when generating a biplot | 1 |
368,113 | 10,866,347,011 | IssuesEvent | 2019-11-14 21:02:13 | canonical-web-and-design/ubuntu.com | https://api.github.com/repos/canonical-web-and-design/ubuntu.com | closed | 500: Server error when loading Ubuntu blog feed | Priority: High | STR:
1. Open Ubuntu blog feed https://ubuntu.com/blog/feed.
1. Get 500.

At this moment it blocks building of https://github.com/UbuntuCZ/ubuntu-cz/, as we fetch the RSS feed there to display on the main page. It started between 21:20 and 23:20 UTC yesterday (Nov 13). | 1.0 | 500: Server error when loading Ubuntu blog feed - STR:
1. Open Ubuntu blog feed https://ubuntu.com/blog/feed.
1. Get 500.

At this moment it blocks building of https://github.com/UbuntuCZ/ubuntu-cz/, as we fetch the RSS feed there to display on the main page. It started between 21:20 and 23:20 UTC yesterday (Nov 13). | priority | server error when loading ubuntu blog feed str open ubuntu blog feed get at this moment it blocks building of as we fetch the rss feed there to display on the main page it started between and utc yesterday nov | 1
537,650 | 15,732,541,642 | IssuesEvent | 2021-03-29 18:25:28 | DDMAL/mei-mapping-tool | https://api.github.com/repos/DDMAL/mei-mapping-tool | closed | Cannot find module 'exceljs' | bug high priority | When trying to upload the example XLSX snippet provided, this error pops up, and the resource is not successfully uploaded.
The whole error is:
```
Cannot find module 'exceljs' Require stack: - /var/lib/cress/routes/router.js - /var/lib/cress/app.js - /var/lib/cress/bin/www
``` | 1.0 | Cannot find module 'exceljs' - When trying to upload the example XLSX snippet provided, this error pops up, and the resource is not successfully uploaded.
The whole error is:
```
Cannot find module 'exceljs' Require stack: - /var/lib/cress/routes/router.js - /var/lib/cress/app.js - /var/lib/cress/bin/www
``` | priority | cannot find module exceljs when trying to upload the example xlsx snippet provided this error pops up and the resource is not successfully uploaded the whole error is cannot find module exceljs require stack var lib cress routes router js var lib cress app js var lib cress bin www | 1 |
734,975 | 25,372,940,184 | IssuesEvent | 2022-11-21 11:59:13 | CAFECA-IO/TideBitEx | https://api.github.com/repos/CAFECA-IO/TideBitEx | closed | [BUG] User mettleboy@gmail.com reports their order is not visible in the market view, but assets remain locked | bug 8 high priority to be verified | Incident investigation report:
- mettleboy@gmail.com's actions over the past seven days (11/08 - 11/15)
- mettleboy@gmail.com's balance change events over the past seven days (11/08 - 11/15)
- mettleboy@gmail.com's order book state as of 11/14 23:59:59
- mettleboy@gmail.com's latest account balance state
- OKX API response analysis
- Root cause analysis of the anomaly
- Fix procedure for known abnormal account states
- System error troubleshooting procedure | 1.0 | [BUG] User mettleboy@gmail.com reports their order is not visible in the market view, but assets remain locked - Incident investigation report:
- mettleboy@gmail.com's actions over the past seven days (11/08 - 11/15)
- mettleboy@gmail.com's balance change events over the past seven days (11/08 - 11/15)
- mettleboy@gmail.com's order book state as of 11/14 23:59:59
- mettleboy@gmail.com's latest account balance state
- OKX API response analysis
- Root cause analysis of the anomaly
- Fix procedure for known abnormal account states
- System error troubleshooting procedure | priority | user mettleboy gmail com reports order not visible in market view but assets locked incident investigation report mettleboy gmail com actions within seven days ( ) mettleboy gmail com balance change events within seven days ( ) mettleboy gmail com order book state as of mettleboy gmail com latest account balance state okx api response analysis root cause analysis of the anomaly fix procedure for known abnormal account states system error troubleshooting procedure | 1
202,494 | 7,048,472,849 | IssuesEvent | 2018-01-02 17:49:33 | 18F/openFEC | https://api.github.com/repos/18F/openFEC | closed | Investigate delay in MUR/AO refresh documents appearing | High priority | We're having an issue where after AO's and MURs are refreshed, newly published documents are appearing intermittently for approximately 40 minutes.
- [x] Document ID's for legal docs, which come from postgres but seem to be changing over time: `select document_id, category, filename from aouser.document where ao_id = 4530;`
- [x] Figure out why the document ID's are changing
- [x] Research logs to see if there's anything helpful there (See PR https://github.com/18F/openFEC/pull/2821)
- [ ] Try to replicate the issue
Example of ID's changing:
12/14/2017 | Comment on Agenda Document No. 17-59-B by Campaign Legal Center
*was* https://www.fec.gov/files/legal/aos/83657.pdf (ID 83657) is now https://www.fec.gov/files/legal/aos/83664.pdf (ID 83664)
## Before

## After

| 1.0 | Investigate delay in MUR/AO refresh documents appearing - We're having an issue where after AO's and MURs are refreshed, newly published documents are appearing intermittently for approximately 40 minutes.
- [x] Document ID's for legal docs, which come from postgres but seem to be changing over time: `select document_id, category, filename from aouser.document where ao_id = 4530;`
- [x] Figure out why the document ID's are changing
- [x] Research logs to see if there's anything helpful there (See PR https://github.com/18F/openFEC/pull/2821)
- [ ] Try to replicate the issue
Example of ID's changing:
12/14/2017 | Comment on Agenda Document No. 17-59-B by Campaign Legal Center
*was* https://www.fec.gov/files/legal/aos/83657.pdf (ID 83657) is now https://www.fec.gov/files/legal/aos/83664.pdf (ID 83664)
## Before

## After

| priority | investigate delay in mur ao refresh documents appearing we re having an issue where after ao s and murs are refreshed newly published documents are appearing intermittently for approximately minutes document id s for legal docs which come from postgres but seem to be changing over time select document id category filename from aouser document where ao id figure out why the document id s are changing research logs to see if there s anything helpful there see pr try to replicate the issue example of id s changing comment on agenda document no b by campaign legal center was id is now id before after | 1 |
423,275 | 12,293,665,392 | IssuesEvent | 2020-05-10 20:02:35 | Normal-OJ/pyShare-be | https://api.github.com/repos/Normal-OJ/pyShare-be | closed | GET 500: when get student statistics | HIGH PRIORITY bug | Both `replies` and `liked` should only count top-level comments, i.e. `c.depth == 0`
```
[2020-05-10 19:49:39 +0000] [11] [ERROR] Exception on /course/course_108-1/statistic [GET]
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/app/model/utils/request.py", line 77, in real_wrapper
return inner_wrapper(*args, **ks)
File "/app/mongo/utils.py", line 52, in wrapper
return func(*args, **ks)
File "/app/model/utils/request.py", line 72, in inner_wrapper
return func(*args, **ks)
File "/app/model/course.py", line 43, in statistic
s = u.statistic()
File "/app/mongo/user.py", line 206, in statistic
ret['replies'] = [{
File "/app/mongo/user.py", line 207, in <listcomp>
'course': c.problem.course.name,
AttributeError: 'NoneType' object has no attribute 'course'
``` | 1.0 | GET 500: when get student statistics - Both `replies` and `liked` should only count top-level comments, i.e. `c.depth == 0`
```
[2020-05-10 19:49:39 +0000] [11] [ERROR] Exception on /course/course_108-1/statistic [GET]
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/app/model/utils/request.py", line 77, in real_wrapper
return inner_wrapper(*args, **ks)
File "/app/mongo/utils.py", line 52, in wrapper
return func(*args, **ks)
File "/app/model/utils/request.py", line 72, in inner_wrapper
return func(*args, **ks)
File "/app/model/course.py", line 43, in statistic
s = u.statistic()
File "/app/mongo/user.py", line 206, in statistic
ret['replies'] = [{
File "/app/mongo/user.py", line 207, in <listcomp>
'course': c.problem.course.name,
AttributeError: 'NoneType' object has no attribute 'course'
``` | priority | get when get student statistics replies and liked should only count top level comments i e c depth exception on course course statistic traceback most recent call last file usr local lib site packages flask app py line in wsgi app response self full dispatch request file usr local lib site packages flask app py line in full dispatch request rv self handle user exception e file usr local lib site packages flask app py line in handle user exception reraise exc type exc value tb file usr local lib site packages flask compat py line in reraise raise value file usr local lib site packages flask app py line in full dispatch request rv self dispatch request file usr local lib site packages flask app py line in dispatch request return self view functions req view args file app model utils request py line in real wrapper return inner wrapper args ks file app mongo utils py line in wrapper return func args ks file app model utils request py line in inner wrapper return func args ks file app model course py line in statistic s u statistic file app mongo user py line in statistic ret file app mongo user py line in course c problem course name attributeerror nonetype object has no attribute course | 1
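The fix implied by this row — only count top-level comments (`c.depth == 0`) and guard against comments whose problem no longer resolves to a course — can be sketched as follows. This is a minimal illustration with hypothetical stand-in classes, not the actual pyShare mongoengine models:

```python
# Hypothetical stand-ins for the pyShare documents (assumed shapes, for illustration only).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Course:
    name: str

@dataclass
class Problem:
    course: Optional[Course]

@dataclass
class Comment:
    depth: int                 # 0 == top-level comment, >0 == reply to a comment
    problem: Optional[Problem]

def reply_statistics(comments: List[Comment]) -> List[dict]:
    """Collect per-course statistics over top-level comments only.

    Skipping comments whose problem or course is missing avoids the
    "'NoneType' object has no attribute 'course'" error in the traceback.
    """
    stats = []
    for c in comments:
        if c.depth != 0:
            continue           # ignore nested replies
        if c.problem is None or c.problem.course is None:
            continue           # orphaned comment: its problem/course was deleted
        stats.append({'course': c.problem.course.name})
    return stats

comments = [
    Comment(depth=0, problem=Problem(course=Course(name='course_108-1'))),
    Comment(depth=1, problem=Problem(course=Course(name='course_108-1'))),  # a reply
    Comment(depth=0, problem=Problem(course=None)),                         # orphan
]
print(reply_statistics(comments))  # → [{'course': 'course_108-1'}]
```

The same two guards, applied inside the list comprehension at `mongo/user.py` line 206, would both restrict the result to top-level comments and prevent the 500.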
152,031 | 5,831,858,929 | IssuesEvent | 2017-05-08 20:23:50 | mreishman/Log-Hog | https://api.github.com/repos/mreishman/Log-Hog | closed | Show file permissions next to files | enhancement Priority - 2 - High | Show file permissions next to files in settings main page. | 1.0 | Show file permissions next to files - Show file permissions next to files in settings main page. | priority | show file permissions next to files show file permissions next to files in settings main page | 1 |
111,202 | 4,466,808,466 | IssuesEvent | 2016-08-25 00:37:33 | AtlasOfLivingAustralia/spatial-portal | https://api.github.com/repos/AtlasOfLivingAustralia/spatial-portal | closed | Add new Add to Map | Area option - Contextual layer classes | priority-high | Fiddling with the Phylogenetic diversity tool, it would make more sense (as we did with merge areas), to add new areas defined by "Contextual layer classes" rather than having this specific option limited to Tools | Phylogenetic diversity? This option would tap the 'layers table' data to limit choices to appropriate contextual layers.
The phylo tool would then present this 'area' as with any previously defined (or on the fly) area. | 1.0 | Add new Add to Map | Area option - Contextual layer classes - Fiddling with the Phylogenetic diversity tool, it would make more sense (as we did with merge areas), to add new areas defined by "Contextual layer classes" rather than having this specific option limited to Tools | Phylogenetic diversity? This option would tap the 'layers table' data to limit choices to appropriate contextual layers.
The phylo tool would then present this 'area' as with any previously defined (or on the fly) area. | priority | add new add to map area option contextual layer classes fiddling with the phylogenetic diversity tool it would make more sense as we did with merge areas to add new areas defined by contextual layer classes rather than having this specific option limited to tools phylogenetic diversity this option would tap the layers table data to limit choices to appropriate contextual layers the phylo tool would then present this area as with any previously defined or on the fly area | 1 |
270,485 | 8,461,037,231 | IssuesEvent | 2018-10-22 20:35:20 | Angry-Pixel/The-Betweenlands | https://api.github.com/repos/Angry-Pixel/The-Betweenlands | closed | Lurker Fix | 1.12 Bug High Priority | The waters of the BL are supposed to be quite hostile, but with lurkers constantly killing all the Anglers, it becomes fairly easy to navigate them. Perhaps Lurkers shouldn't *always* attack Anglers. Maybe a 50/50 chance or less. Only when they are hungry ;)
Also, Lurkers seem to have a hard time hitting the player, even when their head is literally in the player's legs. And perhaps they should hit a little bit harder as well? Not sure about that, but at least they should be better at hitting. | 1.0 | Lurker Fix - The waters of the BL are supposed to be quite hostile, but with lurkers constantly killing all the Anglers, it becomes fairly easy to navigate them. Perhaps Lurkers shouldn't *always* attack Anglers. Maybe a 50/50 chance or less. Only when they are hungry ;)
Also, Lurkers seem to have a hard time hitting the player, even when their head is literally in the player's legs. And perhaps they should hit a little bit harder as well? Not sure about that, but at least they should be better at hitting. | priority | lurker fix the waters of the bl are supposed to be quite hostile but with lurkers constantly killing all the anglers it becomes fairly easy to navigate them perhaps lurkers shouldn t always attack anglers maybe a chance or less only when they are hungry also lurkers seem to have a hard time hitting the player even when their head is literally in the player s legs and perhaps they should hit a little bit harder as well not sure about that but at least they should be better at hitting | 1
822,160 | 30,855,650,373 | IssuesEvent | 2023-08-02 20:21:41 | janus-idp/backstage-showcase | https://api.github.com/repos/janus-idp/backstage-showcase | closed | Add Permission framework | priority/high kind/feature | ### Acceptance criteria
What are you trying to do, need? What are the acceptance criteria?
We need to use the [Permission framework](https://backstage.io/docs/permissions/overview) included in Backstage.
This issue is to investigate what can be done with this feature and what are the limitations.
Here are some example scenario:
- How a user can view only their components from the software catalog?
- How can only admins access plugins? i.e., in the announcement plugin or branding plugin, only admins should be able to update the content
- How to use that data from the Keycloak plugin? Do we have everything needed for authorization?
| 1.0 | Add Permission framework - ### Acceptance criteria
What are you trying to do, need? What are the acceptance criteria?
We need to use the [Permission framework](https://backstage.io/docs/permissions/overview) included in Backstage.
This issue is to investigate what can be done with this feature and what are the limitations.
Here are some example scenario:
- How a user can view only their components from the software catalog?
- How can only admins access plugins? i.e., in the announcement plugin or branding plugin, only admins should be able to update the content
- How to use that data from the Keycloak plugin? Do we have everything needed for authorization?
| priority | add permission framework acceptance criteria what are you trying to do need what are the acceptance criteria we need to use the included in backstage this issue is to investigate what can be done with this feature and what are the limitations here are some example scenario how a user can view only their components from the software catalog how only admin can access plugins ie in the announcement plugin or branding plugin only admins should be able to update the content how to use that data from the keycloak plugin do we have everything needed for authorization | 1 |
94,612 | 3,929,803,801 | IssuesEvent | 2016-04-25 02:53:49 | phetsims/circuit-construction-kit-basics | https://api.github.com/repos/phetsims/circuit-construction-kit-basics | closed | Black box "build" mode internal component doesn't connect to external component from "investigate" | priority:2-high | Black box "build" mode internal component doesn't connect to external component from "investigate"

| 1.0 | Black box "build" mode internal component doesn't connect to external component from "investigate" - Black box "build" mode internal component doesn't connect to external component from "investigate"

| priority | black box build mode internal component doesn t connect to external component from investigate black box build mode internal component doesn t connect to external component from investigate | 1 |