Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
393,140 | 11,610,793,681 | IssuesEvent | 2020-02-26 04:20:10 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | "Enable MaxUserLimit For SCIM" option not working in IS 5.10.0 Alpha | Priority/Highest Severity/Critical Type/Bug | In the secondary user store configurations, we have an option to limit the results returned from the user core ("Maximum User List Length" and "Maximum Role List Length").
However, since in SCIM Group operations we need the complete group to do manipulations such as member addition and removal, we have provided another option in secondary user stores to avoid user listing limiting using "Enable MaxUserLimit For SCIM" property.
But in the 5.10.0 alpha pack, this option doesn't seem to work.
To reproduce,
1. Add a secondary user store. Set the "Maximum User List Length" to 10. Keep the "Enable MaxUserLimit For SCIM" option unticked.
2. Add a Group to the secondary user store using SCIM
3. Add 11 members to the group.
4. Get the Group using SCIM /Groups API
5. You will only get 10 members in the response.
The expected behaviour is that the management console should list only 10 members for the role. But the SCIM API should list all 11 members. | 1.0 | "Enable MaxUserLimit For SCIM" option not working in IS 5.10.0 Alpha - In the secondary user store configurations, we have an option to limit the results returned from the user core ("Maximum User List Length" and "Maximum Role List Length").
However, since in SCIM Group operations we need the complete group to do manipulations such as member addition and removal, we have provided another option in secondary user stores to avoid user listing limiting using "Enable MaxUserLimit For SCIM" property.
But in the 5.10.0 alpha pack, this option doesn't seem to work.
To reproduce,
1. Add a secondary user store. Set the "Maximum User List Length" to 10. Keep the "Enable MaxUserLimit For SCIM" option unticked.
2. Add a Group to the secondary user store using SCIM
3. Add 11 members to the group.
4. Get the Group using SCIM /Groups API
5. You will only get 10 members in the response.
The expected behaviour is that the management console should list only 10 members for the role. But the SCIM API should list all 11 members. | priority | enable maxuserlimit for scim option not working in is alpha in the secondary user store configurations we have an option to limit the results returned from the user core maximum user list length and maximum role list length however since in scim group operations we need the complete group to do manipulations such as member addition and removal we have provided another option in secondary user stores to avoid user listing limiting using enable maxuserlimit for scim property but in the alpha pack this option doesn t seem to work to reproduce add a secondary user store set the maximum user list length to keep the enable maxuserlimit for scim option unticked add a group to the secondary user store using scim add members to the group get the group using scim groups api you will only get members in the response the expected behaviour is that the management console should list only members for the role but the scim api should list all members | 1 |
473,915 | 13,649,310,257 | IssuesEvent | 2020-09-26 13:51:18 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Google Analytics is not firing when GTM is enabled. | NEXT UPDATE Urgent [Priority: HIGH] bug | Google Analytics is not firing when GTM is enabled.
https://secure.helpscout.net/conversation/1254114299/148430?folderId=3953286 | 1.0 | Google Analytics is not firing when GTM is enabled. - Google Analytics is not firing when GTM is enabled.
https://secure.helpscout.net/conversation/1254114299/148430?folderId=3953286 | priority | google analytics is not firing when gtm is enabled google analytics is not firing when gtm is enabled | 1 |
514,236 | 14,936,067,571 | IssuesEvent | 2021-01-25 12:52:59 | bounswe/bounswe2020group3 | https://api.github.com/repos/bounswe/bounswe2020group3 | closed | [Front-End] [Bug] Deleting comments does not work. | Frontend Priority: High Status: In Progress Type: Bug | * **Project: FRONTEND**
* **This is a: BUG REPORT**
* **Description of the issue**
Deleting comments doesn't work because the api call is implemented incorrectly.
* **For feature requests: Expected functionality of the requested feature**
Change the request so that deleting comments will work.
* **Deadline for resolution:**
25.01.2020 | 1.0 | [Front-End] [Bug] Deleting comments does not work. - * **Project: FRONTEND**
* **This is a: BUG REPORT**
* **Description of the issue**
Deleting comments doesn't work because the api call is implemented incorrectly.
* **For feature requests: Expected functionality of the requested feature**
Change the request so that deleting comments will work.
* **Deadline for resolution:**
25.01.2020 | priority | deleting comments does not work project frontend this is a bug report description of the issue deleting comments doesn t work because the api call is implemented incorrectly for feature requests expected functionality of the requested feature change the request so that deleting comments will work deadline for resolution | 1 |
658,968 | 21,913,766,728 | IssuesEvent | 2022-05-21 13:32:09 | SELab-2/OSOC-1 | https://api.github.com/repos/SELab-2/OSOC-1 | closed | Realtime | enhancement question high priority | At the moment our application is not real time. This issue proposes a couple of possible solutions.
> Het doel van dit project is om een selectietool te ontwikkelen waarin OSOC medewerkers in **real time** met elkaar kunnen samenwerken om kandidaten te selecteren en aan teams toe te wijzen.
To be clear, this means that only certain parts of the application need to be real time. Only the students and projects pages will have to support real time, pages like the users page and the editions page don't need this. When a OSOC user changes something to a project or student, this change should be visible after a short delay to all other OSOC users on the same page (without them having to refresh of course). What follows are a couple of options to achieve this:
The simplest solution seems to be polling. Polling in our case would simply mean calling the backend periodically to check if any changes were made, and updating the view if this is the case. For example, this could look something like this:
```
(Alice just tabbed to the students page. Assume the backend is polled every two seconds).
12:00:00 Alice - GET /api/testEdition/students (Initialize the page)
12:00:02 Alice - GET /api/testEdition/students (Any changes? No, the data is discarded)
12:00:04 Alice - GET /api/testEdition/students (Any changes? No, the data is discarded)
12:00:05 Bob - POST /api/testEdition/students/testStudentId/status (Bob updates the status of a student)
12:00:06 Alice - GET /api/testEdition/students (Any changes? Yes, update the student with testStudentId)
12:00:08 Alice - GET /api/testEdition/students (Any changes? No, the data is discarded)
...
```
Pros:
- Does not require changing the backend at all
- Comparatively simple to implement
Cons:
- Wastes a lot of resources, therefore it does not scale well
- Essentially the equivalent of hitting the refresh button every X seconds
A more sophisticated solution is [WebSocket](https://en.wikipedia.org/wiki/WebSocket). WebSocket is different from HTTP, as it allows for two-way (full-duplex) communication over a single TCP connection, where HTTP only allows for one-way communication. In other words, this would allow the backend to tell the frontend when a change was made by sending an event. The example from above could then look like this:
```
(Alice just tabbed to the students page).
12:00:00 Alice - GET /api/testEdition/students (Initialize the page)
12:00:05 Bob - Sends a WebSocket message to the server to update the status of a student
12:00:05 Server - Broadcasts that the student with testStudentId was modified
12:00:05 Alice - Receives the broadcasted message from the server and updates the student
```
Pros:
- Scales well, as there is no wasted traffic
- WebSockets are supported by Spring (see [here](https://spring.io/guides/gs/messaging-stomp-websocket/#initial) for example)
Cons:
- Would require changes to the backend
- Comparatively difficult to implement
- Swagger/OpenAPI does not support Websockets (see [here](https://stackoverflow.com/questions/38186483/describe-websocket-api-via-swagger) or [here](https://stackoverflow.com/questions/33146339/in-swagger-is-it-possible-to-create-apis-for-websockets)), so we'll have to find another way to document this part of the API.
A last solution would be to use [Server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events). The difference between SSE and WebSockets is explained [here](https://stackoverflow.com/questions/5195452/websockets-vs-server-sent-events-eventsource). I did not look further into this, as it seems that SSE's only allow communication from the server to the client, but we could also go for this hacky solution:
>Chat is perfectly doable with SSE – you can use regular POST to send messages to the server.
Personally, I think going for WebSockets/SSE's could be [premature optimization](https://stackify.com/premature-optimization-evil/), and that we'd be better off trying polling first. I'd like to hear your thoughts on this though. | 1.0 | Realtime - At the moment our application is not real time. This issue proposes a couple of possible solutions.
> Het doel van dit project is om een selectietool te ontwikkelen waarin OSOC medewerkers in **real time** met elkaar kunnen samenwerken om kandidaten te selecteren en aan teams toe te wijzen.
To be clear, this means that only certain parts of the application need to be real time. Only the students and projects pages will have to support real time, pages like the users page and the editions page don't need this. When a OSOC user changes something to a project or student, this change should be visible after a short delay to all other OSOC users on the same page (without them having to refresh of course). What follows are a couple of options to achieve this:
The simplest solution seems to be polling. Polling in our case would simply mean calling the backend periodically to check if any changes were made, and updating the view if this is the case. For example, this could look something like this:
```
(Alice just tabbed to the students page. Assume the backend is polled every two seconds).
12:00:00 Alice - GET /api/testEdition/students (Initialize the page)
12:00:02 Alice - GET /api/testEdition/students (Any changes? No, the data is discarded)
12:00:04 Alice - GET /api/testEdition/students (Any changes? No, the data is discarded)
12:00:05 Bob - POST /api/testEdition/students/testStudentId/status (Bob updates the status of a student)
12:00:06 Alice - GET /api/testEdition/students (Any changes? Yes, update the student with testStudentId)
12:00:08 Alice - GET /api/testEdition/students (Any changes? No, the data is discarded)
...
```
Pros:
- Does not require changing the backend at all
- Comparatively simple to implement
Cons:
- Wastes a lot of resources, therefore it does not scale well
- Essentially the equivalent of hitting the refresh button every X seconds
A more sophisticated solution is [WebSocket](https://en.wikipedia.org/wiki/WebSocket). WebSocket is different from HTTP, as it allows for two-way (full-duplex) communication over a single TCP connection, where HTTP only allows for one-way communication. In other words, this would allow the backend to tell the frontend when a change was made by sending an event. The example from above could then look like this:
```
(Alice just tabbed to the students page).
12:00:00 Alice - GET /api/testEdition/students (Initialize the page)
12:00:05 Bob - Sends a WebSocket message to the server to update the status of a student
12:00:05 Server - Broadcasts that the student with testStudentId was modified
12:00:05 Alice - Receives the broadcasted message from the server and updates the student
```
Pros:
- Scales well, as there is no wasted traffic
- WebSockets are supported by Spring (see [here](https://spring.io/guides/gs/messaging-stomp-websocket/#initial) for example)
Cons:
- Would require changes to the backend
- Comparatively difficult to implement
- Swagger/OpenAPI does not support Websockets (see [here](https://stackoverflow.com/questions/38186483/describe-websocket-api-via-swagger) or [here](https://stackoverflow.com/questions/33146339/in-swagger-is-it-possible-to-create-apis-for-websockets)), so we'll have to find another way to document this part of the API.
A last solution would be to use [Server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events). The difference between SSE and WebSockets is explained [here](https://stackoverflow.com/questions/5195452/websockets-vs-server-sent-events-eventsource). I did not look further into this, as it seems that SSE's only allow communication from the server to the client, but we could also go for this hacky solution:
>Chat is perfectly doable with SSE – you can use regular POST to send messages to the server.
Personally, I think going for WebSockets/SSE's could be [premature optimization](https://stackify.com/premature-optimization-evil/), and that we'd be better off trying polling first. I'd like to hear your thoughts on this though. | priority | realtime at the moment our application is not real time this issue proposes a couple of possible solutions het doel van dit project is om een selectietool te ontwikkelen waarin osoc medewerkers in real time met elkaar kunnen samenwerken om kandidaten te selecteren en aan teams toe te wijzen to be clear this means that only certain parts of the application need to be real time only the students and projects pages will have to support real time pages like the users page and the editions page don t need this when a osoc user changes something to a project or student this change should be visible after a short delay to all other osoc users on the same page without them having to refresh of course what follows are a couple of options to achieve this the simplest solution seems to be polling polling in our case would simply mean calling the backend periodically to check if any changes were made and updating the view if this is the case for example this could look something like this alice just tabbed to the students page assume the backend is polled every two seconds alice get api testedition students initialize the page alice get api testedition students any changes no the data is discarded alice get api testedition students any changes no the data is discarded bob post api testedition students teststudentid status bob updates the status of a student alice get api testedition students any changes yes update the student with teststudentid alice get api testedition students any changes no the data is discarded pros does not require changing the backend at all comparatively simple to implement cons wastes a lot of resources therefore it does not scale well essentially the equivalent of hitting the refresh button every x seconds a more sophisticated solution is websocket is different from http as it allows for two way full duplex communication over a single tcp connection where http only allows for one way communication in other words this would allow the backend to tell the frontend when a change was made by sending an event the example from above could then look like this alice just tabbed to the students page alice get api testedition students initialize the page bob sends a websocket message to the server to update the status of a student server broadcasts that the student with teststudentid was modified alice receives the broadcasted message from the server and updates the student pros scales well as there is no wasted traffic websockets are supported by spring see for example cons would require changes to the backend comparatively difficult to implement swagger openapi does not support websockets see or so we ll have to find another way to document this part of the api a last solution would be to use the difference between sse and websockets is explained i did not look further into this as it seems that sse s only allow communication from the server to the client but we could also go for this hacky solution chat is perfectly doable with sse – you can use regular post to send messages to the server personally i think going for websockets sse s could be and that we d be better off trying polling first i d like to hear your thoughts on this though | 1 |
166,779 | 6,311,324,846 | IssuesEvent | 2017-07-23 18:29:22 | OperationCode/operationcode_frontend | https://api.github.com/repos/OperationCode/operationcode_frontend | closed | sideNav and topNav Component Should Dynamically render links | Priority: High Status: Available Type: Feature | # Feature
## Why is this feature being added?
If we add more routes to our website, they must be individually added to `home.js`, `sideNav.js`, and `topNav.js`. Do you hear that? Ray Hettinger just pounded his fist and we all automatically said, "There must be a better way!"
## What should your feature do?
If a route is added to `home.js`, the ideal implementation of this general feature means that the routes will also appear as links within both/either of the rendered `<SideNav>` or `<TopNav>` components. | 1.0 | sideNav and topNav Component Should Dynamically render links - # Feature
## Why is this feature being added?
If we add more routes to our website, they must be individually added to `home.js`, `sideNav.js`, and `topNav.js`. Do you hear that? Ray Hettinger just pounded his fist and we all automatically said, "There must be a better way!"
## What should your feature do?
If a route is added to `home.js`, the ideal implementation of this general feature means that the routes will also appear as links within both/either of the rendered `<SideNav>` or `<TopNav>` components. | priority | sidenav and topnav component should dynamically render links feature why is this feature being added if we add more routes to our website they must be individually added to home js sidenav js and topnav js do you hear that ray hettinger just pounded his fist and we all automatically said there must be a better way what should your feature do if a route is added to home js the ideal implementation of this general feature means that the routes will also appear as links within both either of the rendered or components | 1 |
202,391 | 7,047,517,738 | IssuesEvent | 2018-01-02 13:54:56 | MorpheusXAUT/slotlist-frontend | https://api.github.com/repos/MorpheusXAUT/slotlist-frontend | closed | Switching the mission calendar's month no longer triggers a mission list reload | bug priority/high upcoming | When fixing #110, I apparently broke the retrieval of missions for the calendar when the month is switched.
This should obviously still trigger a reload, even if missions have already been loaded.
-----
### Tasks
- [x] Make sure missions are loaded again after switching month | 1.0 | Switching the mission calendar's month no longer triggers a mission list reload - When fixing #110, I apparently broke the retrieval of missions for the calendar when the month is switched.
This should obviously still trigger a reload, even if missions have already been loaded.
-----
### Tasks
- [x] Make sure missions are loaded again after switching month | priority | switching the mission calendar s month no longer triggers a mission list reload when fixing i apparently broke the retrieval of missions for the calendar when the month is switched this should obviously still trigger a reload even if missions have already been loaded tasks make sure missions are loaded again after switching month | 1 |
492,101 | 14,176,969,322 | IssuesEvent | 2020-11-13 00:55:02 | MLH-Fellowship/MLPrep | https://api.github.com/repos/MLH-Fellowship/MLPrep | closed | Image Processing to separate objects | backend high priority | Input: Image of various food items
Output: Separated images of different food objects
We need to separate different objects within the image in case there are multiple food items since the food classifier can only identify one item at a time | 1.0 | Image Processing to separate objects - Input: Image of various food items
Output: Separated images of different food objects
We need to separate different objects within the image in case there are multiple food items since the food classifier can only identify one item at a time | priority | image processing to separate objects input image of various food items output separated images of different food objects we need to separate different objects within the image in case there are multiple food items since the food classifier can only identify one item at a time | 1 |
796,937 | 28,132,832,647 | IssuesEvent | 2023-04-01 03:13:17 | AY2223S2-CS2103T-F12-3/tp | https://api.github.com/repos/AY2223S2-CS2103T-F12-3/tp | opened | Events created with `r/none` update timestamps erroneously | priority.High type.Bug severity.Medium | Steps to reproduce:
1. Create an event with `r/none`
a. `addevent d/Catchup with John s/2023-03-30 1600 e/2023-03-30 1800 r/none`
b. Note how the event is created at the current time instead
2. Use `addevent/editevent/delevent` on any other event
a. Note how the event created in step 1. has its timing updated to the current time
Other observations:
* Using `editevent` on the affected event fixes the erroneous timing
Mentioned in:
* #241
* #242
* #244 | 1.0 | Events created with `r/none` update timestamps erroneously - Steps to reproduce:
1. Create an event with `r/none`
a. `addevent d/Catchup with John s/2023-03-30 1600 e/2023-03-30 1800 r/none`
b. Note how the event is created at the current time instead
2. Use `addevent/editevent/delevent` on any other event
a. Note how the event created in step 1. has its timing updated to the current time
Other observations:
* Using `editevent` on the affected event fixes the erroneous timing
Mentioned in:
* #241
* #242
* #244 | priority | events created with r none update timestamps erroneously steps to reproduce create an event with r none a addevent d catchup with john s e r none b note how the event is created at the current time instead use addevent editevent delevent on any other event a note how the event created in step has its timing updated to the current time other observations using editevent on the affected event fixes the erroneous timing mentioned in | 1 |
829,534 | 31,882,255,981 | IssuesEvent | 2023-09-16 14:15:56 | uli/dragonbasic | https://api.github.com/repos/uli/dragonbasic | closed | Setting a string to be empty causes MF to generate a segmentation fault. | Bug Severity:High Priority:Medium | Issue found using the latest version of Dragon Basic under Linux (Commit ID: d2ce042366068083a5fe3089873a22221fffbc26)
Setting a string to be empty (Ie. "") causes MF to generate a segmentation fault. This functionality is important since users may want to append values to a new string, but cannot do so from an empty one since MF does not allow empty strings to be defined. One example of this kind of use is padding a string with spaces to centre text with equal spaces on ether side of it. This bug is likely related to the issue "Checking the condition of a string to be empty causes MF to generate a segmentation fault." (https://github.com/uli/dragonbasic/issues/4)
Partial workaround: Figure out what the first character of your string is and initialise it with that value instead of (""). This works for tasks like padding strings, but does not help when you want to make a string appear uninitialised, or truly empty with no value. | 1.0 | Setting a string to be empty causes MF to generate a segmentation fault. - Issue found using the latest version of Dragon Basic under Linux (Commit ID: d2ce042366068083a5fe3089873a22221fffbc26)
Setting a string to be empty (Ie. "") causes MF to generate a segmentation fault. This functionality is important since users may want to append values to a new string, but cannot do so from an empty one since MF does not allow empty strings to be defined. One example of this kind of use is padding a string with spaces to centre text with equal spaces on ether side of it. This bug is likely related to the issue "Checking the condition of a string to be empty causes MF to generate a segmentation fault." (https://github.com/uli/dragonbasic/issues/4)
Partial workaround: Figure out what the first character of your string is and initialise it with that value instead of (""). This works for tasks like padding strings, but does not help when you want to make a string appear uninitialised, or truly empty with no value. | priority | setting a string to be empty causes mf to generate a segmentation fault issue found using the latest version of dragon basic under linux commit id setting a string to be empty ie causes mf to generate a segmentation fault this functionality is important since users may want to append values to a new string but cannot do so from an empty one since mf does not allow empty strings to be defined one example of this kind of use is padding a string with spaces to centre text with equal spaces on ether side of it this bug is likely related to the issue checking the condition of a string to be empty causes mf to generate a segmentation fault partial workaround figure out what the first character of your string is and initialise it with that value instead of this works for tasks like padding strings but does not help when you want to make a string appear uninitialised or truly empty with no value | 1 |
228,003 | 7,545,013,581 | IssuesEvent | 2018-04-17 20:15:47 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Crashing on load | High Priority Respond ASAP | From the support queue:
Hi, i recently purchased Eco two days ago from steam, downloaded fine and i'm not the one for multiplayer as i don't have the time so i went to start up a single player game, clicked on new and after about 30 seconds was in the game, played for a few hours then jumped off, the next day when i go to load up the world i get the error message failed to start webserver: exception has been thrown by the target of an invocation
I have posted in the discord chat to see if anyone could help, a couple of helpful suggestions were offered such as running the server.exe as admin ( even though when i first started playing i didn't do that ) another one was to change the ports from 3000 - 3001 to 4000 - 4001 which has still not fixed the issue. I have done clean installs and have tried uninstalling and installing multiple times and am out of ideas.
Version of game is 7.3.2
Operating system is windows 10
Processor - i7 8700k
Graphics card - gtx 1060 6 gb
Ram - 32gb ddr4
Have left the server on for about 25 minutes and tried joining and either got the error message and an infinite load screen or a " connection lost "
Have Verified the integrity, has come back ' all files successfully validated "
I don't have any firewalls enabled or have anti-virus
When i go to generate new, is where the error comes up, it starts off loading different pieces of it then i get a pop up saying " do you want to allow this app to make changes to your device? " " Network command shell "
When i click yes i get the error message and also get the same message when i click no followed by a infinite load screen
Same thing happens when i run the steam eco server
In the server search it wont load or show any servers, but it did when i first started playing before i started getting the error message
[output_log (6).txt](https://github.com/StrangeLoopGames/EcoIssues/files/1901005/output_log.6.txt)
| 1.0 | Crashing on load - From the support queue:
Hi, i recently purchased Eco two days ago from steam, downloaded fine and i'm not the one for multiplayer as i don't have the time so i went to start up a single player game, clicked on new and after about 30 seconds was in the game, played for a few hours then jumped off, the next day when i go to load up the world i get the error message failed to start webserver: exception has been thrown by the target of an invocation
I have posted in the discord chat to see if anyone could help, a couple of helpful suggestions were offered such as running the server.exe as admin ( even though when i first started playing i didn't do that ) another one was to change the ports from 3000 - 3001 to 4000 - 4001 which has still not fixed the issue. I have done clean installs and have tried uninstalling and installing multiple times and am out of ideas.
Version of game is 7.3.2
Operating system is windows 10
Processor - i7 8700k
Graphics card - gtx 1060 6 gb
Ram - 32gb ddr4
Have left the server on for about 25 minutes and tried joining and either got the error message and an infinite load screen or a " connection lost "
Have Verified the integrity, has come back ' all files successfully validated "
I don't have any firewalls enabled or have anti-virus
When i go to generate new, is where the error comes up, it starts off loading different pieces of it then i get a pop up saying " do you want to allow this app to make changes to your device? " " Network command shell "
When i click yes i get the error message and also get the same message when i click no followed by a infinite load screen
Same thing happens when i run the steam eco server
In the server search it wont load or show any servers, but it did when i first started playing before i started getting the error message
[output_log (6).txt](https://github.com/StrangeLoopGames/EcoIssues/files/1901005/output_log.6.txt)
| priority | crashing on load from the support queue hi i recently purchased eco two days ago from steam downloaded fine and i m not the one for multiplayer as i don t have the time so i went to start up a single player game clicked on new and after about seconds was in the game played for a few hours then jumped off the next day when i go to load up the world i get the error message failed to start webserver exception has been thrown by the target of an invocation i have posted in the discord chat to see if anyone could help a couple of helpful suggestions were offered such as running the server exe as admin even though when i first started playing i didn t do that another one was to change the ports from to which has still not fixed the issue i have done clean installs and have tried uninstalling and installing multiple times and am out of ideas version of game is operating system is windows processor graphics card gtx gb ram have left the server on for about minutes and tried joining and either got the error message and an infinite load screen or a connection lost have verified the integrity has come back all files successfully validated i don t have any firewalls enabled or have anti virus when i go to generate new is where the error comes up it starts off loading different pieces of it then i get a pop up saying do you want to allow this app to make changes to your device network command shell when i click yes i get the error message and also get the same message when i click no followed by a infinite load screen same thing happens when i run the steam eco server in the server search it wont load or show any servers but it did when i first started playing before i started getting the error message | 1 |
225,203 | 7,479,576,992 | IssuesEvent | 2018-04-04 14:58:06 | spacetelescope/cubeviz | https://api.github.com/repos/spacetelescope/cubeviz | opened | Activating contour maps breaks ROI click and drag selection | bug high-priority | Steps to reproduce:
1. Create an ROI in any viewer
2. Activate contour map in the same viewer
3. Attempt to click and drag the ROI in the same viewer. It will not work. Clicking and dragging the ROI in any other viewer should continue to work. Deactivating the contour display does not solve the problem.
| 1.0 | Activating contour maps breaks ROI click and drag selection - Steps to reproduce:
1. Create an ROI in any viewer
2. Activate contour map in the same viewer
3. Attempt to click and drag the ROI in the same viewer. It will not work. Clicking and dragging the ROI in any other viewer should continue to work. Deactivating the contour display does not solve the problem.
| priority | activating contour maps breaks roi click and drag selection steps to reproduce create an roi in any viewer activate contour map in the same viewer attempt to click and drag the roi in the same viewer it will not work clicking and dragging the roi in any other viewer should continue to work deactivating the contour display does not solve the problem | 1 |
418,166 | 12,194,307,847 | IssuesEvent | 2020-04-29 15:37:11 | CERT-Polska/mquery | https://api.github.com/repos/CERT-Polska/mquery | closed | Add pagination or lazy-load to results table | level:medium priority:high status:up for grabs zone:frontend | **Description**
When executing a query with many results, the interface will show all the matches in the results table. The table can be with tens of thousands of results for a big dataset, which will make the page slow to response. I suggest implementing pagination (the xhr requests are already fetching 50 at a time) or a [lazy-loading](https://en.wikipedia.org/wiki/Lazy_loading) | 1.0 | Add pagination or lazy-load to results table - **Description**
When executing a query with many results, the interface will show all the matches in the results table. The table can be with tens of thousands of results for a big dataset, which will make the page slow to response. I suggest implementing pagination (the xhr requests are already fetching 50 at a time) or a [lazy-loading](https://en.wikipedia.org/wiki/Lazy_loading) | priority | add pagination or lazy load to results table description when executing a query with many results the interface will show all the matches in the results table the table can be with tens of thousands of results for a big dataset which will make the page slow to response i suggest implementing pagination the xhr requests are already fetching at a time or a | 1 |
444,775 | 12,820,939,532 | IssuesEvent | 2020-07-06 07:04:06 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | opened | Add support for single jurisdiction selection on Jurisdiction Assignment Page | Priority: High | For certain plan intervention types e.g. FI and Dynamic-FI, you are only allowed to have one jurisdiction selected when creating the plan.
This jurisdiction must be a leaf-node in the jurisdiction tree.
Part of: https://github.com/onaio/reveal-frontend/issues/986 | 1.0 | Add support for single jurisdiction selection on Jurisdiction Assignment Page - For certain plan intervention types e.g. FI and Dynamic-FI, you are only allowed to have one jurisdiction selected when creating the plan.
This jurisdiction must be a leaf-node in the jurisdiction tree.
Part of: https://github.com/onaio/reveal-frontend/issues/986 | priority | add support for single jurisdiction selection on jurisdiction assignment page for certain plan intervention types e g fi and dynamic fi you are only allowed to have one jurisdiction selected when creating the plan this jurisdiction must be a leaf node in the jurisdiction tree part of | 1 |
388,227 | 11,484,863,800 | IssuesEvent | 2020-02-11 05:33:26 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | closed | Program daily usage | Docs: not needed Effort: small Module: dispensary Priority: high | ## Describe the bug
Program daily usage should be counted to be +1 day, -3 months
### To reproduce
Dispensary development bug
### Expected behaviour
Dispensary development bug
### Proposed Solution
Dispensary development bug
### Version and device info
Dispensary development bug
### Additional context
Dispensary development bug
| 1.0 | Program daily usage - ## Describe the bug
Program daily usage should be counted to be +1 day, -3 months
### To reproduce
Dispensary development bug
### Expected behaviour
Dispensary development bug
### Proposed Solution
Dispensary development bug
### Version and device info
Dispensary development bug
### Additional context
Dispensary development bug
| priority | program daily usage describe the bug program daily usage should be counted to be day months to reproduce dispensary development bug expected behaviour dispensary development bug proposed solution dispensary development bug version and device info dispensary development bug additional context dispensary development bug | 1 |
758,247 | 26,547,256,495 | IssuesEvent | 2023-01-20 02:08:49 | ksh1vn/DWCR | https://api.github.com/repos/ksh1vn/DWCR | closed | Aiven Jr's phrases after battle with Axel is TOTALLY broken | Flaws (high priority) | First, he says the phrase that is not in the subtitles. Then it turns out that it was divided in the Community Remaster. And then a phrase is generally played that was not used in the original game and accordingly i didn't pitch down this phrase! OMG. I hate myself. BRUH. | 1.0 | Aiven Jr's phrases after battle with Axel is TOTALLY broken - First, he says the phrase that is not in the subtitles. Then it turns out that it was divided in the Community Remaster. And then a phrase is generally played that was not used in the original game and accordingly i didn't pitch down this phrase! OMG. I hate myself. BRUH. | priority | aiven jr s phrases after battle with axel is totally broken first he says the phrase that is not in the subtitles then it turns out that it was divided in the community remaster and then a phrase is generally played that was not used in the original game and accordingly i didn t pitch down this phrase omg i hate myself bruh | 1 |
394,253 | 11,634,162,199 | IssuesEvent | 2020-02-28 09:50:15 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Make email templates available out of the box in the product. | Priority/Highest Type/Task | Shall we ship email templates required for default enabled features OOB
Ex: [emailOTP](
https://is.docs.wso2.com/en/5.9.0/learn/configuring-email-otp/ ), [TOTP](
https://is.docs.wso2.com/en/5.9.0/learn/configuring-totp/ ) , [resend recovery confirmations](https://is.docs.wso2.com/en/5.9.0/learn/resending-account-recovery-confirmation-emails/)
Need to check other places as well. | 1.0 | Make email templates available out of the box in the product. - Shall we ship email templates required for default enabled features OOB
Ex: [emailOTP](
https://is.docs.wso2.com/en/5.9.0/learn/configuring-email-otp/ ), [TOTP](
https://is.docs.wso2.com/en/5.9.0/learn/configuring-totp/ ) , [resend recovery confirmations](https://is.docs.wso2.com/en/5.9.0/learn/resending-account-recovery-confirmation-emails/)
Need to check other places as well. | priority | make email templates available out of the box in the product shall we ship email templates required for default enabled features oob ex need to check other places as well | 1 |
399,791 | 11,760,601,059 | IssuesEvent | 2020-03-13 19:53:47 | Swi005/Chessbot3 | https://api.github.com/repos/Swi005/Chessbot3 | closed | Forandringer i board default constructor | High Priority help wanted question | Er den nåværende metoden best?
Føler at den tidligere metoden var bedre ettersom du slipper den digre switch casen i PieceFactory. | 1.0 | Forandringer i board default constructor - Er den nåværende metoden best?
Føler at den tidligere metoden var bedre ettersom du slipper den digre switch casen i PieceFactory. | priority | forandringer i board default constructor er den nåværende metoden best føler at den tidligere metoden var bedre ettersom du slipper den digre switch casen i piecefactory | 1 |
60,355 | 3,125,600,854 | IssuesEvent | 2015-09-08 01:34:15 | dotsdl/datreant | https://api.github.com/repos/dotsdl/datreant | closed | Remove location keyword for Treant creation; use path instead + new keyword for forcing new | enhancement priority high | Related to [MDSynthesis #29](https://github.com/Becksteinlab/MDSynthesis/issues/29). Currently how the name string is parsed when creating a new Treant is fundamentally different than for when a Treant is being re-generated. We can unify these by treating it as a path at all times, creating a new Treant if one is not found at the given location, loading an existing one if the path contains a unique Treant (only one state file), or forcing the creation of a new one with the given path using a `new=True` keyword. | 1.0 | Remove location keyword for Treant creation; use path instead + new keyword for forcing new - Related to [MDSynthesis #29](https://github.com/Becksteinlab/MDSynthesis/issues/29). Currently how the name string is parsed when creating a new Treant is fundamentally different than for when a Treant is being re-generated. We can unify these by treating it as a path at all times, creating a new Treant if one is not found at the given location, loading an existing one if the path contains a unique Treant (only one state file), or forcing the creation of a new one with the given path using a `new=True` keyword. | priority | remove location keyword for treant creation use path instead new keyword for forcing new related to currently how the name string is parsed when creating a new treant is fundamentally different than for when a treant is being re generated we can unify these by treating it as a path at all times creating a new treant if one is not found at the given location loading an existing one if the path contains a unique treant only one state file or forcing the creation of a new one with the given path using a new true keyword | 1 |
544,924 | 15,931,595,236 | IssuesEvent | 2021-04-14 03:45:58 | windchime-yk/blog | https://api.github.com/repos/windchime-yk/blog | opened | Impreved article updated date | Priority: High Type: Feature | Currently, the update date and time of the article is updated manually.
I want to rewrite this to use GitHub's update history. | 1.0 | Impreved article updated date - Currently, the update date and time of the article is updated manually.
I want to rewrite this to use GitHub's update history. | priority | impreved article updated date currently the update date and time of the article is updated manually i want to rewrite this to use github s update history | 1 |
342,157 | 10,312,772,375 | IssuesEvent | 2019-08-29 20:43:27 | CCAFS/MARLO | https://api.github.com/repos/CCAFS/MARLO | closed | [Importing Process] - Quality Ansure for CCAFS publications | Priority - High Type -Task | - [x] Get the list of deliverables IDs with missing or wrong information by MA
- [x] Update the deliverable information in MARLO
- [x] Validate again if the information by MA
| 1.0 | [Importing Process] - Quality Ansure for CCAFS publications - - [x] Get the list of deliverables IDs with missing or wrong information by MA
- [x] Update the deliverable information in MARLO
- [x] Validate again if the information by MA
| priority | quality ansure for ccafs publications get the list of deliverables ids with missing or wrong information by ma update the deliverable information in marlo validate again if the information by ma | 1 |
23,294 | 2,657,958,424 | IssuesEvent | 2015-03-18 13:00:26 | cs2103jan2015-t09-4j/main | https://api.github.com/repos/cs2103jan2015-t09-4j/main | opened | work and improve the feasibility of Parser class for String Parsing | priority.high Utility | To use String Array and string.split() instead of tokenizer
Allow the reuse of the commandString for future possible implementations | 1.0 | work and improve the feasibility of Parser class for String Parsing - To use String Array and string.split() instead of tokenizer
Allow the reuse of the commandString for future possible implementations | priority | work and improve the feasibility of parser class for string parsing to use string array and string split instead of tokenizer allow the reuse of the commandstring for future possible implementations | 1 |
311,552 | 9,535,182,305 | IssuesEvent | 2019-04-30 05:46:31 | mesg-foundation/core | https://api.github.com/repos/mesg-foundation/core | closed | Unable to deploy a local directory | bug high priority | The cli cannot deploy a service from a local directory.
`mesg-core service deploy` with no arguments inside the directory but doesn't from outside.
```
➜ ~ mesg-core service deploy ~/prog/MESG/tests/testservice
unable to evaluate symlinks in Dockerfile path: lstat /Users/antho/Dockerfile: no such file or directory
➜ ~ ls ~/prog/MESG/tests/testservice
Dockerfile mesg.yml package.json src tsconfig.json
```
The error is pointing to the current directory and should be the one in argument
| 1.0 | Unable to deploy a local directory - The cli cannot deploy a service from a local directory.
`mesg-core service deploy` with no arguments inside the directory but doesn't from outside.
```
➜ ~ mesg-core service deploy ~/prog/MESG/tests/testservice
unable to evaluate symlinks in Dockerfile path: lstat /Users/antho/Dockerfile: no such file or directory
➜ ~ ls ~/prog/MESG/tests/testservice
Dockerfile mesg.yml package.json src tsconfig.json
```
The error is pointing to the current directory and should be the one in argument
| priority | unable to deploy a local directory the cli cannot deploy a service from a local directory mesg core service deploy with no arguments inside the directory but doesn t from outside ➜ mesg core service deploy prog mesg tests testservice unable to evaluate symlinks in dockerfile path lstat users antho dockerfile no such file or directory ➜ ls prog mesg tests testservice dockerfile mesg yml package json src tsconfig json the error is pointing to the current directory and should be the one in argument | 1 |
243,391 | 7,857,307,681 | IssuesEvent | 2018-06-21 10:20:04 | opentargets/webapp | https://api.github.com/repos/opentargets/webapp | closed | Update Platform to bring in line with new privacy rules | Kind: Maintenance Priority: High Status: Fixed | To bring the Platform in line with the new EMBL-EBI privacy rules, can we please make the following changes?
1) Update the cookie banner to be in line with the EMBL-EBI cookie banner text and add the relevant links (to open in a new window).
> This website requires cookies and the limited processing of your personal data in order to function. By using the site you are agreeing to this as outlined in our [Privacy Notice](https://www.ebi.ac.uk/data-protection/privacy-notice/open-targets) and [Terms of Use](https://www.targetvalidation.org/terms-of-use).
2) On the [Terms of Use page](https://www.targetvalidation.org/terms-of-use), remove the text under the `Privacy` section and replace with the following text and link (to open in a new window).
> Please review our updated [Privacy Notice](https://www.ebi.ac.uk/data-protection/privacy-notice/open-targets].
3) Remove the [Examples of Personal Data Collected by Open Targets page](https://www.targetvalidation.org/personal-data-collected-examples) as that information is now contained in the new privacy notice.
4) In the footer, put a Privacy Notice link and have it link out to https://www.ebi.ac.uk/data-protection/privacy-notice/open-targets (link to open in a new window) | 1.0 | Update Platform to bring in line with new privacy rules - To bring the Platform in line with the new EMBL-EBI privacy rules, can we please make the following changes?
1) Update the cookie banner to be in line with the EMBL-EBI cookie banner text and add the relevant links (to open in a new window).
> This website requires cookies and the limited processing of your personal data in order to function. By using the site you are agreeing to this as outlined in our [Privacy Notice](https://www.ebi.ac.uk/data-protection/privacy-notice/open-targets) and [Terms of Use](https://www.targetvalidation.org/terms-of-use).
2) On the [Terms of Use page](https://www.targetvalidation.org/terms-of-use), remove the text under the `Privacy` section and replace with the following text and link (to open in a new window).
> Please review our updated [Privacy Notice](https://www.ebi.ac.uk/data-protection/privacy-notice/open-targets].
3) Remove the [Examples of Personal Data Collected by Open Targets page](https://www.targetvalidation.org/personal-data-collected-examples) as that information is now contained in the new privacy notice.
4) In the footer, put a Privacy Notice link and have it link out to https://www.ebi.ac.uk/data-protection/privacy-notice/open-targets (link to open in a new window) | priority | update platform to bring in line with new privacy rules to bring the platform in line with the new embl ebi privacy rules can we please make the following changes update the cookie banner to be in line with the embl ebi cookie banner text and add the relevant links to open in a new window this website requires cookies and the limited processing of your personal data in order to function by using the site you are agreeing to this as outlined in our and on the remove the text under the privacy section and replace with the following text and link to open in a new window please review our updated remove the as that information is now contained in the new privacy notice in the footer put a privacy notice link and have it link out to link to open in a new window | 1 |
521,833 | 15,117,163,882 | IssuesEvent | 2021-02-09 08:02:59 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.2.0 beta staging-1902]Arrows not hitting targets as expected | Category: Gameplay Priority: High Regression Squad: Otter Status: Fixed | Arrows not hitting targets even though they are clearly impacting.
https://i.gyazo.com/5fbeefe01a4d7b7c0830f782aa3b05cc.mp4
This only occurs when you select a hotbar slot and drag your bow into it.
1. Select a hotbar slot
2. Drag or spawn a wooden/recurve/composite bow into active slot
3. Now you cannot hit any animals. | 1.0 | [0.9.2.0 beta staging-1902]Arrows not hitting targets as expected - Arrows not hitting targets even though they are clearly impacting.
https://i.gyazo.com/5fbeefe01a4d7b7c0830f782aa3b05cc.mp4
This only occurs when you select a hotbar slot and drag your bow into it.
1. Select a hotbar slot
2. Drag or spawn a wooden/recurve/composite bow into active slot
3. Now you cannot hit any animals. | priority | arrows not hitting targets as expected arrows not hitting targets even though they are clearly impacting this only occurs when you select a hotbar slot and drag your bow into it select a hotbar slot drag or spawn a wooden recurve composite bow into active slot now you cannot hit any animals | 1 |
334,968 | 10,147,439,553 | IssuesEvent | 2019-08-05 10:34:33 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Single design 2 in Swift broken in one specific use case | NEED FAST REVIEW [Priority: HIGH] bug | Single design 2 in Swift broken in one specific use case
https://take.ms/oUN3Z | 1.0 | Single design 2 in Swift broken in one specific use case - Single design 2 in Swift broken in one specific use case
https://take.ms/oUN3Z | priority | single design in swift broken in one specific use case single design in swift broken in one specific use case | 1 |
265,029 | 8,335,888,497 | IssuesEvent | 2018-09-28 05:15:43 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Ballerina service fails when request contains Expect header with 100 continue value | Component/HTTP Component/stdlib Priority/High Severity/Critical Type/Bug | **Description:**
Change the example [1], to call to mock back end service and proxy the response via ballerina service. then the request will fail with error [2]
[1] - https://ballerina.io/learn/by-example/http-100-continue.html
[2] - error: ballerina/runtime:CallFailedException, message: call failed
at xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
caused by error, message: illegal function invocation
at ballerina/http:respond(http_connection.bal:49)
**Steps to reproduce:**
**Affected Versions:**
0.98
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| 1.0 | Ballerina service fails when request contains Expect header with 100 continue value - **Description:**
Change the example [1], to call to mock back end service and proxy the response via ballerina service. then the request will fail with error [2]
[1] - https://ballerina.io/learn/by-example/http-100-continue.html
[2] - error: ballerina/runtime:CallFailedException, message: call failed
at xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
caused by error, message: illegal function invocation
at ballerina/http:respond(http_connection.bal:49)
**Steps to reproduce:**
**Affected Versions:**
0.98
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| priority | ballerina service fails when request contains expect header with continue value description change the example to call to mock back end service and proxy the response via ballerina service then the request will fail with error error ballerina runtime callfailedexception message call failed at xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx caused by error message illegal function invocation at ballerina http respond http connection bal steps to reproduce affected versions os db other environment details and versions related issues optional suggested labels optional suggested assignees optional | 1 |
622,266 | 19,619,316,455 | IssuesEvent | 2022-01-07 02:53:25 | bterone/sleek | https://api.github.com/repos/bterone/sleek | closed | Add Element transitions | feature high priority | ## Feature request
Primarily to be used with AOS,
Add element transitions (Fade up, down, etc.) that can be used in conjunction with AOS or by itself | 1.0 | Add Element transitions - ## Feature request
Primarily to be used with AOS,
Add element transitions (Fade up, down, etc.) that can be used in conjunction with AOS or by itself | priority | add element transitions feature request primarily to be used with aos add element transitions fade up down etc that can be used in conjunction with aos or by itself | 1 |
598,683 | 18,250,172,103 | IssuesEvent | 2021-10-02 04:09:44 | AY2122S1-CS2103T-W16-2/tp | https://api.github.com/repos/AY2122S1-CS2103T-W16-2/tp | closed | Add membership fields for clients | type.Story priority.High | - [ ] Add new fields to model and any validation required
- [ ] Update related commands - `add client`, `delete client`
| 1.0 | Add membership fields for clients - - [ ] Add new fields to model and any validation required
- [ ] Update related commands - `add client`, `delete client`
| priority | add membership fields for clients add new fields to model and any validation required update related commands add client delete client | 1 |
662,829 | 22,154,268,416 | IssuesEvent | 2022-06-03 20:27:03 | bcgov/foi-flow | https://api.github.com/repos/bcgov/foi-flow | closed | Axis Id mandatory field validation missing for Unopened Requests | bug high priority | **Describe the bug in current situation**
The Axis Id mandatory field validation is missing for unopened requests and so even without entering Axis Id , the request could be saved.
**Link bug to the User Story**
**Impact of this bug**
Describe the impact, i.e. what the impact is, and number of users impacted.
**Chance of Occurring (high/medium/low/very low)**
**Pre Conditions: which Env, any pre-requesites or assumptions to execute steps?**
**Steps to Reproduce**
Steps to reproduce the behavior:
1. Login as IAO user.
2. Click on an unopened request
3. Scroll down and fill all the mandatory fields except the AXIS Id field.
4. See error
**Actual/ observed behaviour/ results**
The save button will be enabled and on clicking save button the request will be saved.
**Expected behaviour**
Save button should be disabled until a valid Axis Id is entered.
**Screenshots/ Visual Reference/ Source**
If applicable, add screenshots to help explain your problem. You an use screengrab.
| 1.0 | Axis Id mandatory field validation missing for Unopened Requests - **Describe the bug in current situation**
The Axis Id mandatory field validation is missing for unopened requests and so even without entering Axis Id , the request could be saved.
**Link bug to the User Story**
**Impact of this bug**
Describe the impact, i.e. what the impact is, and number of users impacted.
**Chance of Occurring (high/medium/low/very low)**
**Pre Conditions: which Env, any pre-requesites or assumptions to execute steps?**
**Steps to Reproduce**
Steps to reproduce the behavior:
1. Login as IAO user.
2. Click on an unopened request
3. Scroll down and fill all the mandatory fields except the AXIS Id field.
4. See error
**Actual/ observed behaviour/ results**
The save button will be enabled and on clicking save button the request will be saved.
**Expected behaviour**
Save button should be disabled until a valid Axis Id is entered.
**Screenshots/ Visual Reference/ Source**
If applicable, add screenshots to help explain your problem. You an use screengrab.
| priority | axis id mandatory field validation missing for unopened requests describe the bug in current situation the axis id mandatory field validation is missing for unopened requests and so even without entering axis id the request could be saved link bug to the user story impact of this bug describe the impact i e what the impact is and number of users impacted chance of occurring high medium low very low pre conditions which env any pre requesites or assumptions to execute steps steps to reproduce steps to reproduce the behavior login as iao user click on an unopened request scroll down and fill all the mandatory fields except the axis id field see error actual observed behaviour results the save button will be enabled and on clicking save button the request will be saved expected behaviour save button should be disabled until a valid axis id is entered screenshots visual reference source if applicable add screenshots to help explain your problem you an use screengrab | 1 |
304,247 | 9,329,362,275 | IssuesEvent | 2019-03-28 02:02:59 | milleniumbug/DidacticalEnigma | https://api.github.com/repos/milleniumbug/DidacticalEnigma | closed | Optimize resource usage | high-priority | Currently on startup, the program creates dictionary lookup files from the JMdict and JNedict files, which take ~1GB of disk space. This could be significantly reduced because there's a lot of redudant information stored there. Also, during creation of these files the application can take up to 2GB RAM. The creation of these files can take up to several minutes even on highly performant machines, and it could be even slower otherwise.
These all need to be fixed. | 1.0 | Optimize resource usage - Currently on startup, the program creates dictionary lookup files from the JMdict and JNedict files, which take ~1GB of disk space. This could be significantly reduced because there's a lot of redudant information stored there. Also, during creation of these files the application can take up to 2GB RAM. The creation of these files can take up to several minutes even on highly performant machines, and it could be even slower otherwise.
These all need to be fixed. | priority | optimize resource usage currently on startup the program creates dictionary lookup files from the jmdict and jnedict files which take of disk space this could be significantly reduced because there s a lot of redudant information stored there also during creation of these files the application can take up to ram the creation of these files can take up to several minutes even on highly performant machines and it could be even slower otherwise these all need to be fixed | 1 |
506,172 | 14,659,674,940 | IssuesEvent | 2020-12-28 21:09:29 | newrelic/newrelic-client-go | https://api.github.com/repos/newrelic/newrelic-client-go | closed | Dashboards API: Delete via GraphQL | enhancement priority:high size:M | ### Feature Description
Delete a dashboard via the GraphQL API | 1.0 | Dashboards API: Delete via GraphQL - ### Feature Description
Delete a dashboard via the GraphQL API | priority | dashboards api delete via graphql feature description delete a dashboard via the graphql api | 1 |
248,484 | 7,931,754,539 | IssuesEvent | 2018-07-07 04:43:06 | Unibeautify/unibeautify-cli | https://api.github.com/repos/Unibeautify/unibeautify-cli | closed | requires a peer of unibeautify@>= x.x.x but none is installed | bug high-priority | Unfortunately, I still haven't managed to actually format a file using the cli :-/ It doesn't manage to find the formatters I installed, or I didn't manage to install them properly.
```
$ sudo npm install --global unibeautify-cli
/usr/local/bin/unibeautify -> /usr/local/lib/node_modules/unibeautify-cli/dist/cli.js
+ unibeautify-cli@0.2.1
added 3 packages from 3 contributors and updated 2 packages in 5.195s
```
```
$ sudo npm install --global @unibeautify/beautifier-prettydiff
npm WARN @unibeautify/beautifier-prettydiff@0.5.3 requires a peer of unibeautify@>= 0.9.1 but none is installed. You must install peer dependencies yourself.
npm WARN @unibeautify/beautifier-prettydiff@0.5.3 requires a peer of prettydiff2@^2.2.7 but none is installed. You must install peer dependencies yourself.
+ @unibeautify/beautifier-prettydiff@0.5.3
updated 1 package in 3.976s
```
```
$ sudo npm install --global @unibeautify/beautifier-clang-format
npm WARN @unibeautify/beautifier-clang-format@0.2.0 requires a peer of unibeautify@>= 0.15.0 but none is installed. You must install peer dependencies yourself.
+ @unibeautify/beautifier-clang-format@0.2.0
updated 1 package in 1.48s
```
```
$ echo | unibeautify --language JavaScript
(node:18459) UnhandledPromiseRejectionWarning: Error: Beautifiers not found for Language: JavaScript
at Unibeautify.beautify (/usr/local/lib/node_modules/unibeautify-cli/node_modules/unibeautify/dist/src/beautifier.js:69:35)
at Socket.process.stdin.on (/usr/local/lib/node_modules/unibeautify-cli/dist/cli.js:43:25)
at Socket.emit (events.js:182:13)
at addChunk (_stream_readable.js:283:12)
at readableAddChunk (_stream_readable.js:264:11)
at Socket.Readable.push (_stream_readable.js:219:10)
at Pipe.onread (net.js:635:20)
(node:18459) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:18459) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
```
$ echo | unibeautify --language C
(node:18541) UnhandledPromiseRejectionWarning: Error: Beautifiers not found for Language: C
at Unibeautify.beautify (/usr/local/lib/node_modules/unibeautify-cli/node_modules/unibeautify/dist/src/beautifier.js:69:35)
at Socket.process.stdin.on (/usr/local/lib/node_modules/unibeautify-cli/dist/cli.js:43:25)
at Socket.emit (events.js:182:13)
at addChunk (_stream_readable.js:283:12)
at readableAddChunk (_stream_readable.js:264:11)
at Socket.Readable.push (_stream_readable.js:219:10)
at Pipe.onread (net.js:635:20)
(node:18541) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:18541) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
Running on MacOS.
Is there something special about those `@` packages? | 1.0 | requires a peer of unibeautify@>= x.x.x but none is installed - Unfortunately, I still haven't managed to actually format a file using the cli :-/ It doesn't manage to find the formatters I installed, or I didn't manage to install them properly.
```
$ sudo npm install --global unibeautify-cli
/usr/local/bin/unibeautify -> /usr/local/lib/node_modules/unibeautify-cli/dist/cli.js
+ unibeautify-cli@0.2.1
added 3 packages from 3 contributors and updated 2 packages in 5.195s
```
```
$ sudo npm install --global @unibeautify/beautifier-prettydiff
npm WARN @unibeautify/beautifier-prettydiff@0.5.3 requires a peer of unibeautify@>= 0.9.1 but none is installed. You must install peer dependencies yourself.
npm WARN @unibeautify/beautifier-prettydiff@0.5.3 requires a peer of prettydiff2@^2.2.7 but none is installed. You must install peer dependencies yourself.
+ @unibeautify/beautifier-prettydiff@0.5.3
updated 1 package in 3.976s
```
```
$ sudo npm install --global @unibeautify/beautifier-clang-format
npm WARN @unibeautify/beautifier-clang-format@0.2.0 requires a peer of unibeautify@>= 0.15.0 but none is installed. You must install peer dependencies yourself.
+ @unibeautify/beautifier-clang-format@0.2.0
updated 1 package in 1.48s
```
```
$ echo | unibeautify --language JavaScript
(node:18459) UnhandledPromiseRejectionWarning: Error: Beautifiers not found for Language: JavaScript
at Unibeautify.beautify (/usr/local/lib/node_modules/unibeautify-cli/node_modules/unibeautify/dist/src/beautifier.js:69:35)
at Socket.process.stdin.on (/usr/local/lib/node_modules/unibeautify-cli/dist/cli.js:43:25)
at Socket.emit (events.js:182:13)
at addChunk (_stream_readable.js:283:12)
at readableAddChunk (_stream_readable.js:264:11)
at Socket.Readable.push (_stream_readable.js:219:10)
at Pipe.onread (net.js:635:20)
(node:18459) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:18459) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
```
$ echo | unibeautify --language C
(node:18541) UnhandledPromiseRejectionWarning: Error: Beautifiers not found for Language: C
at Unibeautify.beautify (/usr/local/lib/node_modules/unibeautify-cli/node_modules/unibeautify/dist/src/beautifier.js:69:35)
at Socket.process.stdin.on (/usr/local/lib/node_modules/unibeautify-cli/dist/cli.js:43:25)
at Socket.emit (events.js:182:13)
at addChunk (_stream_readable.js:283:12)
at readableAddChunk (_stream_readable.js:264:11)
at Socket.Readable.push (_stream_readable.js:219:10)
at Pipe.onread (net.js:635:20)
(node:18541) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:18541) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
Running on MacOS.
Is there something special about those `@` packages? | priority | requires a peer of unibeautify x x x but none is installed unfortunately i still haven t managed to actually format a file using the cli it doesn t manage to find the formatters i installed or i didn t manage to install them properly sudo npm install global unibeautify cli usr local bin unibeautify usr local lib node modules unibeautify cli dist cli js unibeautify cli added packages from contributors and updated packages in sudo npm install global unibeautify beautifier prettydiff npm warn unibeautify beautifier prettydiff requires a peer of unibeautify but none is installed you must install peer dependencies yourself npm warn unibeautify beautifier prettydiff requires a peer of but none is installed you must install peer dependencies yourself unibeautify beautifier prettydiff updated package in sudo npm install global unibeautify beautifier clang format npm warn unibeautify beautifier clang format requires a peer of unibeautify but none is installed you must install peer dependencies yourself unibeautify beautifier clang format updated package in echo unibeautify language javascript node unhandledpromiserejectionwarning error beautifiers not found for language javascript at unibeautify beautify usr local lib node modules unibeautify cli node modules unibeautify dist src beautifier js at socket process stdin on usr local lib node modules unibeautify cli dist cli js at socket emit events js at addchunk stream readable js at readableaddchunk stream readable js at socket readable push stream readable js at pipe onread net js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id node deprecationwarning unhandled promise rejections are deprecated in the future promise rejections that are not handled will terminate the node js process with a non zero exit code echo unibeautify language c node unhandledpromiserejectionwarning error beautifiers not found for language c at unibeautify beautify usr local lib node modules unibeautify cli node modules unibeautify dist src beautifier js at socket process stdin on usr local lib node modules unibeautify cli dist cli js at socket emit events js at addchunk stream readable js at readableaddchunk stream readable js at socket readable push stream readable js at pipe onread net js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id node deprecationwarning unhandled promise rejections are deprecated in the future promise rejections that are not handled will terminate the node js process with a non zero exit code running on macos is there something special about those packages | 1 |
791,872 | 27,880,757,846 | IssuesEvent | 2023-03-21 19:11:52 | openmsupply/open-msupply | https://api.github.com/repos/openmsupply/open-msupply | closed | Pull integration for permission table | programs Priority: High | - [ ] Move permissions from `service/src/sync/init_programs_data.rs` to the central server (once server supports it). Inject permissions via script.
- [x] pull permissions table
There are two sources for permissions:
1) Permissions received during first login of a user
2) Dynamic permissions table rows received through the new pull
Currently, during first user login the user permission table is fully cleared for the user (in `service/src/user_account.rs` `permission_repo.delete_by_user_id(&user.id)?;`)
- [x] make sure to only delete permission entries without `context`, i.e. only entries that come straight from the central server during login. | 1.0 | Pull integration for permission table - - [ ] Move permissions from `service/src/sync/init_programs_data.rs` to the central server (once server supports it). Inject permissions via script.
- [x] pull permissions table
There are two sources for permissions:
1) Permissions received during first login of a user
2) Dynamic permissions table rows received through the new pull
Currently, during first user login the user permission table is fully cleared for the user (in `service/src/user_account.rs` `permission_repo.delete_by_user_id(&user.id)?;`)
- [x] make sure to only delete permission entries without `context`, i.e. only entries that come straight from the central server during login. | priority | pull integration for permission table move permissions from service src sync init programs data rs to the central server once server supports it inject permissions via script pull permissions table there are two sources for permissions permissions received during first login of a user dynamic permissions table rows received through the new pull currently during first user login the user permission table is fully cleared for the user in service src user account rs permission repo delete by user id user id make sure to only delete permission entries without context i e only entries that come straight from the central server during login | 1 |
350,094 | 10,478,441,994 | IssuesEvent | 2019-09-24 00:03:25 | BCcampus/edehr | https://api.github.com/repos/BCcampus/edehr | closed | MAR page not working | Priority - High ~Bug | After adding a medication record it is expected that the MAR page would allow the user to enter MAR records. But the MAR page doesn't do this. Perhaps, like last time, the fragility of the way MARs are setup in code based on a fieldset in the Inputs sheet is causing a problem
| 1.0 | MAR page not working - After adding a medication record it is expected that the MAR page would allow the user to enter MAR records. But the MAR page doesn't do this. Perhaps, like last time, the fragility of the way MARs are setup in code based on a fieldset in the Inputs sheet is causing a problem
| priority | mar page not working after adding a medication record it is expected that the mar page would allow the user to enter mar records but the mar page doesn t do this perhaps like last time the fragility of the way mars are setup in code based on a fieldset in the inputs sheet is causing a problem | 1 |
189,086 | 6,793,860,223 | IssuesEvent | 2017-11-01 09:36:43 | metasfresh/metasfresh-webui-frontend | https://api.github.com/repos/metasfresh/metasfresh-webui-frontend | closed | Wrong process call | branch:master priority:high type:bug | ### Is this a bug or feature request?
Bug
### Which are the steps to reproduce?
* open a bpartner, e.g. https://w101.metasfresh.com:8443/window/123/2156425
* from document references jump to handling units window (btw, u can reproduce it by jumping to any other window)
* in handling units window, just select a row and call first action (does not matter which one, just call one)



### What is the expected or desired behavior?
Don't provide the selectedTab because there is no selectedTab here. I think that one somehow remained in memory from previous window?!?
More, there is no point to provide selectedTab if not providing windowId and documentId. | 1.0 | Wrong process call - ### Is this a bug or feature request?
Bug
### Which are the steps to reproduce?
* open a bpartner, e.g. https://w101.metasfresh.com:8443/window/123/2156425
* from document references jump to handling units window (btw, u can reproduce it by jumping to any other window)
* in handling units window, just select a row and call first action (does not matter which one, just call one)



### What is the expected or desired behavior?
Don't provide the selectedTab because there is no selectedTab here. I think that one somehow remained in memory from previous window?!?
More, there is no point to provide selectedTab if not providing windowId and documentId. | priority | wrong process call is this a bug or feature request bug which are the steps to reproduce open a bpartner e g from document references jump to handling units window btw u can reproduce it by jumping to any other window in handling units window just select a row and call first action does not matter which one just call one what is the expected or desired behavior don t provide the selectedtab because there is no selectedtab here i think that one somehow remained in memory from previous window more there is no point to provide selectedtab if not providing windowid and documentid | 1 |
360,207 | 10,685,536,745 | IssuesEvent | 2019-10-22 12:53:06 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [8.3] Servercrash from Legislation | High Priority | We think Player didnt had enoug Money for the Craftingtax
Exception
Exception: InvalidOperationException
Message:Sequence contains no elements
Source:System.Core
System.InvalidOperationException: Sequence contains no elements
at System.Linq.Enumerable.First[TSource](IEnumerable`1 source)
at Eco.Shared.Utils.StringExtensions.CommaList(IEnumerable`1 phrases, String noneText)
at Eco.Gameplay.LegislationSystem.LawLogic.LawLogicRoot.IsAllowed(IPlayerAction action)
at Eco.Gameplay.Legislation.CreateAtomicAction(IPlayerAction action)
at System.Linq.Enumerable.WhereSelectArrayIterator`2.MoveNext()
at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)
at Eco.Gameplay.Stats.PlayerActionManager`1.CreateAtomicAction(T action)
at Eco.Gameplay.Items.WorkOrder.CreateCraftOneAction()
at Eco.Gameplay.Items.WorkOrder.get_AvailableWork()
at Eco.Gameplay.Components.CraftingComponent.ProcessWorkOrders(Single dtime)
at Eco.Gameplay.Components.CraftingComponent.Tick()
at Eco.Shared.Utils.ListExtensions.ForEach[T](IList`1 list, Action`1 action)
at Eco.Gameplay.Objects.WorldObject.Tick()
at Eco.Shared.Utils.EnumerableExtensions.ForEach[T](IEnumerable`1 enumeration, Action`1 action)
at Eco.Gameplay.Objects.WorldObjectManager.Run()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
--END DUMP-- | 1.0 | [8.3] Servercrash from Legislation - We think Player didnt had enoug Money for the Craftingtax
Exception
Exception: InvalidOperationException
Message:Sequence contains no elements
Source:System.Core
System.InvalidOperationException: Sequence contains no elements
at System.Linq.Enumerable.First[TSource](IEnumerable`1 source)
at Eco.Shared.Utils.StringExtensions.CommaList(IEnumerable`1 phrases, String noneText)
at Eco.Gameplay.LegislationSystem.LawLogic.LawLogicRoot.IsAllowed(IPlayerAction action)
at Eco.Gameplay.Legislation.CreateAtomicAction(IPlayerAction action)
at System.Linq.Enumerable.WhereSelectArrayIterator`2.MoveNext()
at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)
at Eco.Gameplay.Stats.PlayerActionManager`1.CreateAtomicAction(T action)
at Eco.Gameplay.Items.WorkOrder.CreateCraftOneAction()
at Eco.Gameplay.Items.WorkOrder.get_AvailableWork()
at Eco.Gameplay.Components.CraftingComponent.ProcessWorkOrders(Single dtime)
at Eco.Gameplay.Components.CraftingComponent.Tick()
at Eco.Shared.Utils.ListExtensions.ForEach[T](IList`1 list, Action`1 action)
at Eco.Gameplay.Objects.WorldObject.Tick()
at Eco.Shared.Utils.EnumerableExtensions.ForEach[T](IEnumerable`1 enumeration, Action`1 action)
at Eco.Gameplay.Objects.WorldObjectManager.Run()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
--END DUMP-- | priority | servercrash from legislation we think player didnt had enoug money for the craftingtax exception exception invalidoperationexception message sequence contains no elements source system core system invalidoperationexception sequence contains no elements at system linq enumerable first ienumerable source at eco shared utils stringextensions commalist ienumerable phrases string nonetext at eco gameplay legislationsystem lawlogic lawlogicroot isallowed iplayeraction action at eco gameplay legislation createatomicaction iplayeraction action at system linq enumerable whereselectarrayiterator movenext at system collections generic list insertrange index ienumerable collection at eco gameplay stats playeractionmanager createatomicaction t action at eco gameplay items workorder createcraftoneaction at eco gameplay items workorder get availablework at eco gameplay components craftingcomponent processworkorders single dtime at eco gameplay components craftingcomponent tick at eco shared utils listextensions foreach ilist list action action at eco gameplay objects worldobject tick at eco shared utils enumerableextensions foreach ienumerable enumeration action action at eco gameplay objects worldobjectmanager run at system threading executioncontext runinternal executioncontext executioncontext contextcallback callback object state boolean preservesyncctx at system threading executioncontext run executioncontext executioncontext contextcallback callback object state boolean preservesyncctx at system threading executioncontext run executioncontext executioncontext contextcallback callback object state at system threading threadhelper threadstart end dump | 1 |
654,249 | 21,645,158,476 | IssuesEvent | 2022-05-06 00:18:45 | CityOfDetroit/bloom | https://api.github.com/repos/CityOfDetroit/bloom | reopened | Detroit - Privacy Policy | high priority | **Notes**
* Remove all current content from the Privacy Policy Page and replace with text in yellow in row 7
* https://docs.google.com/spreadsheets/d/1KmIs93OKQIDsKyay90MLWmItLVBCyAytJJ8s5p-_Zm8/edit#gid=1955530074 | 1.0 | Detroit - Privacy Policy - **Notes**
* Remove all current content from the Privacy Policy Page and replace with text in yellow in row 7
* https://docs.google.com/spreadsheets/d/1KmIs93OKQIDsKyay90MLWmItLVBCyAytJJ8s5p-_Zm8/edit#gid=1955530074 | priority | detroit privacy policy notes remove all current content from the privacy policy page and replace with text in yellow in row | 1 |
736,069 | 25,456,613,217 | IssuesEvent | 2022-11-24 14:41:18 | owncloud/ocis | https://api.github.com/repos/owncloud/ocis | closed | Tracing needed in search service | Category:Enhancement Priority:p2-high | ## Is your feature request related to a problem? Please describe
The search servie doesn't send traces
## Describe the solution you'd like
Like other ocis services, add instrumentation
## Describe alternatives you've considered
none
## Additional context
Needed for better debuggability | 1.0 | Tracing needed in search service - ## Is your feature request related to a problem? Please describe
The search servie doesn't send traces
## Describe the solution you'd like
Like other ocis services, add instrumentation
## Describe alternatives you've considered
none
## Additional context
Needed for better debuggability | priority | tracing needed in search service is your feature request related to a problem please describe the search servie doesn t send traces describe the solution you d like like other ocis services add instrumentation describe alternatives you ve considered none additional context needed for better debuggability | 1 |
720,573 | 24,797,438,915 | IssuesEvent | 2022-10-24 18:32:56 | vertica/spark-connector | https://api.github.com/repos/vertica/spark-connector | closed | Support table truncate when writing in overwrite mode | enhancement High Priority | ## Is your feature request related to a problem? Please describe.
We have a table where all the rows need to be re-written after a Spark batch job. The table has existing permissions that should be preserved. However, the table permissions are lost when the rows are written in *overwrite* mode:
```python
(sdf.write.format('com.vertica.spark.datasource.VerticaSource')
.mode('overwrite').options(**options).save())
```
## Describe the solution you'd like
In *overwrite* mode, there should be an option to truncate the target table instead of dropping it before re-write.
The standard Spark JDBC connector allows one to set `truncate` option to solve this ([link](https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html)).
## Describe alternatives you've considered
Workarounds exist, but they are not very practical:
- Execute truncate statement through JDBC using some other library and write the rows using the connector in *append* mode
- Pass the privileges through `target_table_sql` parameter
- Using some external script/procedure to re-set the privileges
## Additional context
What the solution could look like after implementation:
```python
options['truncate'] = True
(sdf.write.format('com.vertica.spark.datasource.VerticaSource')
.mode('overwrite').options(**options).save())
```
| 1.0 | Support table truncate when writing in overwrite mode - ## Is your feature request related to a problem? Please describe.
We have a table where all the rows need to be re-written after a Spark batch job. The table has existing permissions that should be preserved. However, the table permissions are lost when the rows are written in *overwrite* mode:
```python
(sdf.write.format('com.vertica.spark.datasource.VerticaSource')
.mode('overwrite').options(**options).save())
```
## Describe the solution you'd like
In *overwrite* mode, there should be an option to truncate the target table instead of dropping it before re-write.
The standard Spark JDBC connector allows one to set `truncate` option to solve this ([link](https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html)).
## Describe alternatives you've considered
Workarounds exist, but they are not very practical:
- Execute truncate statement through JDBC using some other library and write the rows using the connector in *append* mode
- Pass the privileges through `target_table_sql` parameter
- Using some external script/procedure to re-set the privileges
## Additional context
What the solution could look like after implementation:
```python
options['truncate'] = True
(sdf.write.format('com.vertica.spark.datasource.VerticaSource')
.mode('overwrite').options(**options).save())
```
| priority | support table truncate when writing in overwrite mode is your feature request related to a problem please describe we have a table where all the rows need to be re written after a spark batch job the table has existing permissions that should be preserved however the table permissions are lost when the rows are written in overwrite mode python sdf write format com vertica spark datasource verticasource mode overwrite options options save describe the solution you d like in overwrite mode there should be an option to truncate the target table instead of dropping it before re write the standard spark jdbc connector allows one to set truncate option to solve this describe alternatives you ve considered workarounds exist but they are not very practical execute truncate statement through jdbc using some other library and write the rows using the connector in append mode pass the privileges through target table sql parameter using some external script procedure to re set the privileges additional context what the solution could look like after implementation python options true sdf write format com vertica spark datasource verticasource mode overwrite options options save | 1 |
88,533 | 3,778,752,403 | IssuesEvent | 2016-03-18 02:49:20 | Vannevelj/VSDiagnostics | https://api.github.com/repos/Vannevelj/VSDiagnostics | closed | Reference ourselves | priority - high type - task | I want VSDiagnostics to be an example implementation of VSDiagnostics. Reference the NuGet package and fix all issues we find (I know there are several violations already).
Word of caution: I've noticed that sometimes after adding the VSDiagnostics NuGet package, the solution doesn't build anymore. If this occurs, go to the bottom of it because apparently this should not be happening. | 1.0 | Reference ourselves - I want VSDiagnostics to be an example implementation of VSDiagnostics. Reference the NuGet package and fix all issues we find (I know there are several violations already).
Word of caution: I've noticed that sometimes after adding the VSDiagnostics NuGet package, the solution doesn't build anymore. If this occurs, go to the bottom of it because apparently this should not be happening. | priority | reference ourselves i want vsdiagnostics to be an example implementation of vsdiagnostics reference the nuget package and fix all issues we find i know there are several violations already word of caution i ve noticed that sometimes after adding the vsdiagnostics nuget package the solution doesn t build anymore if this occurs go to the bottom of it because apparently this should not be happening | 1 |
720,780 | 24,806,062,849 | IssuesEvent | 2022-10-25 04:43:57 | AY2223S1-CS2113T-W11-1/tp | https://api.github.com/repos/AY2223S1-CS2113T-W11-1/tp | closed | Create feature to categorise expenses | type.Story priority.High | As a user, I can categorise my expenses so I can keep track of my spending across different areas. | 1.0 | Create feature to categorise expenses - As a user, I can categorise my expenses so I can keep track of my spending across different areas. | priority | create feature to categorise expenses as a user i can categorise my expenses so i can keep track of my spending across different areas | 1 |
98,332 | 4,020,128,749 | IssuesEvent | 2016-05-16 17:18:35 | berkmancenter/bookanook | https://api.github.com/repos/berkmancenter/bookanook | closed | Integration of Auto code review services | enhancement priority:high | Services like Codacy and Code Climate can be used to automate code review process. They check the code against certain rules for duplicity, security, complexity, etc. As the project is in the initial stage, this will help maintain certain coding standards and will be beneficial in long run. | 1.0 | Integration of Auto code review services - Services like Codacy and Code Climate can be used to automate code review process. They check the code against certain rules for duplicity, security, complexity, etc. As the project is in the initial stage, this will help maintain certain coding standards and will be beneficial in long run. | priority | integration of auto code review services services like codacy and code climate can be used to automate code review process they check the code against certain rules for duplicity security complexity etc as the project is in the initial stage this will help maintain certain coding standards and will be beneficial in long run | 1 |
248,260 | 7,928,607,698 | IssuesEvent | 2018-07-06 12:22:55 | jncc/topcat | https://api.github.com/repos/jncc/topcat | closed | Remove person contact from XML published to Data.gov.uk | high priority | The responsible organisation email field and the point of contact fields (on the Meta tab) in Topcat sometimes contain personal data and may also become out of date quickly.
Please make a change to Topcat so that the values in these fields are no longer populated in the relevant fields in the XML when the Topcat record is published to Data.gov.uk.
(For purposes of user enquiries DGU will display default values from JNCC's publisher details.) | 1.0 | Remove person contact from XML published to Data.gov.uk - The responsible organisation email field and the point of contact fields (on the Meta tab) in Topcat sometimes contain personal data and may also become out of date quickly.
Please make a change to Topcat so that the values in these fields are no longer populated in the relevant fields in the XML when the Topcat record is published to Data.gov.uk.
(For purposes of user enquiries DGU will display default values from JNCC's publisher details.) | priority | remove person contact from xml published to data gov uk the responsible organisation email field and the point of contact fields on the meta tab in topcat sometimes contain personal data and may also become out of date quickly please make a change to topcat so that the values in these fields are no longer populated in the relevant fields in the xml when the topcat record is published to data gov uk for purposes of user enquiries dgu will display default values from jncc s publisher details | 1 |
693,520 | 23,779,115,944 | IssuesEvent | 2022-09-02 01:24:33 | ballerina-platform/ballerina-dev-website | https://api.github.com/repos/ballerina-platform/ballerina-dev-website | closed | Fix BBE rendering issues | Priority/Highest Type/Task Area/BBEs | ## Description
Need to fix rendering issues in line breaks and HTML tags on BBEs.
## Related website/documentation area
> Add/Uncomment the relevant area label out of the following.
Area/BBEs
<!--Area/HomePageSamples-->
<!--Area/LearnPages-->
<!--Area/CommonPages-->
<!--Area/Backend-->
<!--Area/UIUX-->
<!--Area/Workflows-->
<!--Area/Blog-->
## Describe your task(s)
> A detailed description of the task.
## Related issue(s) (optional)
> Any related issues such as sub tasks and issues reported in other repositories (e.g., component repositories), similar problems, etc.
## Suggested label(s) (optional)
> Optional comma-separated list of suggested labels. Non committers can’t assign labels to issues, and thereby, this will help issue creators who are not a committer to suggest possible labels.
## Suggested assignee(s) (optional)
> Optional comma-separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, and thereby, this will help issue creators who are not a committer to suggest possible assignees.
| 1.0 | Fix BBE rendering issues - ## Description
Need to fix rendering issues in line breaks and HTML tags on BBEs.
## Related website/documentation area
> Add/Uncomment the relevant area label out of the following.
Area/BBEs
<!--Area/HomePageSamples-->
<!--Area/LearnPages-->
<!--Area/CommonPages-->
<!--Area/Backend-->
<!--Area/UIUX-->
<!--Area/Workflows-->
<!--Area/Blog-->
## Describe your task(s)
> A detailed description of the task.
## Related issue(s) (optional)
> Any related issues such as sub tasks and issues reported in other repositories (e.g., component repositories), similar problems, etc.
## Suggested label(s) (optional)
> Optional comma-separated list of suggested labels. Non committers can’t assign labels to issues, and thereby, this will help issue creators who are not a committer to suggest possible labels.
## Suggested assignee(s) (optional)
> Optional comma-separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, and thereby, this will help issue creators who are not a committer to suggest possible assignees.
| priority | fix bbe rendering issues description need to fix rendering issues in line breaks and html tags on bbes related website documentation area add uncomment the relevant area label out of the following area bbes describe your task s a detailed description of the task related issue s optional any related issues such as sub tasks and issues reported in other repositories e g component repositories similar problems etc suggested label s optional optional comma separated list of suggested labels non committers can’t assign labels to issues and thereby this will help issue creators who are not a committer to suggest possible labels suggested assignee s optional optional comma separated list of suggested team members who should attend the issue non committers can’t assign issues to assignees and thereby this will help issue creators who are not a committer to suggest possible assignees | 1 |
530,946 | 15,438,518,814 | IssuesEvent | 2021-03-07 20:40:09 | DPigeon/Money-Tree | https://api.github.com/repos/DPigeon/Money-Tree | closed | Display real values in owned stock (stock detail page) | High priority backend frontend | As a user I would like to see
Discussion:
- I think we should load the user's portfolio as they login. Any changes will be updated through the webhook
Note:
- We will have to implement webhooks to get the trades in real time (later)
**Acceptance Criteria**
- Load the user's owned stocks on login
- When the user goes on the stock detail page, it will display to them their actual stock data (If it exists)
Exists

Doesn't exist
 | 1.0 | Display real values in owned stock (stock detail page) - As a user I would like to see
Discussion:
- I think we should load the user's portfolio as they login. Any changes will be updated through the webhook
Note:
- We will have to implement webhooks to get the trades in real time (later)
**Acceptance Criteria**
- Load the user's owned stocks on login
- When the user goes on the stock detail page, it will display to them their actual stock data (If it exists)
Exists

Doesn't exist
 | priority | display real values in owned stock stock detail page as a user i would like to see discussion i think we should load the user s portfolio as they login any changes will be updated through the webhook note we will have to implement webhooks to get the trades in real time later acceptance criteria load the user s owned stocks on login when the user goes on the stock detail page it will display to them their actual stock data if it exists exists doesn t exist | 1 |
589,292 | 17,694,068,899 | IssuesEvent | 2021-08-24 13:32:13 | RoboJackets/apiary | https://api.github.com/repos/RoboJackets/apiary | closed | Handle 422s gracefully in dues flow | area / frontend priority / high type / bug | This is fixed-ish on the profile page but not in the dues flow, or at least not everywhere. | 1.0 | Handle 422s gracefully in dues flow - This is fixed-ish on the profile page but not in the dues flow, or at least not everywhere. | priority | handle gracefully in dues flow this is fixed ish on the profile page but not in the dues flow or at least not everywhere | 1 |
243,848 | 7,867,902,273 | IssuesEvent | 2018-06-23 14:48:36 | cdnjs/cdnjs | https://api.github.com/repos/cdnjs/cdnjs | closed | [Request] Add Vexflow | :label: Library Request :rotating_light: High Priority in progress | **Library name:** Vexflow
**Git repository url:** https://github.com/0xfe/vexflow
**npm package name or url** (if there is one): https://www.npmjs.com/package/vexflow
**License (List them all if it's multiple):** MIT
**Official homepage:** http://www.vexflow.com/ | 1.0 | [Request] Add Vexflow - **Library name:** Vexflow
**Git repository url:** https://github.com/0xfe/vexflow
**npm package name or url** (if there is one): https://www.npmjs.com/package/vexflow
**License (List them all if it's multiple):** MIT
**Official homepage:** http://www.vexflow.com/ | priority | add vexflow library name vexflow git repository url npm package name or url if there is one license list them all if it s multiple mit official homepage | 1 |
368,705 | 10,883,469,790 | IssuesEvent | 2019-11-18 05:02:37 | Zwilcox96/conference-connections | https://api.github.com/repos/Zwilcox96/conference-connections | closed | Home Page | High Priority enhancement front end user story | As a **admin** I would like to **see the home page** so that I can **access the app**
#### AC:
- [x] A webpage loads
- [x] The webpages contains some basic text and has some CSS styling
#### DOD:
- [x] All acceptance criteria are met
- [x] The feature has been tested for functionality
- [x] The feature has been tested for security | 1.0 | Home Page - As a **admin** I would like to **see the home page** so that I can **access the app**
#### AC:
- [x] A webpage loads
- [x] The webpages contains some basic text and has some CSS styling
#### DOD:
- [x] All acceptance criteria are met
- [x] The feature has been tested for functionality
- [x] The feature has been tested for security | priority | home page as a admin i would like to see the home page so that i can access the app ac a webpage loads the webpages contains some basic text and has some css styling dod all acceptance criteria are met the feature has been tested for functionality the feature has been tested for security | 1 |
415,921 | 12,137,029,866 | IssuesEvent | 2020-04-23 15:10:12 | blitz-js/blitz | https://api.github.com/repos/blitz-js/blitz | closed | Fix awkward pause in logging during blitz new | bug cli priority:high | **What is the problem?**
There's a long pause after the files are created, but before "Installing dependencies" is printed. This is because we are fetching the latest dependency versions.
Instead of an awkward pause, we should show some status update. Move the "installing dependencies" log above this, then add a spinner or something.
**Steps to Reproduce:**
1. Install global `blitz@0.5.0-canary.6`
2. Run `blitz new`
3. Notice a long pause after the files are created, but before "Installing dependencies" is printed
**Versions:**
- Blitz: 0.5.0-canary.6
| 1.0 | Fix awkward pause in logging during blitz new - **What is the problem?**
There's a long pause after the files are created, but before "Installing dependencies" is printed. This is because we are fetching the latest dependency versions.
Instead of an awkward pause, we should show some status update. Move the "installing dependencies" log above this, then add a spinner or something.
**Steps to Reproduce:**
1. Install global `blitz@0.5.0-canary.6`
2. Run `blitz new`
3. Notice a long pause after the files are created, but before "Installing dependencies" is printed
**Versions:**
- Blitz: 0.5.0-canary.6
| priority | fix awkward pause in logging during blitz new what is the problem there s a long pause after the files are created but before installing dependencies is printed this is because we are fetching the latest dependency versions instead of an awkward pause we should show some status update move the installing dependencies log above this then add a spinner or something steps to reproduce install global blitz canary run blitz new notice a long pause after the files are created but before installing dependencies is printed versions blitz canary | 1 |
618,119 | 19,425,434,648 | IssuesEvent | 2021-12-21 04:25:21 | lokka30/Treasury | https://api.github.com/repos/lokka30/Treasury | closed | Account System Overhaul | enhancement priority: high approved | The account system as it stands is kind of too similar to Vault's, what I mean by this is why are there just player accounts and bank accounts? While I understand it's just a naming scheme, it gives off a false sense that bank accounts are solely that, accounts for banking systems.
This is why I believe there should be a different Account Structure. It should look something like:
Account
- PlayerAccount (for players)
- utilizes a UUID
- GenericAccount( for non-players)
- Utilizes a String identifier
While I understand that the likelihood is near impossible of getting a duplicate UUID if we use this system for plugins such as Towny and Factions, it still provides another headache, which is on the server operator's end, which is if they wish to modify one of the balances for a town or faction directly in the database, they won't be able to find it based on a UUID easily.
Also, we are again forcing something onto economy providers that should be up to them to handle appropriately. If they want to store a UUID for a town they should be able to, but if they don't we shouldn't force it onto them.
The last point to this is the fact that under Minecraft convention, we are essentially calling non-player accounts(bank accounts), a Living Entity by providing a UUID to them.
Another point, the account access system should be attached to the Account interface itself. This would allow for the flexibility of player accounts being able to be shared, but not forcing it. It also doesn't restrict it to solely generic accounts. | 1.0 | Account System Overhaul - The account system as it stands is kind of too similar to Vault's, what I mean by this is why are there just player accounts and bank accounts? While I understand it's just a naming scheme, it gives off a false sense that bank accounts are solely that, accounts for banking systems.
This is why I believe there should be a different Account Structure. It should look something like:
Account
- PlayerAccount (for players)
- utilizes a UUID
- GenericAccount( for non-players)
- Utilizes a String identifier
While I understand that the likelihood is near impossible of getting a duplicate UUID if we use this system for plugins such as Towny and Factions, it still provides another headache, which is on the server operator's end, which is if they wish to modify one of the balances for a town or faction directly in the database, they won't be able to find it based on a UUID easily.
Also, we are again forcing something onto economy providers that should be up to them to handle appropriately. If they want to store a UUID for a town they should be able to, but if they don't we shouldn't force it onto them.
The last point to this is the fact that under Minecraft convention, we are essentially calling non-player accounts(bank accounts), a Living Entity by providing a UUID to them.
Another point, the account access system should be attached to the Account interface itself. This would allow for the flexibility of player accounts being able to be shared, but not forcing it. It also doesn't restrict it to solely generic accounts. | priority | account system overhaul the account system as it stands is kind of too similar to vault s what i mean by this is why are there just player accounts and bank accounts while i understand it s just a naming scheme it gives off a false sense that bank accounts are solely that accounts for banking systems this is why i believe there should be a different account structure it should look something like account playeraccount for players utilizes a uuid genericaccount for non players utilizes a string identifier while i understand that the likelihood is near impossible of getting a duplicate uuid if we use this system for plugins such as towny and factions it still provides another headache which is on the server operator s end which is if they wish to modify one of the balances for a town or faction directly in the database they won t be able to find it based on a uuid easily also we are again forcing something onto economy providers that should be up to them to handle appropriately if they want to store a uuid for a town they should be able to but if they don t we shouldn t force it onto them the last point to this is the fact that under minecraft convention we are essentially calling non player accounts bank accounts a living entity by providing a uuid to them another point the account access system should be attached to the account interface itself this would allow for the flexibility of player accounts being able to be shared but not forcing it it also doesn t restrict it to solely generic accounts | 1 |
494,896 | 14,268,063,560 | IssuesEvent | 2020-11-20 21:40:49 | CICE-Consortium/CICE | https://api.github.com/repos/CICE-Consortium/CICE | closed | warning ice_init_column.F90 | Priority: High Type: Bug | compiler complains that sicen don't have a value when used in cicecore/shared/ice_init_column.F90 line 936 . It is set to a private value in the openmp loope (line 894). Are sicen and trcrn_bgc suppose to be declared as private? | 1.0 | warning ice_init_column.F90 - compiler complains that sicen don't have a value when used in cicecore/shared/ice_init_column.F90 line 936 . It is set to a private value in the openmp loope (line 894). Are sicen and trcrn_bgc suppose to be declared as private? | priority | warning ice init column compiler complains that sicen don t have a value when used in cicecore shared ice init column line it is set to a private value in the openmp loope line are sicen and trcrn bgc suppose to be declared as private | 1 |
469,842 | 13,526,620,243 | IssuesEvent | 2020-09-15 14:27:33 | Automattic/woocommerce-payments | https://api.github.com/repos/Automattic/woocommerce-payments | closed | Redact sensitive data before logging | Priority: High Size S [Status] Has PR | e.g. the entirety of payment intent objects should not be logged - https://stripe.com/docs/api/payment_intents/object#payment_intent_object-client_secret - perhaps we could filter the logged object to show REDACTED or something for such fields | 1.0 | Redact sensitive data before logging - e.g. the entirety of payment intent objects should not be logged - https://stripe.com/docs/api/payment_intents/object#payment_intent_object-client_secret - perhaps we could filter the logged object to show REDACTED or something for such fields | priority | redact sensitive data before logging e g the entirety of payment intent objects should not be logged perhaps we could filter the logged object to show redacted or something for such fields | 1 |
110,312 | 4,425,068,133 | IssuesEvent | 2016-08-16 14:29:04 | leeensminger/DelDOT-NPDES-Field-Tool | https://api.github.com/repos/leeensminger/DelDOT-NPDES-Field-Tool | opened | Greyed out fields are still required for pipe end inventory | bug - high priority | I created a pipe end and selected Is Outlet = No on the first page of the inventory form. Discharge Downstream Type and Ownership Downstream were greyed out, as they should be. However, when saving the structure, a notification popped up stating that all inventory fields need to be filled out. I then changed Is Outlet to YES and filled in Discharge Downstream Type and Ownership Downstream and the structure was able to be saved. When Is Outlet = No, the two greyed out fields should not be required.


| 1.0 | Greyed out fields are still required for pipe end inventory - I created a pipe end and selected Is Outlet = No on the first page of the inventory form. Discharge Downstream Type and Ownership Downstream were greyed out, as they should be. However, when saving the structure, a notification popped up stating that all inventory fields need to be filled out. I then changed Is Outlet to YES and filled in Discharge Downstream Type and Ownership Downstream and the structure was able to be saved. When Is Outlet = No, the two greyed out fields should not be required.


| priority | greyed out fields are still required for pipe end inventory i created a pipe end and selected is outlet no on the first page of the inventory form discharge downstream type and ownership downstream were greyed out as they should be however when saving the structure a notification popped up stating that all inventory fields need to be filled out i then changed is outlet to yes and filled in discharge downstream type and ownership downstream and the structure was able to be saved when is outlet no the two greyed out fields should not be required | 1 |
146,828 | 5,628,593,803 | IssuesEvent | 2017-04-05 07:02:38 | nus-mtp/steps-networking-module | https://api.github.com/repos/nus-mtp/steps-networking-module | closed | Profile routing on page refresh | bug high-priority | Refers #289
Criteria:
1. When user refreshes page, profile link should be accessible and doe not return a invalid url | 1.0 | Profile routing on page refresh - Refers #289
Criteria:
1. When user refreshes page, profile link should be accessible and doe not return a invalid url | priority | profile routing on page refresh refers criteria when user refreshes page profile link should be accessible and doe not return a invalid url | 1 |
185,724 | 6,727,072,060 | IssuesEvent | 2017-10-17 12:21:47 | unfoldingWord-dev/translationCore | https://api.github.com/repos/unfoldingWord-dev/translationCore | opened | Windows install fails if Git is not installed | Priority High | Installer warns that Git is required and prompts user to click Continue to install it, but then this message is displayed:
https://github.com/unfoldingWord-dev/translationCore/releases | 1.0 | Windows install fails if Git is not installed - Installer warns that Git is required and prompts user to click Continue to install it, but then this message is displayed:
https://github.com/unfoldingWord-dev/translationCore/releases | priority | windows install fails if git is not installed installer warns that git is required and prompts user to click continue to install it but then this message is displayed | 1 |
774,809 | 27,212,154,656 | IssuesEvent | 2023-02-20 17:27:38 | gitpod-io/gitpod | https://api.github.com/repos/gitpod-io/gitpod | closed | Epic: Get rid of OTS (One-Time Secret) | team: webapp team: workspace type: epic priority: high | ### Summary
The one-time secret (OTS) mechanism is used to deliver secrets to the workspace cluster. It's not location-aware which breaks workspace startup across regions.
### Context
The one-time secret (OTS) mechanism is used to deliver secrets to the workspace cluster. During workspace startup, `server` will create up to three OTS:
- one for the SCM token
- one for the Gitpod token
- potentially one for environment variables
A one-time secret is stored in the database and identified by a UUID. Using this UUID it can be downloaded once, after which it's removed from the database. When the OTS is created, `server` produces a URL from which the OTS can be downloaded. This URL is not location-specific, but uses the load balancer (i.e. gitpod.io/...).
Because the different regions use different databases which are synchronised using db-sync, and because the OTS URL is not region-aware, a workspace created in another region will race db-sync. This can lead to workspace startup failure.
### Value
Removing OTS will
- reduce failure modes (see https://github.com/gitpod-io/gitpod/issues/8096)
- enable cross-region prebuilds (see https://github.com/gitpod-io/gitpod/issues/6650)
- reduce complexity in webapp
### Acceptance Criteria
This work is complete when
- there's a secure way to keep secrets on the workspace side (Kubernetes secrets qualify here)
- the OTS mechanism is no longer in use for shipping secrets
- the OTS mechanism has been removed from the code-base
### Measurement
We are successful here when there's no loss of functionality, and no more need for the OTS mechansim.
# Tasks
- [x] Add "secret" support to ws-manager, where a `StartWorkspace` request can carry named secrets
- [x] Ship the SCM token as named secret and pass it to ws-daemon during `InitWorkspace`
- [x] Ship the Gitpod token as named secret and pass it as environment variable to `supervisor`
- [x] Ship the user's environment variables as named secrets and pass as environment variables to the workspace
- [x] https://github.com/gitpod-io/gitpod/issues/12554
- [x] https://github.com/gitpod-io/gitpod/issues/11318
- [x] https://github.com/gitpod-io/gitpod/issues/13490
- [x] https://github.com/gitpod-io/gitpod/pull/13484
- [x] https://github.com/gitpod-io/ops/issues/5608
- [x] #13632
- [x] #13633
- [x] #13634
| 1.0 | Epic: Get rid of OTS (One-Time Secret) - ### Summary
The one-time secret (OTS) mechanism is used to deliver secrets to the workspace cluster. It's not location-aware which breaks workspace startup across regions.
### Context
The one-time secret (OTS) mechanism is used to deliver secrets to the workspace cluster. During workspace startup, `server` will create up to three OTS:
- one for the SCM token
- one for the Gitpod token
- potentially one for environment variables
A one-time secret is stored in the database and identified by a UUID. Using this UUID it can be downloaded once, after which it's removed from the database. When the OTS is created, `server` produces a URL from which the OTS can be downloaded. This URL is not location-specific, but uses the load balancer (i.e. gitpod.io/...).
Because the different regions use different databases which are synchronised using db-sync, and because the OTS URL is not region-aware, a workspace created in another region will race db-sync. This can lead to workspace startup failure.
### Value
Removing OTS will
- reduce failure modes (see https://github.com/gitpod-io/gitpod/issues/8096)
- enable cross-region prebuilds (see https://github.com/gitpod-io/gitpod/issues/6650)
- reduce complexity in webapp
### Acceptance Criteria
This work is complete when
- there's a secure way to keep secrets on the workspace side (Kubernetes secrets qualify here)
- the OTS mechanism is no longer in use for shipping secrets
- the OTS mechanism has been removed from the code-base
### Measurement
We are successful here when there's no loss of functionality, and no more need for the OTS mechansim.
# Tasks
- [x] Add "secret" support to ws-manager, where a `StartWorkspace` request can carry named secrets
- [x] Ship the SCM token as named secret and pass it to ws-daemon during `InitWorkspace`
- [x] Ship the Gitpod token as named secret and pass it as environment variable to `supervisor`
- [x] Ship the user's environment variables as named secrets and pass as environment variables to the workspace
- [x] https://github.com/gitpod-io/gitpod/issues/12554
- [x] https://github.com/gitpod-io/gitpod/issues/11318
- [x] https://github.com/gitpod-io/gitpod/issues/13490
- [x] https://github.com/gitpod-io/gitpod/pull/13484
- [x] https://github.com/gitpod-io/ops/issues/5608
- [x] #13632
- [x] #13633
- [x] #13634
| priority | epic get rid of ots one time secret summary the one time secret ots mechanism is used to deliver secrets to the workspace cluster it s not location aware which breaks workspace startup across regions context the one time secret ots mechanism is used to deliver secrets to the workspace cluster during workspace startup server will create up to three ots one for the scm token one for the gitpod token potentially one for environment variables a one time secret is stored in the database and identified by a uuid using this uuid it can be downloaded once after which it s removed from the database when the ots is created server produces a url from which the ots can be downloaded this url is not location specific but uses the load balancer i e gitpod io because the different regions use different databases which are synchronised using db sync and because the ots url is not region aware a workspace created in another region will race db sync this can lead to workspace startup failure value removing ots will reduce failure modes see enable cross region prebuilds see reduce complexity in webapp acceptance criteria this work is complete when there s a secure way to keep secrets on the workspace side kubernetes secrets qualify here the ots mechanism is no longer in use for shipping secrets the ots mechanism has been removed from the code base measurement we are successful here when there s no loss of functionality and no more need for the ots mechansim tasks add secret support to ws manager where a startworkspace request can carry named secrets ship the scm token as named secret and pass it to ws daemon during initworkspace ship the gitpod token as named secret and pass it as environment variable to supervisor ship the user s environment variables as named secrets and pass as environment variables to the workspace | 1 |
677,427 | 23,161,420,150 | IssuesEvent | 2022-07-29 18:12:15 | bitsongofficial/sinfonia-ui | https://api.github.com/repos/bitsongofficial/sinfonia-ui | closed | Feedback on epoch time | enhancement High Priority Review | When the platform reaches the epoch time, the whole platform slows down in the requests.
For this reason, it could be helpful to show feedback to the user that shows that "things can be delayed".
We can think to add a tooltip near the epoch time countdown, or show a popup to every user on each page they're exactly at 00:00 of the epoch time.
@giannnni what do you think about it? | 1.0 | Feedback on epoch time - When the platform reaches the epoch time, the whole platform slows down in the requests.
For this reason, it could be helpful to show feedback to the user that shows that "things can be delayed".
We can think to add a tooltip near the epoch time countdown, or show a popup to every user on each page they're exactly at 00:00 of the epoch time.
@giannnni what do you think about it? | priority | feedback on epoch time when the platform reaches the epoch time the whole platform slows down in the requests for this reason it could be helpful to show feedback to the user that shows that things can be delayed we can think to add a tooltip near the epoch time countdown or show a popup to every user on each page they re exactly at of the epoch time giannnni what do you think about it | 1 |
337,159 | 10,211,336,584 | IssuesEvent | 2019-08-14 16:40:39 | NCAR/MET | https://api.github.com/repos/NCAR/MET | closed | tc_rmw UL/UR/LL/LR - regrid error - bad interpolation method | priority: high type: bug | tc_rmw UL/UR/LL/LR - regrid error - bad interpolation method
Also:
Log output doesn't specify what type of regrid method and width is used, unless it's a higher verbosity than 3 - i see this log info with other tools and it's useful in ensuring it ran with the intended interpolation method. | 1.0 | tc_rmw UL/UR/LL/LR - regrid error - bad interpolation method - tc_rmw UL/UR/LL/LR - regrid error - bad interpolation method
Also:
Log output doesn't specify what type of regrid method and width is used, unless it's a higher verbosity than 3 - i see this log info with other tools and it's useful in ensuring it ran with the intended interpolation method. | priority | tc rmw ul ur ll lr regrid error bad interpolation method tc rmw ul ur ll lr regrid error bad interpolation method also log output doesn t specify what type of regrid method and width is used unless it s a higher verbosity than i see this log info with other tools and it s useful in ensuring it ran with the intended interpolation method | 1 |
314,714 | 9,601,918,010 | IssuesEvent | 2019-05-10 13:24:58 | infor-design/enterprise | https://api.github.com/repos/infor-design/enterprise | closed | Locale: incorrect time patterns | [2] priority: high type: bug :bug: | According to locale definitions here https://github.com/moment/moment/tree/develop/locale
And I have also checked in C#, time pattern for e.g. sl-SI it should be using ':' and not '.'
Raising a question whether there's more mismatch in the locale definitions.
At least these have wrong time pattern
da-DK
pl-PL
pt-BR
sl-SI | 1.0 | Locale: incorrect time patterns - According to locale definitions here https://github.com/moment/moment/tree/develop/locale
And I have also checked in C#, time pattern for e.g. sl-SI it should be using ':' and not '.'
Raising a question whether there's more mismatch in the locale definitions.
At least these have wrong time pattern
da-DK
pl-PL
pt-BR
sl-SI | priority | locale incorrect time patterns according to locale definitions here and i have also checked in c time pattern for e g sl si it should be using and not raising a question whether there s more mismatch in the locale definitions at least these have wrong time pattern da dk pl pl pt br sl si | 1 |
814,017 | 30,483,166,946 | IssuesEvent | 2023-07-17 22:20:33 | OregonDigital/OD2 | https://api.github.com/repos/OregonDigital/OD2 | closed | SolrDocuments with large full text fields load slowly | Bug Priority - High Ready for Development | ### Descriptive summary
Search results for works with large (h)OCR/extracted text load unacceptably slow. This is best seen when searching the OSU General Catalog collection. These works are dense articles with lots of text per page and hundreds of pages.
https://oregondigital.org/catalog?f%5Bnon_user_collections_ssim%5D%5B%5D=general-catalogs&locale=en&search_field=all_fields
These searches can take a minute or more to load. This is from fileset documents taking 1-5sec to load for each work returned.
This could probably be done by preventing the all text field from being returned unless we actually need it.
Alternatively, I'd be ok with just getting the search results to speed up. We could look at why solr docs are pulled for filesets on searches and see if there's a way to work around it.
### Expected behavior
Solr Documents for FileSets with large text content load consistently under 1sec.
### Related work
https://issues.apache.org/jira/browse/SOLR-3191
An issue for excluding a field from solr search results exists but it seems abandoned
### Accessibility Concerns
N/A
| 1.0 | SolrDocuments with large full text fields load slowly - ### Descriptive summary
Search results for works with large (h)OCR/extracted text load unacceptably slow. This is best seen when searching the OSU General Catalog collection. These works are dense articles with lots of text per page and hundreds of pages.
https://oregondigital.org/catalog?f%5Bnon_user_collections_ssim%5D%5B%5D=general-catalogs&locale=en&search_field=all_fields
These searches can take a minute or more to load. This is from fileset documents taking 1-5sec to load for each work returned.
This could probably be done by preventing the all text field from being returned unless we actually need it.
Alternatively, I'd be ok with just getting the search results to speed up. We could look at why solr docs are pulled for filesets on searches and see if there's a way to work around it.
### Expected behavior
Solr Documents for FileSets with large text content load consistently under 1sec.
### Related work
https://issues.apache.org/jira/browse/SOLR-3191
An issue for excluding a field from solr search results exists but it seems abandoned
### Accessibility Concerns
N/A
| priority | solrdocuments with large full text fields load slowly descriptive summary search results for works with large h ocr extracted text load unacceptably slow this is best seen when searching the osu general catalog collection these works are dense articles with lots of text per page and hundreds of pages these searches can take a minute or more to load this is from fileset documents taking to load for each work returned this could probably be done by preventing the all text field from being returned unless we actually need it alternatively i d be ok with just getting the search results to speed up we could look at why solr docs are pulled for filesets on searches and see if there s a way to work around it expected behavior solr documents for filesets with large text content load consistently under related work an issue for excluding a field from solr search results exists but it seems abandoned accessibility concerns n a | 1 |
34,738 | 2,787,140,633 | IssuesEvent | 2015-05-08 02:01:06 | roundware/roundware-server | https://api.github.com/repos/roundware/roundware-server | closed | global listen mode no longer working | bug high priority | When I try to listen to a project that does not have geo-listen enabled, it plays the speaker audio, but not any assets if I do not send a lat/lon parameters with ```request_stream``` or ```modify_stream```. If I do send lat/lon parameters, even though the project is flagged as not a geo-listen project, the assets within range are played, but no other assets are played. This behavior leads me to believe that there is no longer a distinction being made between geo-listen and global-listen projects i.e. all projects are being treated as geo-listen.
This could have changed when the whole refactoring of the code regarding the ```playlist``` occurred as this is the code that deals with this stuff. I haven't yet looked into earlier code to see how it was previously handled, but I believe the proper approach is that if the project is flagged as global-listen (ie geo_listen_enabled = false), when the ```playlist``` is being created, any lat/lon parameters should be ignored and the ```playlist``` should be populated with ALL recordings in the project per the given tag filters instead of none of the assets (not in range) as is happening now.
I will investigate the code more, but wanted to record the issue based on my initial tests. | 1.0 | global listen mode no longer working - When I try to listen to a project that does not have geo-listen enabled, it plays the speaker audio, but not any assets if I do not send a lat/lon parameters with ```request_stream``` or ```modify_stream```. If I do send lat/lon parameters, even though the project is flagged as not a geo-listen project, the assets within range are played, but no other assets are played. This behavior leads me to believe that there is no longer a distinction being made between geo-listen and global-listen projects i.e. all projects are being treated as geo-listen.
This could have changed when the whole refactoring of the code regarding the ```playlist``` occurred as this is the code that deals with this stuff. I haven't yet looked into earlier code to see how it was previously handled, but I believe the proper approach is that if the project is flagged as global-listen (ie geo_listen_enabled = false), when the ```playlist``` is being created, any lat/lon parameters should be ignored and the ```playlist``` should be populated with ALL recordings in the project per the given tag filters instead of none of the assets (not in range) as is happening now.
I will investigate the code more, but wanted to record the issue based on my initial tests. | priority | global listen mode no longer working when i try to listen to a project that does not have geo listen enabled it plays the speaker audio but not any assets if i do not send a lat lon parameters with request stream or modify stream if i do send lat lon parameters even though the project is flagged as not a geo listen project the assets within range are played but no other assets are played this behavior leads me to believe that there is no longer a distinction being made between geo listen and global listen projects i e all projects are being treated as geo listen this could have changed when the whole refactoring of the code regarding the playlist occurred as this is the code that deals with this stuff i haven t yet looked into earlier code to see how it was previously handled but i believe the proper approach is that if the project is flagged as global listen ie geo listen enabled false when the playlist is being created any lat lon parameters should be ignored and the playlist should be populated with all recordings in the project per the given tag filters instead of none of the assets not in range as is happening now i will investigate the code more but wanted to record the issue based on my initial tests | 1 |
405,742 | 11,881,972,416 | IssuesEvent | 2020-03-27 13:39:36 | Intro-to-SE-Spring-2020/Chirpr | https://api.github.com/repos/Intro-to-SE-Spring-2020/Chirpr | closed | [BUG] Registration not registering user | High Priority bug frontend | ### Details
- Registration is not working with redux.
- User cannot register and application will throw errors.
### Acceptance Criteria:
- User can register.
- Registration errors are handled and displayed to user.
### Story Points
Story Points: 2 | 1.0 | [BUG] Registration not registering user - ### Details
- Registration is not working with redux.
- User cannot register and application will throw errors.
### Acceptance Criteria:
- User can register.
- Registration errors are handled and displayed to user.
### Story Points
Story Points: 2 | priority | registration not registering user details registration is not working with redux user cannot register and application will throw errors acceptance criteria user can register registration errors are handled and displayed to user story points story points | 1 |
782,398 | 27,495,541,461 | IssuesEvent | 2023-03-05 04:36:18 | AY2223S2-CS2103T-T09-2/tp | https://api.github.com/repos/AY2223S2-CS2103T-T09-2/tp | closed | Modify AboutUs and README | v1.1 type.Story priority.High | As a developer, I can read the documentation so that I can credit the original developers of the project. | 1.0 | Modify AboutUs and README - As a developer, I can read the documentation so that I can credit the original developers of the project. | priority | modify aboutus and readme as a developer i can read the documentation so that i can credit the original developers of the project | 1 |
628,911 | 20,017,937,510 | IssuesEvent | 2022-02-01 13:56:12 | GoldenSoftwareLtd/gedemin | https://api.github.com/repos/GoldenSoftwareLtd/gedemin | closed | Tgdc_frmInvSelectGoodRemains | Type-Enhancement GedeminExe Priority-High | Originally reported on Google Code with ID 2280
```
Есть форма просмотра остатков по товару из складского документа.
Хотелось бы иметь такую же, но которая могла бы работать без привязки к документу склада,
что вы думаете на счёт этого (Tgdc_frmInvSelectGoodRemains и подобные этого не могут)
```
Reported by `Alexander.GoldenSoft` on 2010-12-08 13:47:46
| 1.0 | Tgdc_frmInvSelectGoodRemains - Originally reported on Google Code with ID 2280
```
Есть форма просмотра остатков по товару из складского документа.
Хотелось бы иметь такую же, но которая могла бы работать без привязки к документу склада,
что вы думаете на счёт этого (Tgdc_frmInvSelectGoodRemains и подобные этого не могут)
```
Reported by `Alexander.GoldenSoft` on 2010-12-08 13:47:46
| priority | tgdc frminvselectgoodremains originally reported on google code with id есть форма просмотра остатков по товару из складского документа хотелось бы иметь такую же но которая могла бы работать без привязки к документу склада что вы думаете на счёт этого tgdc frminvselectgoodremains и подобные этого не могут reported by alexander goldensoft on | 1 |
343,870 | 10,337,765,253 | IssuesEvent | 2019-09-03 15:28:40 | pragdave/earmark | https://api.github.com/repos/pragdave/earmark | closed | Feature idea: Output manipulation | Priority: HIGH enhancement under development | Apologies if this is something earmark can already do, or there is a way around it but I couldn't find anything on it.
I'm woking on an application that is using earmark, and I want to be able to add links to a page that are anchors to the headers produced by earmark. As far as I can tell, as of right now I would basically need to produce the HTML, and then edit the result manually which is fine but seems a bit long winded.
Unless I'm mistaken, using the plugin system wouldn't really work, as there are multiple ways to define a header, either via the `#` syntax, or underlining the word with `=` symbols, and in the latter case, registering a plugin for `=` would simply result in a line
## Proposal
Introduce a new configuration option that works in a similar way to the plugin system, but instead of giving you the unprocessed content, gives you the already processed output that can be manipulated. These could be registered on a per tag basis, so I could in this instance register the processor for `h1` and `h2` tags for example.
Obviously this makes it sound a lot simpler than I imagine it is but you get the idea.
**Edit:**
I realise there is the IAL extension for adding HTML attributes, however this puts the responsibility of doing so on the author of the markdown. The project I'm working on needs to do this automatically without any input from the markdown being processed. | 1.0 | Feature idea: Output manipulation - Apologies if this is something earmark can already do, or there is a way around it but I couldn't find anything on it.
I'm woking on an application that is using earmark, and I want to be able to add links to a page that are anchors to the headers produced by earmark. As far as I can tell, as of right now I would basically need to produce the HTML, and then edit the result manually which is fine but seems a bit long winded.
Unless I'm mistaken, using the plugin system wouldn't really work, as there are multiple ways to define a header, either via the `#` syntax, or underlining the word with `=` symbols, and in the latter case, registering a plugin for `=` would simply result in a line
## Proposal
Introduce a new configuration option that works in a similar way to the plugin system, but instead of giving you the unprocessed content, gives you the already processed output that can be manipulated. These could be registered on a per tag basis, so I could in this instance register the processor for `h1` and `h2` tags for example.
Obviously this makes it sound a lot simpler than I imagine it is but you get the idea.
**Edit:**
I realise there is the IAL extension for adding HTML attributes, however this puts the responsibility of doing so on the author of the markdown. The project I'm working on needs to do this automatically without any input from the markdown being processed. | priority | feature idea output manipulation apologies if this is something earmark can already do or there is a way around it but i couldn t find anything on it i m woking on an application that is using earmark and i want to be able to add links to a page that are anchors to the headers produced by earmark as far as i can tell as of right now i would basically need to produce the html and then edit the result manually which is fine but seems a bit long winded unless i m mistaken using the plugin system wouldn t really work as there are multiple ways to define a header either via the syntax or underlining the word with symbols and in the latter case registering a plugin for would simply result in a line proposal introduce a new configuration option that works in a similar way to the plugin system but instead of giving you the unprocessed content gives you the already processed output that can be manipulated these could be registered on a per tag basis so i could in this instance register the processor for and tags for example obviously this makes it sound a lot simpler than i imagine it is but you get the idea edit i realise there is the ial extension for adding html attributes however this puts the responsibility of doing so on the author of the markdown the project i m working on needs to do this automatically without any input from the markdown being processed | 1 |
262,220 | 8,256,951,915 | IssuesEvent | 2018-09-13 02:02:14 | colindamelio/saraswati | https://api.github.com/repos/colindamelio/saraswati | opened | Copy Feedback (to be implemented) | priority-high | Please check when complete! And update with the PR (or commit) beside the line-item:
- [ ] Hero: "Apply" Could be on it's own line, the CTA would stand out better.
- [ ] Header: Feels tall; we could shrink it to a smaller "truncated" height on scroll
- [ ] "our goal is to ensure you have a strong foundation", of what?
- [ ] Why is "JavaScript" in all caps?
- [ ] "Providing you an authentic Balinese Experience.". Shouldn't this be "Providing you WITH an authentic Balinese Experience."
- [ ] "Without a doubt, Bali is beautiful. However, most people who visit the island miss an opportunity to discover authentic Balinese Culture." What is the goal of this copy? I know there is authentic cuisine here, but why comment negatively about other travellers?
- [ ] "Saraswati Retreats strives to educate our guests beyond coding their website ". "Coding their website" sounds sloppy.
- [ ] A lot of "APPLY NOW" + "SEE ALL ACTIVITIES" CTA groups; would you be better off fixing it to the bottom of the screen?
- [ ] We could Hyperlink a map to the accomodations, to provide more credibility.
- [ ] "Located in the charming village of Umalas, near Seminyak, Villa Malaathina – set among 5,000 square metres of immaculate tropical gardens and surrounded by traditional Balinese rice paddies – is home for the duration of our retreat." is gramatically incorrect.
- [ ] Under "What's Included" the Note section could use smaller text, as it is lesser prominence
- [ ] "Save big when you register early for upcoming retreats! See retreat dates for Early Bird cut-off." This is gramatically incorrect. Personalize it by injecting "Our" where possible; "Our Early Bird cut-off", "our upcoming retreats.
- [ ] Form textarea; great work on the max-width, should have a max-height too. I dislike how that scales the image... feels sloppy
- [ ] Find a better way to communicate it's FRANCES who is from Bali. The team is from Canada. | 1.0 | Copy Feedback (to be implemented) - Please check when complete! And update with the PR (or commit) beside the line-item:
- [ ] Hero: "Apply" Could be on it's own line, the CTA would stand out better.
- [ ] Header: Feels tall; we could shrink it to a smaller "truncated" height on scroll
- [ ] "our goal is to ensure you have a strong foundation", of what?
- [ ] Why is "JavaScript" in all caps?
- [ ] "Providing you an authentic Balinese Experience.". Shouldn't this be "Providing you WITH an authentic Balinese Experience."
- [ ] "Without a doubt, Bali is beautiful. However, most people who visit the island miss an opportunity to discover authentic Balinese Culture." What is the goal of this copy? I know there is authentic cuisine here, but why comment negatively about other travellers?
- [ ] "Saraswati Retreats strives to educate our guests beyond coding their website ". "Coding their website" sounds sloppy.
- [ ] A lot of "APPLY NOW" + "SEE ALL ACTIVITIES" CTA groups; would you be better off fixing it to the bottom of the screen?
- [ ] We could Hyperlink a map to the accomodations, to provide more credibility.
- [ ] "Located in the charming village of Umalas, near Seminyak, Villa Malaathina – set among 5,000 square metres of immaculate tropical gardens and surrounded by traditional Balinese rice paddies – is home for the duration of our retreat." is gramatically incorrect.
- [ ] Under "What's Included" the Note section could use smaller text, as it is lesser prominence
- [ ] "Save big when you register early for upcoming retreats! See retreat dates for Early Bird cut-off." This is gramatically incorrect. Personalize it by injecting "Our" where possible; "Our Early Bird cut-off", "our upcoming retreats.
- [ ] Form textarea; great work on the max-width, should have a max-height too. I dislike how that scales the image... feels sloppy
- [ ] Find a better way to communicate it's FRANCES who is from Bali. The team is from Canada. | priority | copy feedback to be implemented please check when complete and update with the pr or commit beside the line item hero apply could be on it s own line the cta would stand out better header feels tall we could shrink it to a smaller truncated height on scroll our goal is to ensure you have a strong foundation of what why is javascript in all caps providing you an authentic balinese experience shouldn t this be providing you with an authentic balinese experience without a doubt bali is beautiful however most people who visit the island miss an opportunity to discover authentic balinese culture what is the goal of this copy i know there is authentic cuisine here but why comment negatively about other travellers saraswati retreats strives to educate our guests beyond coding their website coding their website sounds sloppy a lot of apply now see all activities cta groups would you be better off fixing it to the bottom of the screen we could hyperlink a map to the accomodations to provide more credibility located in the charming village of umalas near seminyak villa malaathina – set among square metres of immaculate tropical gardens and surrounded by traditional balinese rice paddies – is home for the duration of our retreat is gramatically incorrect under what s included the note section could use smaller text as it is lesser prominence save big when you register early for upcoming retreats see retreat dates for early bird cut off this is gramatically incorrect personalize it by injecting our where possible our early bird cut off our upcoming retreats form textarea great work on the max width should have a max height too i dislike how that scales the image feels sloppy find a better way to communicate it s frances who is from bali the team is from canada | 1 |
444,614 | 12,815,016,710 | IssuesEvent | 2020-07-04 22:47:48 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Unable to use module built-in APIs when using ModuleList / Sequential / ModuleDict subclasses in TorchScript | high priority jit triage review | DPER3 would like to subclass PyTorch `ModuleList` / `Sequential` / `ModuleDict` and add framework-specific logic, however we found that we won't be able to access the modules' built-in APIs (such as `__len__` and `__contains__`) when scripting the container subclasses.
To reproduce:
```python
import torch
class DPER3ModuleInterface(torch.nn.Module):
def __init__(self):
super(DPER3ModuleInterface, self).__init__()
class DPER3ModuleList(DPER3ModuleInterface, torch.nn.ModuleList):
def __init__(self, modules=None):
DPER3ModuleInterface.__init__(self)
torch.nn.ModuleList.__init__(self, modules)
class DPER3Sequential(DPER3ModuleInterface, torch.nn.Sequential):
def __init__(self, modules=None):
DPER3ModuleInterface.__init__(self)
torch.nn.Sequential.__init__(self, modules)
class DPER3ModuleDict(DPER3ModuleInterface, torch.nn.ModuleDict):
def __init__(self, modules=None):
DPER3ModuleInterface.__init__(self)
torch.nn.ModuleDict.__init__(self, modules)
class MyModule(torch.nn.Module):
def __init__(self):
super(MyModule, self).__init__()
self.submod = torch.nn.Linear(3, 4)
self.modulelist = DPER3ModuleList([self.submod])
self.sequential = DPER3Sequential(self.submod)
self.moduledict = DPER3ModuleDict({"submod": self.submod})
def forward(self, inputs):
# ============== DPER3ModuleList ==============
# Test `__getitem__()`
assert self.modulelist[0] is self.submod
# Test `__len__()`
#
# PROBLEM: this throws:
# ```
# Tried to access nonexistent attribute or method '__len__' of type '__torch__.ModuleList'.
# Did you forget to initialize an attribute in __init__()?
# ```
assert len(self.modulelist) == 1
# Test `__iter__()`
for module in self.modulelist:
assert module is self.submod
# ============== DPER3Sequential ==============
# Test `__getitem__()`
assert self.sequential[0] is self.submod
# Test `__len__()`
#
# PROBLEM: this throws:
# ```
# Tried to access nonexistent attribute or method '__len__' of type '__torch__.DPER3Sequential'.
# Did you forget to initialize an attribute in __init__()?
# ```
assert len(self.sequential) == 1
# Test `__iter__()`
for module in self.sequential:
assert module is self.submod
# ============== DPER3ModuleDict ==============
# Test `__getitem__()``
#
# PROBLEM: this throws:
# ```
# Only ModuleList, Sequential, and ModuleDict modules are subscriptable
# ```
assert self.moduledict["submod"] is self.submod
# Test `__len__()`
#
# PROBLEM: this throws:
# ```
# Tried to access nonexistent attribute or method '__len__' of type '__torch__.DPER3ModuleDict'.
# Did you forget to initialize an attribute in __init__()?
# ```
assert len(self.moduledict) == 1
# Test `__iter__()`
for module in self.moduledict:
assert module is self.submod
# Test `__contains__()`
#
# PROBLEM: this throws:
# ```
# Tried to access nonexistent attribute or method '__contains__' of type '__torch__.DPER3ModuleDict'.
# Did you forget to initialize an attribute in __init__()?
# ```
assert "submod" in self.moduledict
# Test `keys()`
for key in self.moduledict.keys():
assert key == "submod"
# Test `items()`
for item in self.moduledict.items():
assert item[0] == "submod"
assert item[1] is self.submod
# Test `values()`
for value in self.moduledict.values():
assert value is self.submod
return inputs
m = MyModule()
torch.jit.script(m)
```
Since we already support some of the built-in APIs (e.g. `__iter__`) on a container subclass, ideally we should expand the support to cover all built-in APIs.
cc. @wanchaol @suo
cc @ezyang @gchanan @zou3519 @suo @gmagogsfm | 1.0 | Unable to use module built-in APIs when using ModuleList / Sequential / ModuleDict subclasses in TorchScript - DPER3 would like to subclass PyTorch `ModuleList` / `Sequential` / `ModuleDict` and add framework-specific logic, however we found that we won't be able to access the modules' built-in APIs (such as `__len__` and `__contains__`) when scripting the container subclasses.
To reproduce:
```python
import torch
class DPER3ModuleInterface(torch.nn.Module):
def __init__(self):
super(DPER3ModuleInterface, self).__init__()
class DPER3ModuleList(DPER3ModuleInterface, torch.nn.ModuleList):
def __init__(self, modules=None):
DPER3ModuleInterface.__init__(self)
torch.nn.ModuleList.__init__(self, modules)
class DPER3Sequential(DPER3ModuleInterface, torch.nn.Sequential):
def __init__(self, modules=None):
DPER3ModuleInterface.__init__(self)
torch.nn.Sequential.__init__(self, modules)
class DPER3ModuleDict(DPER3ModuleInterface, torch.nn.ModuleDict):
def __init__(self, modules=None):
DPER3ModuleInterface.__init__(self)
torch.nn.ModuleDict.__init__(self, modules)
class MyModule(torch.nn.Module):
def __init__(self):
super(MyModule, self).__init__()
self.submod = torch.nn.Linear(3, 4)
self.modulelist = DPER3ModuleList([self.submod])
self.sequential = DPER3Sequential(self.submod)
self.moduledict = DPER3ModuleDict({"submod": self.submod})
def forward(self, inputs):
# ============== DPER3ModuleList ==============
# Test `__getitem__()`
assert self.modulelist[0] is self.submod
# Test `__len__()`
#
# PROBLEM: this throws:
# ```
# Tried to access nonexistent attribute or method '__len__' of type '__torch__.ModuleList'.
# Did you forget to initialize an attribute in __init__()?
# ```
assert len(self.modulelist) == 1
# Test `__iter__()`
for module in self.modulelist:
assert module is self.submod
# ============== DPER3Sequential ==============
# Test `__getitem__()`
assert self.sequential[0] is self.submod
# Test `__len__()`
#
# PROBLEM: this throws:
# ```
# Tried to access nonexistent attribute or method '__len__' of type '__torch__.DPER3Sequential'.
# Did you forget to initialize an attribute in __init__()?
# ```
assert len(self.sequential) == 1
# Test `__iter__()`
for module in self.sequential:
assert module is self.submod
# ============== DPER3ModuleDict ==============
# Test `__getitem__()``
#
# PROBLEM: this throws:
# ```
# Only ModuleList, Sequential, and ModuleDict modules are subscriptable
# ```
assert self.moduledict["submod"] is self.submod
# Test `__len__()`
#
# PROBLEM: this throws:
# ```
# Tried to access nonexistent attribute or method '__len__' of type '__torch__.DPER3ModuleDict'.
# Did you forget to initialize an attribute in __init__()?
# ```
assert len(self.moduledict) == 1
# Test `__iter__()`
for module in self.moduledict:
assert module is self.submod
# Test `__contains__()`
#
# PROBLEM: this throws:
# ```
# Tried to access nonexistent attribute or method '__contains__' of type '__torch__.DPER3ModuleDict'.
# Did you forget to initialize an attribute in __init__()?
# ```
assert "submod" in self.moduledict
# Test `keys()`
for key in self.moduledict.keys():
assert key == "submod"
# Test `items()`
for item in self.moduledict.items():
assert item[0] == "submod"
assert item[1] is self.submod
# Test `values()`
for value in self.moduledict.values():
assert value is self.submod
return inputs
m = MyModule()
torch.jit.script(m)
```
Since we already support some of the built-in APIs (e.g. `__iter__`) on a container subclass, ideally we should expand the support to cover all built-in APIs.
cc. @wanchaol @suo
cc @ezyang @gchanan @zou3519 @suo @gmagogsfm | priority | unable to use module built in apis when using modulelist sequential moduledict subclasses in torchscript would like to subclass pytorch modulelist sequential moduledict and add framework specific logic however we found that we won t be able to access the modules built in apis such as len and contains when scripting the container subclasses to reproduce python import torch class torch nn module def init self super self init class torch nn modulelist def init self modules none init self torch nn modulelist init self modules class torch nn sequential def init self modules none init self torch nn sequential init self modules class torch nn moduledict def init self modules none init self torch nn moduledict init self modules class mymodule torch nn module def init self super mymodule self init self submod torch nn linear self modulelist self sequential self submod self moduledict submod self submod def forward self inputs test getitem assert self modulelist is self submod test len problem this throws tried to access nonexistent attribute or method len of type torch modulelist did you forget to initialize an attribute in init assert len self modulelist test iter for module in self modulelist assert module is self submod test getitem assert self sequential is self submod test len problem this throws tried to access nonexistent attribute or method len of type torch did you forget to initialize an attribute in init assert len self sequential test iter for module in self sequential assert module is self submod test getitem problem this throws only modulelist sequential and moduledict modules are subscriptable assert self moduledict is self submod test len problem this throws tried to access nonexistent attribute or method len of type torch did you forget to initialize an attribute in init assert len self moduledict test iter for module in self moduledict assert module is self submod test contains problem this throws tried to access nonexistent attribute or method contains of type torch did you forget to initialize an attribute in init assert submod in self moduledict test keys for key in self moduledict keys assert key submod test items for item in self moduledict items assert item submod assert item is self submod test values for value in self moduledict values assert value is self submod return inputs m mymodule torch jit script m since we already support some of the built in apis e g iter on a container subclass ideally we should expand the support to cover all built in apis cc wanchaol suo cc ezyang gchanan suo gmagogsfm | 1 |
234,897 | 7,727,766,227 | IssuesEvent | 2018-05-25 04:51:39 | test4gloirin/m | https://api.github.com/repos/test4gloirin/m | closed | 0008020:
link did not get an anchor in html mail | Felamimail bug high priority | **Reported by pschuele on 11 Mar 2013 11:06**
**Version:** Kristina (2013.03.1)
link did not get an anchor in html mail
**Additional information:** http://www.facebook.com/media/set/?set=a.164136103742229.1073741825.100004375207149&type=1&l=692e495b17
| 1.0 | 0008020:
link did not get an anchor in html mail - **Reported by pschuele on 11 Mar 2013 11:06**
**Version:** Kristina (2013.03.1)
link did not get an anchor in html mail
**Additional information:** http://www.facebook.com/media/set/?set=a.164136103742229.1073741825.100004375207149&type=1&l=692e495b17
| priority | link did not get an anchor in html mail reported by pschuele on mar version kristina link did not get an anchor in html mail additional information | 1 |
419,266 | 12,219,829,152 | IssuesEvent | 2020-05-01 22:56:02 | geopm/geopm | https://api.github.com/repos/geopm/geopm | closed | IMPI launcher always sets LD_PRELOAD | bug bug-exposure-high bug-priority-low bug-quality-low | Even when not requested with --geopm-preload, the GEOPM_PRELOAD environment variable is set in the command line from the launcher. | 1.0 | IMPI launcher always sets LD_PRELOAD - Even when not requested with --geopm-preload, the GEOPM_PRELOAD environment variable is set in the command line from the launcher. | priority | impi launcher always sets ld preload even when not requested with geopm preload the geopm preload environment variable is set in the command line from the launcher | 1 |
659,576 | 21,933,807,462 | IssuesEvent | 2022-05-23 12:10:33 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Getting the same suggestion twice for a module when typing it for the first time | Type/Bug Priority/High Team/LanguageServer Area/Completion Reason/EngineeringMistake | **Description:**
$subject. This is when referring to a module for the first time. i.e. No previously added import declaration for the module.

**Affected Versions:**
Noticed in the 2201.1.0 rc2 for the first time. Could have affected previous versions also. | 1.0 | Getting the same suggestion twice for a module when typing it for the first time - **Description:**
$subject. This is when referring to a module for the first time. i.e. No previously added import declaration for the module.

**Affected Versions:**
Noticed in the 2201.1.0 rc2 for the first time. Could have affected previous versions also. | priority | getting the same suggestion twice for a module when typing it for the first time description subject this is when referring to a module for the first time i e no previously added import declaration for the module affected versions noticed in the for the first time could have affected previous versions also | 1 |
438,879 | 12,663,072,738 | IssuesEvent | 2020-06-18 00:07:08 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | closed | Add Burmese/Myanmar table strings | Burma/Myanmar Docs: needed Effort: small Feature Priority: high | ## Is your feature request related to a problem? Please describe.
Add Burmese/Myanmar translations for table strings.
## Describe the solution you'd like
See above.
## Implementation
Add Burmese/Myanmar translations to `tableStrings` localisation.
## Describe alternatives you've considered
N/A.
## Additional context
See #2928 for main issue. | 1.0 | Add Burmese/Myanmar table strings - ## Is your feature request related to a problem? Please describe.
Add Burmese/Myanmar translations for table strings.
## Describe the solution you'd like
See above.
## Implementation
Add Burmese/Myanmar translations to `tableStrings` localisation.
## Describe alternatives you've considered
N/A.
## Additional context
See #2928 for main issue. | priority | add burmese myanmar table strings is your feature request related to a problem please describe add burmese myanmar translations for table strings describe the solution you d like see above implementation add burmese myanmar translations to tablestrings localisation describe alternatives you ve considered n a additional context see for main issue | 1 |
105,150 | 4,231,520,685 | IssuesEvent | 2016-07-04 16:24:12 | nprapps/rockymountain | https://api.github.com/repos/nprapps/rockymountain | closed | conclusion card text | 1. Priority: high Card: Conclusion | - section descriptions
- language for buttons
- descriptions for each MOZ
- Project credits | 1.0 | conclusion card text - - section descriptions
- language for buttons
- descriptions for each MOZ
- Project credits | priority | conclusion card text section descriptions language for buttons descriptions for each moz project credits | 1 |
371,376 | 10,965,721,770 | IssuesEvent | 2019-11-28 04:10:03 | trustwallet/blockatlas | https://api.github.com/repos/trustwallet/blockatlas | closed | [Pricing] Reported issues | Priority: High Task Size: M Type: Bug | ```
curl -X POST \
https://blockatlas.trustwalletapp.com/v1/market/ticker \
-H 'Accept: */*' \
-H 'Accept-Encoding: gzip, deflate' \
-H 'Cache-Control: no-cache' \
-H 'Connection: keep-alive' \
-H 'Content-Length: 178' \
-H 'Content-Type: application/json' \
-H 'Cookie: __cfduid=d349bc392e8b76d9815c176085f6b8ac21550746682' \
-H 'Host: blockatlas.trustwalletapp.com' \
-H 'Postman-Token: 71d802ba-0dac-4cbd-a5f4-87de054040ba,4c691b51-38d9-4958-95a7-31440d1ad085' \
-H 'User-Agent: PostmanRuntime/7.20.1' \
-H 'cache-control: no-cache' \
-d '{
"currency": "USD",
"assets": [
{
"coin": 714,
"type": "coin"
},
{
"coin": 714,
"type": "token",
"token_id": "BUSD-BD1"
}
]
}'
```
Response:
```
{
"currency": "USD",
"docs": [
{
"coin": 714,
"type": "token",
"price": {
"value": 0.000855578129663,
"change_24h": -72.3719
},
"last_update": "2019-11-27T22:47:12Z"
},
{
"coin": 714,
"token_id": "BUSD-BD1",
"type": "token",
"price": {
"value": 0.9977496937623513,
"change_24h": 3.46
},
"last_update": "2019-11-27T22:58:17.111313222Z"
}
]
}
```
Issue:
- first element should return price for coin 714 (BNB), it's currently shows `0.000855578129663`, instead of ~`$16`
- first element should say token type: `coin`, not `token` | 1.0 | [Pricing] Reported issues - ```
curl -X POST \
https://blockatlas.trustwalletapp.com/v1/market/ticker \
-H 'Accept: */*' \
-H 'Accept-Encoding: gzip, deflate' \
-H 'Cache-Control: no-cache' \
-H 'Connection: keep-alive' \
-H 'Content-Length: 178' \
-H 'Content-Type: application/json' \
-H 'Cookie: __cfduid=d349bc392e8b76d9815c176085f6b8ac21550746682' \
-H 'Host: blockatlas.trustwalletapp.com' \
-H 'Postman-Token: 71d802ba-0dac-4cbd-a5f4-87de054040ba,4c691b51-38d9-4958-95a7-31440d1ad085' \
-H 'User-Agent: PostmanRuntime/7.20.1' \
-H 'cache-control: no-cache' \
-d '{
"currency": "USD",
"assets": [
{
"coin": 714,
"type": "coin"
},
{
"coin": 714,
"type": "token",
"token_id": "BUSD-BD1"
}
]
}'
```
Response:
```
{
"currency": "USD",
"docs": [
{
"coin": 714,
"type": "token",
"price": {
"value": 0.000855578129663,
"change_24h": -72.3719
},
"last_update": "2019-11-27T22:47:12Z"
},
{
"coin": 714,
"token_id": "BUSD-BD1",
"type": "token",
"price": {
"value": 0.9977496937623513,
"change_24h": 3.46
},
"last_update": "2019-11-27T22:58:17.111313222Z"
}
]
}
```
Issue:
- first element should return price for coin 714 (BNB), it's currently shows `0.000855578129663`, instead of ~`$16`
- first element should say token type: `coin`, not `token` | priority | reported issues curl x post h accept h accept encoding gzip deflate h cache control no cache h connection keep alive h content length h content type application json h cookie cfduid h host blockatlas trustwalletapp com h postman token h user agent postmanruntime h cache control no cache d currency usd assets coin type coin coin type token token id busd response currency usd docs coin type token price value change last update coin token id busd type token price value change last update issue first element should return price for coin bnb it s currently shows instead of first element should say token type coin not token | 1 |
733,204 | 25,293,683,847 | IssuesEvent | 2022-11-17 03:50:33 | AtlasOfLivingAustralia/biocollect | https://api.github.com/repos/AtlasOfLivingAustralia/biocollect | closed | Images missing from download | Type - bug Priority - high | https://support.ehelp.edu.au/a/tickets/158784
Actual:
Images have notfound extension.
Reporting server does not have images stored locally. Need to change logic to retrieve it from biocollect.
| 1.0 | Images missing from download - https://support.ehelp.edu.au/a/tickets/158784
Actual:
Images have notfound extension.
Reporting server does not have images stored locally. Need to change logic to retrieve it from biocollect.
| priority | images missing from download actual images have notfound extension reporting server does not have images stored locally need to change logic to retrieve it from biocollect | 1 |
196,119 | 6,924,832,284 | IssuesEvent | 2017-11-30 14:10:16 | crowdAI/crowdai | https://api.github.com/repos/crowdAI/crowdai | closed | Email unsubscribe not working | high priority | One of the participants reported this in the gitter channel :
<img width="1229" alt="screen shot 2017-11-29 at 04 51 42" src="https://user-images.githubusercontent.com/1581312/33357191-1552076c-d4c1-11e7-9f32-71fd291fb03e.png">
| 1.0 | Email unsubscribe not working - One of the participants reported this in the gitter channel :
<img width="1229" alt="screen shot 2017-11-29 at 04 51 42" src="https://user-images.githubusercontent.com/1581312/33357191-1552076c-d4c1-11e7-9f32-71fd291fb03e.png">
| priority | email unsubscribe not working one of the participants reported this in the gitter channel img width alt screen shot at src | 1 |
757,484 | 26,514,524,102 | IssuesEvent | 2023-01-18 19:41:50 | crossplane-contrib/provider-ansible | https://api.github.com/repos/crossplane-contrib/provider-ansible | closed | checkWhenObserve is returning a wrong observation result of an external resource | bug priority/high | <!--
Thank you for helping to improve Crossplane!
Please be sure to search for open issues before raising a new one. We use issues
for bug reports and feature requests. Please find us at https://slack.crossplane.io
for questions, support, and discussion.
-->
### What happened?
<!--
Please let us know what behaviour you expected and how Crossplane diverged from
that behaviour.
-->
the observe lifecycle is not correctly setting `ResourceExists` and `ResourceUpToDate` which is disturbing reconciliation process lifecycle ( `create `/ `update `) [see](https://github.com/crossplane-contrib/provider-ansible/blob/main/internal/controller/ansibleRun/ansibleRun.go#L354-L355)
### How can we reproduce it?
<!--
Help us to reproduce your bug as succinctly and precisely as possible. Artifacts
such as example manifests or a script that triggers the issue are highly
appreciated!
-->
apply this [example](https://github.com/crossplane-contrib/provider-ansible/blob/main/examples/ansible/runPolicy/ansibleRun-checkWhenObserve-policy.yml)
### What environment did it happen in?
Ansible provider version: [v0.4.0](https://github.com/crossplane-contrib/provider-ansible/releases/tag/v0.4.0)
<!--
Include at least the version or commit of Crossplane you were running. Consider
also including your:
* Cloud provider or hardware configuration
* Kubernetes version (use `kubectl version`)
* Kubernetes distribution (e.g. Tectonic, GKE, OpenShift)
* OS (e.g. from /etc/os-release)
* Kernel (e.g. `uname -a`)
-->
| 1.0 | checkWhenObserve is returning a wrong observation result of an external resource - <!--
Thank you for helping to improve Crossplane!
Please be sure to search for open issues before raising a new one. We use issues
for bug reports and feature requests. Please find us at https://slack.crossplane.io
for questions, support, and discussion.
-->
### What happened?
<!--
Please let us know what behaviour you expected and how Crossplane diverged from
that behaviour.
-->
the observe lifecycle is not correctly setting `ResourceExists` and `ResourceUpToDate` which is disturbing reconciliation process lifecycle ( `create `/ `update `) [see](https://github.com/crossplane-contrib/provider-ansible/blob/main/internal/controller/ansibleRun/ansibleRun.go#L354-L355)
### How can we reproduce it?
<!--
Help us to reproduce your bug as succinctly and precisely as possible. Artifacts
such as example manifests or a script that triggers the issue are highly
appreciated!
-->
apply this [example](https://github.com/crossplane-contrib/provider-ansible/blob/main/examples/ansible/runPolicy/ansibleRun-checkWhenObserve-policy.yml)
### What environment did it happen in?
Ansible provider version: [v0.4.0](https://github.com/crossplane-contrib/provider-ansible/releases/tag/v0.4.0)
<!--
Include at least the version or commit of Crossplane you were running. Consider
also including your:
* Cloud provider or hardware configuration
* Kubernetes version (use `kubectl version`)
* Kubernetes distribution (e.g. Tectonic, GKE, OpenShift)
* OS (e.g. from /etc/os-release)
* Kernel (e.g. `uname -a`)
-->
| priority | checkwhenobserve is returning a wrong observation result of an external resource thank you for helping to improve crossplane please be sure to search for open issues before raising a new one we use issues for bug reports and feature requests please find us at for questions support and discussion what happened please let us know what behaviour you expected and how crossplane diverged from that behaviour the observe lifecycle is not correctly setting resourceexists and resourceuptodate which is disturbing reconciliation process lifecycle create update how can we reproduce it help us to reproduce your bug as succinctly and precisely as possible artifacts such as example manifests or a script that triggers the issue are highly appreciated apply this what environment did it happen in ansible provider version include at least the version or commit of crossplane you were running consider also including your cloud provider or hardware configuration kubernetes version use kubectl version kubernetes distribution e g tectonic gke openshift os e g from etc os release kernel e g uname a | 1 |
315,372 | 9,612,613,897 | IssuesEvent | 2019-05-13 09:19:31 | hotosm/tasking-manager | https://api.github.com/repos/hotosm/tasking-manager | closed | Add restriction option for validation level in project metadata | Component: Backend Component: Frontend Priority: High Status: In Progress Type: Enhancement | It would be nice if project managers could restrict validation to mappers who have intermediate and/or advanced level.
This would allow project managers to prevent very new mappers from validating, but not just restrict it to validators with the actual validation role.
I guess I would also include people with the actual validation role as well, even if they are not intermediate or advanced mappers.
(thank you for the suggestion @russdeffner ) | 1.0 | Add restriction option for validation level in project metadata - It would be nice if project managers could restrict validation to mappers who have intermediate and/or advanced level.
This would allow project managers to prevent very new mappers from validating, but not just restrict it to validators with the actual validation role.
I guess I would also include people with the actual validation role as well, even if they are not intermediate or advanced mappers.
(thank you for the suggestion @russdeffner ) | priority | add restriction option for validation level in project metadata it would be nice if project managers could restrict validation to mappers who have intermediate and or advanced level this would allow project managers to prevent very new mappers from validating but not just restrict it to validators with the actual validation role i guess i would also include people with the actual validation role as well even if they are not intermediate or advanced mappers thank you for the suggestion russdeffner | 1 |
461,553 | 13,232,282,476 | IssuesEvent | 2020-08-18 13:07:41 | openshift/odo | https://api.github.com/repos/openshift/odo | closed | ingress url does hit for component nodejs in devfile registry | area/devfile area/url kind/bug priority/High | /kind bug
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
**Output of `odo version`:**
master
## How did you run odo exactly?
```
$ git clone https://github.com/odo-devfiles/registry
$ cd registry/devfiles
$ odo project create test1234
✓ Project 'test1234' is ready for use
✓ New project created and now using project: test1234
$ odo component create nodejs --devfile ./devfile.yaml --project test1234
Experimental mode is enabled, use at your own risk
Validation
✓ Creating a devfile component from devfile path: /Users/amit/go/src/github.com/odo-devfiles/registry/devfiles/nodejs/devfile.yaml [1ms]
✓ Validating devfile component [368648ns]
Please use `odo push` command to create the component with source deployed
$ odo url create nodejs --port 3000 --host asdf.com --ingress
✓ URL nodejs created for component: nodejs
To apply the URL configuration changes, please use `odo push`
$ odo url list
Found the following URLs for component nodejs
NAME STATE URL PORT SECURE KIND
nodejs Not Pushed http://nodejs.asdf.com 3000 false ingress
There are local changes. Please run 'odo push'.
$ odo push
Validation
✓ Validating the devfile [1ms]
Creating Kubernetes resources for component nodejs
✓ Waiting for component to start [4s]
Applying URL changes
✓ URL nodejs: http://nodejs.asdf.com created
Syncing to component nodejs
✓ Checking files for pushing [1ms]
✓ Syncing files to the component [5s]
Executing devfile commands for component nodejs
✓ Executing install command "npm install", if not running [3s]
✓ Executing run command "npm start", if not running [3s]
Pushing devfile component nodejs
✓ Changes successfully pushed to component
$ odo url list
Found the following URLs for component nodejs
NAME STATE URL PORT SECURE KIND
nodejs Pushed http://nodejs.asdf.com 3000 false ingress
$ curl -ik http://nodejs.asdf.com
curl: (6) Could not resolve host: nodejs.asdf.com
```
## Actual behavior
```
$ curl -ik http://nodejs.asdf.com
curl: (6) Could not resolve host: nodejs.asdf.com
```
## Expected behavior
Should hit the URL
## Any logs, error output, etc?
| 1.0 | ingress url does hit for component nodejs in devfile registry - /kind bug
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
**Output of `odo version`:**
master
## How did you run odo exactly?
```
$ git clone https://github.com/odo-devfiles/registry
$ cd registry/devfiles
$ odo project create test1234
✓ Project 'test1234' is ready for use
✓ New project created and now using project: test1234
$ odo component create nodejs --devfile ./devfile.yaml --project test1234
Experimental mode is enabled, use at your own risk
Validation
✓ Creating a devfile component from devfile path: /Users/amit/go/src/github.com/odo-devfiles/registry/devfiles/nodejs/devfile.yaml [1ms]
✓ Validating devfile component [368648ns]
Please use `odo push` command to create the component with source deployed
$ odo url create nodejs --port 3000 --host asdf.com --ingress
✓ URL nodejs created for component: nodejs
To apply the URL configuration changes, please use `odo push`
$ odo url list
Found the following URLs for component nodejs
NAME STATE URL PORT SECURE KIND
nodejs Not Pushed http://nodejs.asdf.com 3000 false ingress
There are local changes. Please run 'odo push'.
$ odo push
Validation
✓ Validating the devfile [1ms]
Creating Kubernetes resources for component nodejs
✓ Waiting for component to start [4s]
Applying URL changes
✓ URL nodejs: http://nodejs.asdf.com created
Syncing to component nodejs
✓ Checking files for pushing [1ms]
✓ Syncing files to the component [5s]
Executing devfile commands for component nodejs
✓ Executing install command "npm install", if not running [3s]
✓ Executing run command "npm start", if not running [3s]
Pushing devfile component nodejs
✓ Changes successfully pushed to component
$ odo url list
Found the following URLs for component nodejs
NAME STATE URL PORT SECURE KIND
nodejs Pushed http://nodejs.asdf.com 3000 false ingress
$ curl -ik http://nodejs.asdf.com
curl: (6) Could not resolve host: nodejs.asdf.com
```
## Actual behavior
```
$ curl -ik http://nodejs.asdf.com
curl: (6) Could not resolve host: nodejs.asdf.com
```
## Expected behavior
Should hit the URL
## Any logs, error output, etc?
| priority | ingress url does hit for component nodejs in devfile registry kind bug welcome we kindly ask you to fill out the issue template below use the google group if you have a question rather than a bug or feature request the group is at thanks for understanding and for contributing to the project what versions of software are you using output of odo version master how did you run odo exactly git clone cd registry devfiles odo project create ✓ project is ready for use ✓ new project created and now using project odo component create nodejs devfile devfile yaml project experimental mode is enabled use at your own risk validation ✓ creating a devfile component from devfile path users amit go src github com odo devfiles registry devfiles nodejs devfile yaml ✓ validating devfile component please use odo push command to create the component with source deployed odo url create nodejs port host asdf com ingress ✓ url nodejs created for component nodejs to apply the url configuration changes please use odo push odo url list found the following urls for component nodejs name state url port secure kind nodejs not pushed false ingress there are local changes please run odo push odo push validation ✓ validating the devfile creating kubernetes resources for component nodejs ✓ waiting for component to start applying url changes ✓ url nodejs created syncing to component nodejs ✓ checking files for pushing ✓ syncing files to the component executing devfile commands for component nodejs ✓ executing install command npm install if not running ✓ executing run command npm start if not running pushing devfile component nodejs ✓ changes successfully pushed to component odo url list found the following urls for component nodejs name state url port secure kind nodejs pushed false ingress curl ik curl could not resolve host nodejs asdf com actual behavior curl ik curl could not resolve host nodejs asdf com expected behavior should hit the url any logs error output etc | 1 |
133,062 | 5,196,394,832 | IssuesEvent | 2017-01-23 12:41:21 | write-io/write.io | https://api.github.com/repos/write-io/write.io | closed | Write first Mocha test | priority-high | 1. Open homepage
2. Login with username "RichardFeinman"
3. Test that we got to the game page | 1.0 | Write first Mocha test - 1. Open homepage
2. Login with username "RichardFeinman"
3. Test that we got to the game page | priority | write first mocha test open homepage login with username richardfeinman test that we got to the game page | 1 |
339,951 | 10,264,888,082 | IssuesEvent | 2019-08-22 17:29:20 | cpkm/darts-site | https://api.github.com/repos/cpkm/darts-site | opened | Send email reminders for games | High Priority enhancement | Game reminders should be sent out, e.g. 5 days before and day before a game. This email would also contain the check in/out links #59 #60 | 1.0 | Send email reminders for games - Game reminders should be sent out, e.g. 5 days before and day before a game. This email would also contain the check in/out links #59 #60 | priority | send email reminders for games game reminders should be sent out e g days before and day before a game this email would also contain the check in out links | 1 |
351,520 | 10,519,963,582 | IssuesEvent | 2019-09-29 21:38:51 | texas-justice-initiative/website-nextjs | https://api.github.com/repos/texas-justice-initiative/website-nextjs | opened | set up staging site for netlify | high priority | currently, the only option on the Netlify CMS is to "publish now."
We need to be able to push new changes to the website first to staging, and then to production. Please set up a staging environment in netlify. | 1.0 | set up staging site for netlify - currently, the only option on the Netlify CMS is to "publish now."
We need to be able to push new changes to the website first to staging, and then to production. Please set up a staging environment in netlify. | priority | set up staging site for netlify currently the only option on the netlify cms is to publish now we need to be able to push new changes to the website first to staging and then to production please set up a staging environment in netlify | 1 |
634,136 | 20,327,445,079 | IssuesEvent | 2022-02-18 07:26:43 | bigbinary/neeto-area51 | https://api.github.com/repos/bigbinary/neeto-area51 | opened | We need to upgrade this repo to Rails 7 first | high-priority | Given that this repo is a dependency of all other neeto repos, we'd have to first upgrade this repo to Rails 7.
@Anamika-123 _a
Can you please take this up with the highest priority? In order to understand the file changes please go through:
- The wheel PR: https://github.com/bigbinary/wheel/pull/784 and this comment: https://github.com/bigbinary/wheel/pull/784#issuecomment-1043994549
- I was trying to port neetoEngage to Rails 7. Thought I'd record the steps I took so that others can maybe see it and work alongside. But during that stream I encountered the issue with neetoArea51. Adding the video here so that if you prefer to watch and fix then you can refer to https://www.loom.com/share/a3c4799a83fe431f90ad8c547bf901f8
cc: @unnitallman | 1.0 | We need to upgrade this repo to Rails 7 first - Given that this repo is a dependency of all other neeto repos, we'd have to first upgrade this repo to Rails 7.
@Anamika-123 _a
Can you please take this up with the highest priority? In order to understand the file changes please go through:
- The wheel PR: https://github.com/bigbinary/wheel/pull/784 and this comment: https://github.com/bigbinary/wheel/pull/784#issuecomment-1043994549
- I was trying to port neetoEngage to Rails 7. Thought I'd record the steps I took so that others can maybe see it and work alongside. But during that stream I encountered the issue with neetoArea51. Adding the video here so that if you prefer to watch and fix then you can refer to https://www.loom.com/share/a3c4799a83fe431f90ad8c547bf901f8
cc: @unnitallman | priority | we need to upgrade this repo to rails first given that this repo is a dependency of all other neeto repos we d have to first upgrade this repo to rails anamika a can you please take this up with the highest priority in order to understand the file changes please go through the wheel pr and this comment i was trying to port neetoengage to rails thought i d record the steps i took so that others can maybe see it and work alongside but during that stream i encountered the issue with adding the video here so that if you prefer to watch and fix then you can refer to cc unnitallman | 1 |
370,414 | 10,931,731,566 | IssuesEvent | 2019-11-23 12:38:18 | elszczepano/Reddit-Image-Loader | https://api.github.com/repos/elszczepano/Reddit-Image-Loader | closed | Improve error handlings and error modal | feature high priority | Currently, the error box displays only general information about the error. This information should be more precise. Also, the UI of the error box should be improved and match to mobile devices. | 1.0 | Improve error handlings and error modal - Currently, the error box displays only general information about the error. This information should be more precise. Also, the UI of the error box should be improved and match to mobile devices. | priority | improve error handlings and error modal currently the error box displays only general information about the error this information should be more precise also the ui of the error box should be improved and match to mobile devices | 1 |
642,550 | 20,906,850,882 | IssuesEvent | 2022-03-24 03:49:12 | chao1224/GraphMVP | https://api.github.com/repos/chao1224/GraphMVP | closed | Request to GraphMVP hyper-parameter | High Priority | Recently, I try to reproduce the classification results with codes provided in the link.
In pre-training, I adopted the given parameters in **submit_pre_training_GraphMVP.sh** and default parameters for those are not mentioned in the bash file. In fine-tuning, I just ran 3 seed [0, 1, 2].(I notice that results are also reported with 3 seeds in Table 1 in your paper, but maybe with different seeds).
However, the results I got are quite different, especially on **clintox, hiv and tox21** datasets. Below are my results:
| bbbp | tox21 | bace | clintox | sider | hiv | muv
-- | -- | -- | -- | -- | -- | -- | --
0 | 0.7129 | 0.7382 | 0.7771 | 0.6117 | 0.6043 | 0.7615 | 0.7539
1 | 0.68 | 0.7424 | 0.8006 | 0.5657 | 0.5916 | 0.7389 | 0.7544
2 | 0.7033 | 0.7339 | 0.7741 | 0.5843 | 0.6108 | 0.719 | 0.7289
avg | 0.69873 | 0.73817 | 0.78393 | 0.587233 | 0.60223 | 0.7398 | 0.7457
I think this might be caused by different seeds that I used. Could you provide full parameter settings to reproduce results reported in the Table 1 in your paper. That would be helpful. | 1.0 | Request to GraphMVP hyper-parameter - Recently, I try to reproduce the classification results with codes provided in the link.
In pre-training, I adopted the given parameters in **submit_pre_training_GraphMVP.sh** and default parameters for those are not mentioned in the bash file. In fine-tuning, I just ran 3 seed [0, 1, 2].(I notice that results are also reported with 3 seeds in Table 1 in your paper, but maybe with different seeds).
However, the results I got are quite different, especially on **clintox, hiv and tox21** datasets. Below are my results:
| bbbp | tox21 | bace | clintox | sider | hiv | muv
-- | -- | -- | -- | -- | -- | -- | --
0 | 0.7129 | 0.7382 | 0.7771 | 0.6117 | 0.6043 | 0.7615 | 0.7539
1 | 0.68 | 0.7424 | 0.8006 | 0.5657 | 0.5916 | 0.7389 | 0.7544
2 | 0.7033 | 0.7339 | 0.7741 | 0.5843 | 0.6108 | 0.719 | 0.7289
avg | 0.69873 | 0.73817 | 0.78393 | 0.587233 | 0.60223 | 0.7398 | 0.7457
I think this might be caused by different seeds that I used. Could you provide full parameter settings to reproduce results reported in the Table 1 in your paper. That would be helpful. | priority | request to graphmvp hyper parameter recently i try to reproduce the classification results with codes provided in the link in pre training i adopted the given parameters in submit pre training graphmvp sh and default parameters for those are not mentioned in the bash file in fine tuning i just ran seed i notice that results are also reported with seeds in table in your paper but maybe with different seeds however the results i got are quite different especially on clintox hiv and datasets below are my results bbbp bace clintox sider hiv muv avg i think this might be caused by different seeds that i used could you provide full parameter settings to reproduce results reported in the table in your paper that would be helpful | 1 |
815,884 | 30,576,491,032 | IssuesEvent | 2023-07-21 06:05:06 | rancher/eks-operator | https://api.github.com/repos/rancher/eks-operator | closed | (SURE-5259) EKS Provision Failure w/ Rancher2 Terraform Provider | JIRA area/terraform [zube]: To Triage team/highlander priority/high | Seems very similar to SURE-4066
#### Issue description:
Can't provision an EKS downstream cluster using the Rancher2 TF provider. This is the error we see in the Rancher UI on the cluster as well as inside the eks-config-operator pod logs (it is spammed continuously):
```
time="2022-09-13T23:43:46Z" level=error msg="error syncing 'cattle-global-data/c-ksl6p': handler eks-controller: InvalidParameterException: Launch template details can't be null for Custom ami type node group\n{\n RespMetadata: {\n StatusCode: 400,\n RequestID: \"eca6e2f1-42d5-411f-b2fc-716404d09d13\"\n },\n Message_: \"Launch template details can't be null for Custom ami type node group\"\n}, requeuing"
```
Cloud Trail on the AWS backned is reporting this error for the EventType: UpdateNodegroupVersion
#### Error code
InvalidParameterException
Event Record.
```
{
"eventVersion": "1.08",
"userIdentity": {
"type": "IAMUser",
"principalId": "AIDA4SBL6SADYCHEP64QO",
"arn": "arn:aws:iam::863380606983:user/srvamr-btcsapid",
"accountId": "1111111111111",
"accessKeyId": "XXXXXXXXXXXXXXXXXXXXXXXX",
"userName": "srvamr-btcsapid"
},
"eventTime": "2022-09-02T19:42:50Z",
"eventSource": "eks.amazonaws.com",
"eventName": "UpdateNodegroupVersion",
"awsRegion": "us-east-1",
"sourceIPAddress": "148.168.40.5",
"userAgent": "aws-sdk-go/1.36.7 (go1.16.4; linux; amd64)",
"errorCode": "InvalidParameterException",
"requestParameters": {
"nodegroupName": "pdcs-dev1d-harim-v2-090122-ng1",
"clientRequestToken": "D9CB6CAB-3459-4E09-89F0-DE4CF3BB6CAE",
"name": "pdcs-dev1d-harim-v2-090122",
"version": "1.21"
},
"responseElements": {
"message": "Launch template details can't be null for Custom ami type node group"
},
"requestID": "9b15e35e-8349-4f9f-9586-62e94e253308",
"eventID": "6bbdb28c-3b56-4011-90a6-27f791fdf035",
"readOnly": false,
"eventType": "AwsApiCall",
"managementEvent": true,
"recipientAccountId": "863380606983",
"eventCategory": "Management"
}
```
the payload for UpdateNodegroupVersion is expecting LaunchTemplate details which the payload is missing
see: https://docs.aws.amazon.com/eks/latest/APIReference/API_UpdateNodegroupVersion.html
The user is using a Launch Template with a custom AMI. It appears that the problem might be because the payload is adding "version": "1.21" as you can see above. The AWS [documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/eks.html#EKS.Client.update_nodegroup_version) says: " If you specify launchTemplate , and your launch template uses a custom AMI, then don't specify releaseVersion , or the node group update will fail."
So we are wondering if this is the problem here. If so, why is the version being supplied in the payload when it shouldn't be? The user said that when he manually uses the following payload/method with the AWS SDK then it works fine:
```
response = eks.update_nodegroup_version(
clusterName = cluster_name,
nodegroupName = nodeGroup,
launchTemplate = {
'name': launchTemplateName,
},
force=True
)
```
The user said this was working fine 3 weeks ago. Did something change with the AWS SDK or how we handle it? We did notice this but not sure if it is related:
https://github.com/rancher/eks-operator/pull/72
When we looked at the user's AWS console, the EKS cluster was healthy, the Node Groups were created and healthy as well.
#### Business impact:
This is a blocker for them as they are not able to get their automation to work
#### Troubleshooting steps:
I tried to reproduce the problem in-house. I can get a vanilla EKS cluster to work fine. I am currently trying to test it with Launch Templates and a custom AMI like the user, but I am running into some other issues (AWS permissions which I'm working to get resolved). Once my permissions in AWS get fixed, I'm hoping I can reproduce the problem.
#### Workaround:
Is workararound available and implemented? yes/no
What is the workaround:
#### Actual behavior:
EKS cluster with Launch Template and custom AMI should be provisioned successfully
#### Expected behavior:
EKS cluster with Launch Template and custom AMI is not provisioned successfully
| 1.0 | (SURE-5259) EKS Provision Failure w/ Rancher2 Terraform Provider - Seems very similar to SURE-4066
#### Issue description:
Can't provision an EKS downstream cluster using the Rancher2 TF provider. This is the error we see in the Rancher UI on the cluster as well as inside the eks-config-operator pod logs (it is spammed continuously):
```
time="2022-09-13T23:43:46Z" level=error msg="error syncing 'cattle-global-data/c-ksl6p': handler eks-controller: InvalidParameterException: Launch template details can't be null for Custom ami type node group\n{\n RespMetadata: {\n StatusCode: 400,\n RequestID: \"eca6e2f1-42d5-411f-b2fc-716404d09d13\"\n },\n Message_: \"Launch template details can't be null for Custom ami type node group\"\n}, requeuing"
```
Cloud Trail on the AWS backned is reporting this error for the EventType: UpdateNodegroupVersion
#### Error code
InvalidParameterException
Event Record.
```
{
"eventVersion": "1.08",
"userIdentity": {
"type": "IAMUser",
"principalId": "AIDA4SBL6SADYCHEP64QO",
"arn": "arn:aws:iam::863380606983:user/srvamr-btcsapid",
"accountId": "1111111111111",
"accessKeyId": "XXXXXXXXXXXXXXXXXXXXXXXX",
"userName": "srvamr-btcsapid"
},
"eventTime": "2022-09-02T19:42:50Z",
"eventSource": "eks.amazonaws.com",
"eventName": "UpdateNodegroupVersion",
"awsRegion": "us-east-1",
"sourceIPAddress": "148.168.40.5",
"userAgent": "aws-sdk-go/1.36.7 (go1.16.4; linux; amd64)",
"errorCode": "InvalidParameterException",
"requestParameters": {
"nodegroupName": "pdcs-dev1d-harim-v2-090122-ng1",
"clientRequestToken": "D9CB6CAB-3459-4E09-89F0-DE4CF3BB6CAE",
"name": "pdcs-dev1d-harim-v2-090122",
"version": "1.21"
},
"responseElements": {
"message": "Launch template details can't be null for Custom ami type node group"
},
"requestID": "9b15e35e-8349-4f9f-9586-62e94e253308",
"eventID": "6bbdb28c-3b56-4011-90a6-27f791fdf035",
"readOnly": false,
"eventType": "AwsApiCall",
"managementEvent": true,
"recipientAccountId": "863380606983",
"eventCategory": "Management"
}
```
the payload for UpdateNodegroupVersion is expecting LaunchTemplate details which the payload is missing
see: https://docs.aws.amazon.com/eks/latest/APIReference/API_UpdateNodegroupVersion.html
The user is using a Launch Template with a custom AMI. It appears that the problem might be because the payload is adding "version": "1.21" as you can see above. The AWS [documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/eks.html#EKS.Client.update_nodegroup_version) says: " If you specify launchTemplate , and your launch template uses a custom AMI, then don't specify releaseVersion , or the node group update will fail."
So we are wondering if this is the problem here. If so, why is the version being supplied in the payload when it shouldn't be? The user said that when he manually uses the following payload/method with the AWS SDK then it works fine:
```
response = eks.update_nodegroup_version(
clusterName = cluster_name,
nodegroupName = nodeGroup,
launchTemplate = {
'name': launchTemplateName,
},
force=True
)
```
The user said this was working fine 3 weeks ago. Did something change with the AWS SDK or how we handle it? We did notice this but not sure if it is related:
https://github.com/rancher/eks-operator/pull/72
When we looked at the user's AWS console, the EKS cluster was healthy, the Node Groups were created and healthy as well.
#### Business impact:
This is a blocker for them as they are not able to get their automation to work
#### Troubleshooting steps:
I tried to reproduce the problem in-house. I can get a vanilla EKS cluster to work fine. I am currently trying to test it with Launch Templates and a custom AMI like the user, but I am running into some other issues (AWS permissions which I'm working to get resolved). Once my permissions in AWS get fixed, I'm hoping I can reproduce the problem.
#### Workaround:
Is workararound available and implemented? yes/no
What is the workaround:
#### Actual behavior:
EKS cluster with Launch Template and custom AMI should be provisioned successfully
#### Expected behavior:
EKS cluster with Launch Template and custom AMI is not provisioned successfully
| priority | sure eks provision failure w terraform provider seems very similar to sure issue description can t provision an eks downstream cluster using the tf provider this is the error we see in the rancher ui on the cluster as well as inside the eks config operator pod logs it is spammed continuously time level error msg error syncing cattle global data c handler eks controller invalidparameterexception launch template details can t be null for custom ami type node group n n respmetadata n statuscode n requestid n n message launch template details can t be null for custom ami type node group n requeuing cloud trail on the aws backned is reporting this error for the eventtype updatenodegroupversion error code invalidparameterexception event record eventversion useridentity type iamuser principalid arn arn aws iam user srvamr btcsapid accountid accesskeyid xxxxxxxxxxxxxxxxxxxxxxxx username srvamr btcsapid eventtime eventsource eks amazonaws com eventname updatenodegroupversion awsregion us east sourceipaddress useragent aws sdk go linux errorcode invalidparameterexception requestparameters nodegroupname pdcs harim clientrequesttoken name pdcs harim version responseelements message launch template details can t be null for custom ami type node group requestid eventid readonly false eventtype awsapicall managementevent true recipientaccountid eventcategory management the payload for updatenodegroupversion is expecting launchtemplate details which the payload is missing see the user is using a launch template with a custom ami it appears that the problem might be because the payload is adding version as you can see above the aws says if you specify launchtemplate and your launch template uses a custom ami then don t specify releaseversion or the node group update will fail so we are wondering if this is the problem here if so why is the version being supplied in the payload when it shouldn t be the user said that when he manually uses the following payload method with the aws sdk then it works fine response eks update nodegroup version clustername cluster name nodegroupname nodegroup launchtemplate name launchtemplatename force true the user said this was working fine weeks ago did something change with the aws sdk or how we handle it we did notice this but not sure if it is related when we looked at the user s aws console the eks cluster was healthy the node groups were created and healthy as well business impact this is a blocker for them as they are not able to get their automation to work troubleshooting steps i tried to reproduce the problem in house i can get a vanilla eks cluster to work fine i am currently trying to test it with launch templates and a custom ami like the user but i am running into some other issues aws permissions which i m working to get resolved once my permissions in aws get fixed i m hoping i can reproduce the problem workaround is workararound available and implemented yes no what is the workaround actual behavior eks cluster with launch template and custom ami should be provisioned successfully expected behavior eks cluster with launch template and custom ami is not provisioned successfully | 1 |
116,053 | 4,696,043,631 | IssuesEvent | 2016-10-12 01:58:10 | jhpoelen/fb-osmose-bridge | https://api.github.com/repos/jhpoelen/fb-osmose-bridge | opened | Starting working on the README file provided in the osmose config zip file | UI High priority | @agruss2 Start working on the README file that is provided in the osmose config zip file and keep @jhpoelen posted about your progress.
The README file lists the species making up each of the functional groups of the OSMOSE model, and details which parameter estimates were obtained through FishBase/SeaLifeBase and which parameter estimates remain to be entered by the user.
@agruss2 Write some text about which parameter estimates are obtained through FishBase/SeaLifeBase and which parameter estimates need to be entered by the user. Then, @jhpoelen will work on further populating the README file so that the user is informed about the species making up each of the functional groups of the OSMOSE model. | 1.0 | Starting working on the README file provided in the osmose config zip file - @agruss2 Start working on the README file that is provided in the osmose config zip file and keep @jhpoelen posted about your progress.
The README file lists the species making up each of the functional groups of the OSMOSE model, and details which parameter estimates were obtained through FishBase/SeaLifeBase and which parameter estimates remain to be entered by the user.
@agruss2 Write some text about which parameter estimates are obtained through FishBase/SeaLifeBase and which parameter estimates need to be entered by the user. Then, @jhpoelen will work on further populating the README file so that the user is informed about the species making up each of the functional groups of the OSMOSE model. | priority | starting working on the readme file provided in the osmose config zip file start working on the readme file that is provided in the osmose config zip file and keep jhpoelen posted about your progress the readme file lists the species making up each of the functional groups of the osmose model and details which parameter estimates were obtained through fishbase sealifebase and which parameter estimates remain to be entered by the user write some text about which parameter estimates are obtained through fishbase sealifebase and which parameter estimates need to be entered by the user then jhpoelen will work on further populating the readme file so that the user is informed about the species making up each of the functional groups of the osmose model | 1 |
346,617 | 10,417,171,704 | IssuesEvent | 2019-09-14 19:21:16 | Ash258/Scoop-GithubActions | https://api.github.com/repos/Ash258/Scoop-GithubActions | closed | `dl_with_cache` is not striping `#/.*` | bug high-priority issue-action | For some reason when url contains rewrite (`#/dl.7z`) it will result as broken URL.
Broken, which should be OK
https://github.com/Ash258/GithubActionsBucketForTesting/issues/164
When there is no rewrite it is OK.
https://github.com/Ash258/GithubActionsBucketForTesting/issues/162 | 1.0 | `dl_with_cache` is not striping `#/.*` - For some reason when url contains rewrite (`#/dl.7z`) it will result as broken URL.
Broken, which should be OK
https://github.com/Ash258/GithubActionsBucketForTesting/issues/164
When there is no rewrite it is OK.
https://github.com/Ash258/GithubActionsBucketForTesting/issues/162 | priority | dl with cache is not striping for some reason when url contains rewrite dl it will result as broken url broken which should be ok when there is no rewrite it is ok | 1 |
25,151 | 2,677,509,540 | IssuesEvent | 2015-03-26 00:24:18 | Ecotrust/forestplanner | https://api.github.com/repos/Ecotrust/forestplanner | reopened | Warn IE users to: DON'T | High Priority | "Forest Planner is not supported on Internet Explorer. Please use one of the following: FireFox, Safari, or Google Chrome."
Ideally we'd want to lock them out of trying to use IE entirely, but a basic javascript "alert" will go a long way to reducing the alienation of our users. | 1.0 | Warn IE users to: DON'T - "Forest Planner is not supported on Internet Explorer. Please use one of the following: FireFox, Safari, or Google Chrome."
Ideally we'd want to lock them out of trying to use IE entirely, but a basic javascript "alert" will go a long way to reducing the alienation of our users. | priority | warn ie users to don t forest planner is not supported on internet explorer please use one of the following firefox safari or google chrome ideally we d want to lock them out of trying to use ie entirely but a basic javascript alert will go a long way to reducing the alienation of our users | 1 |
524,036 | 15,194,920,725 | IssuesEvent | 2021-02-16 05:05:09 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Comments not showing on custom front page | NEXT UPDATE [Priority: HIGH] bug | Issue occurring from version 1.0.71
The form is getting displayed in version 1.0.70
https://secure.helpscout.net/conversation/1404633711/176880?folderId=2632030
| 1.0 | Comments not showing on custom front page - Issue occurring from version 1.0.71
The form is getting displayed in version 1.0.70
https://secure.helpscout.net/conversation/1404633711/176880?folderId=2632030
| priority | comments not showing on custom front page issue occurring from version the form is getting displayed in version | 1 |
200,170 | 7,000,933,450 | IssuesEvent | 2017-12-18 08:11:00 | rotorgames/Rg.Plugins.Popup | https://api.github.com/repos/rotorgames/Rg.Plugins.Popup | closed | Prevent crash on Android when clicking outside to dismiss the dialog - 1.1.0-pre3 | enhancement priority-high | My app was crashing on Android (every API level I tested up to API 27).
I forked and build from the sources and was able to pinpoint the issue.
Cannot issue a PR (yet) as I also converted all the project to .NET Standard 2.0
Here is the diff patch:
``` src/Rg.Plugins.Popup.Droid/Renderers/PopupPageRenderer.cs | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/Rg.Plugins.Popup.Droid/Renderers/PopupPageRenderer.cs b/src/Rg.Plugins.Popup.Droid/Renderers/PopupPageRenderer.cs
index 54f9ee4..e12c1bb 100644
--- a/src/Rg.Plugins.Popup.Droid/Renderers/PopupPageRenderer.cs
+++ b/src/Rg.Plugins.Popup.Droid/Renderers/PopupPageRenderer.cs
@@ -151,7 +151,7 @@ namespace Rg.Plugins.Popup.Droid.Renderers
_gestureDetector.OnTouchEvent(e);
- if (!CurrentElement.InputTransparent)
+ if (CurrentElement != null && !CurrentElement.InputTransparent)
return baseValue;
return false;
``` | 1.0 | Prevent crash on Android when clicking outside to dismiss the dialog - 1.1.0-pre3 - My app was crashing on Android (every API level I tested up to API 27).
I forked and build from the sources and was able to pinpoint the issue.
Cannot issue a PR (yet) as I also converted all the project to .NET Standard 2.0
Here is the diff patch:
``` src/Rg.Plugins.Popup.Droid/Renderers/PopupPageRenderer.cs | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/Rg.Plugins.Popup.Droid/Renderers/PopupPageRenderer.cs b/src/Rg.Plugins.Popup.Droid/Renderers/PopupPageRenderer.cs
index 54f9ee4..e12c1bb 100644
--- a/src/Rg.Plugins.Popup.Droid/Renderers/PopupPageRenderer.cs
+++ b/src/Rg.Plugins.Popup.Droid/Renderers/PopupPageRenderer.cs
@@ -151,7 +151,7 @@ namespace Rg.Plugins.Popup.Droid.Renderers
_gestureDetector.OnTouchEvent(e);
- if (!CurrentElement.InputTransparent)
+ if (CurrentElement != null && !CurrentElement.InputTransparent)
return baseValue;
return false;
``` | priority | prevent crash on android when clicking outside to dismiss the dialog my app was crashing on android every api level i tested up to api i forked and build from the sources and was able to pinpoint the issue cannot issue a pr yet as i also converted all the project to net standard here is the diff patch src rg plugins popup droid renderers popuppagerenderer cs file changed insertion deletion diff git a src rg plugins popup droid renderers popuppagerenderer cs b src rg plugins popup droid renderers popuppagerenderer cs index a src rg plugins popup droid renderers popuppagerenderer cs b src rg plugins popup droid renderers popuppagerenderer cs namespace rg plugins popup droid renderers gesturedetector ontouchevent e if currentelement inputtransparent if currentelement null currentelement inputtransparent return basevalue return false | 1 |
173,094 | 6,520,033,237 | IssuesEvent | 2017-08-28 15:00:57 | zom/Zom-iOS | https://api.github.com/repos/zom/Zom-iOS | closed | Add variable within the invite link to determine that it’s a migrated account. | FOR REVIEW high-priority unblockability | If the invite link is a migrated account, archive the old buddy and chat with them. | 1.0 | Add variable within the invite link to determine that it’s a migrated account. - If the invite link is a migrated account, archive the old buddy and chat with them. | priority | add variable within the invite link to determine that it’s a migrated account if the invite link is a migrated account archive the old buddy and chat with them | 1 |
192,878 | 6,877,492,383 | IssuesEvent | 2017-11-20 08:15:30 | OpenNebula/one | https://api.github.com/repos/OpenNebula/one | opened | Ability to filter in sunstone on resource usage (CPU, RAM, NETWORK, DISK) | Category: Sunstone Priority: High Status: Pending Tracker: Backlog | ---
Author Name: **Stefan Kooman** (Stefan Kooman)
Original Redmine Issue: 2925, https://dev.opennebula.org/issues/2925
Original Date: 2014-05-16
---
Besides the current possibility to filter on "ID, Owner, Group, Name, Status, Host, IP" it would be nice to be able to filter on (REAL/UTILIZED) Resources as well (CPU, RAM, NETWORK, DISK).
| 1.0 | Ability to filter in sunstone on resource usage (CPU, RAM, NETWORK, DISK) - ---
Author Name: **Stefan Kooman** (Stefan Kooman)
Original Redmine Issue: 2925, https://dev.opennebula.org/issues/2925
Original Date: 2014-05-16
---
Besides the current possibility to filter on "ID, Owner, Group, Name, Status, Host, IP" it would be nice to be able to filter on (REAL/UTILIZED) Resources as well (CPU, RAM, NETWORK, DISK).
| priority | ability to filter in sunstone on resource usage cpu ram network disk author name stefan kooman stefan kooman original redmine issue original date besides the current possibility to filter on id owner group name status host ip it would be nice to be able to filter on real utilized resources as well cpu ram network disk | 1 |
315,210 | 9,607,789,206 | IssuesEvent | 2019-05-11 22:20:25 | bounswe/bounswe2019group5 | https://api.github.com/repos/bounswe/bounswe2019group5 | closed | Randomizing async function createExercise(wordArray) | Bug Fix Effort: Medium Priority: High Status: Review Needed | In the create-exercise.js currently the correct answer appears at the first row of one exercise. It should be randomized. | 1.0 | Randomizing async function createExercise(wordArray) - In the create-exercise.js currently the correct answer appears at the first row of one exercise. It should be randomized. | priority | randomizing async function createexercise wordarray in the create exercise js currently the correct answer appears at the first row of one exercise it should be randomized | 1 |
306,176 | 9,381,816,359 | IssuesEvent | 2019-04-04 20:37:04 | zeit/next.js | https://api.github.com/repos/zeit/next.js | closed | 8.0.4 weird issue with firefox & query parameters in the url | Type: Bug priority: very high | # Bug report
there seems to be a regression in 8.0.4 specifically for newer versions of Firefox relating to next/router code
## Describe the bug
when landing on a page that contains query parameters, the browser becomes "locked" to that page and programmatically or manually navigating to a different same-domain page insta-redirects back to the original. note that this does not start happening until a query parameter is involved in the url, totally bizarre
## System information
- OS: macOS
- Browser (if applies) firefox 66
- Version of Next.js: 8.0.4
## Additional context
Downgrading to 8.0.3 fixes this issue, so it's definitely something in nextjs. The two ways my (private) project moves between pages is typical anchor links and using `window.location.assign`.
| 1.0 | 8.0.4 weird issue with firefox & query parameters in the url - # Bug report
there seems to be a regression in 8.0.4 specifically for newer versions of Firefox relating to next/router code
## Describe the bug
when landing on a page that contains query parameters, the browser becomes "locked" to that page and programmatically or manually navigating to a different same-domain page insta-redirects back to the original. note that this does not start happening until a query parameter is involved in the url, totally bizarre
## System information
- OS: macOS
- Browser (if applies) firefox 66
- Version of Next.js: 8.0.4
## Additional context
Downgrading to 8.0.3 fixes this issue, so it's definitely something in nextjs. The two ways my (private) project moves between pages is typical anchor links and using `window.location.assign`.
| priority | weird issue with firefox query parameters in the url bug report there seems to be a regression in specifically for newer versions of firefox relating to next router code describe the bug when landing on a page that contains query parameters the browser becomes locked to that page and programmatically or manually navigating to a different same domain page insta redirects back to the original note that this does not start happening until a query parameter is involved in the url totally bizarre system information os macos browser if applies firefox version of next js additional context downgrading to fixes this issue so it s definitely something in nextjs the two ways my private project moves between pages is typical anchor links and using window location assign | 1 |
431,423 | 12,479,258,954 | IssuesEvent | 2020-05-29 17:56:03 | RooftopAcademy/workflow-methodologies | https://api.github.com/repos/RooftopAcademy/workflow-methodologies | closed | Documentar temario | priority:high size:1 | Documentar el temario, identificando las diferentes clases y los temas tratados en cada una | 1.0 | Documentar temario - Documentar el temario, identificando las diferentes clases y los temas tratados en cada una | priority | documentar temario documentar el temario identificando las diferentes clases y los temas tratados en cada una | 1 |
132,464 | 5,186,615,030 | IssuesEvent | 2017-01-20 14:36:59 | eMoflon/emoflon-tool | https://api.github.com/repos/eMoflon/emoflon-tool | closed | Security exception due to signer information | bug eMoflon-TGG high-priority |
I get the following security exception for example in TGG0-Tests:
**signer information does not match signer information of other classes in the same package**
I guess the problem is the following:
The package csp.constraints in tgg.language-Plugin is signed. User-Defines CSP-constraints, however, are specified in a user project under the same package (csp.constraints). So we have signed eMoflon classes + unsigned user classes in the same package.
In fact, situations where a user project has same packages as eMoflon plugins are not rare. What can we do about this?
| 1.0 | Security exception due to signer information -
I get the following security exception for example in TGG0-Tests:
**signer information does not match signer information of other classes in the same package**
I guess the problem is the following:
The package csp.constraints in tgg.language-Plugin is signed. User-Defines CSP-constraints, however, are specified in a user project under the same package (csp.constraints). So we have signed eMoflon classes + unsigned user classes in the same package.
In fact, situations where a user project has same packages as eMoflon plugins are not rare. What can we do about this?
| priority | security exception due to signer information i get the following security exception for example in tests signer information does not match signer information of other classes in the same package i guess the problem is the following the package csp constraints in tgg language plugin is signed user defines csp constraints however are specified in a user project under the same package csp constraints so we have signed emoflon classes unsigned user classes in the same package in fact situations where a user project has same packages as emoflon plugins are not rare what can we do about this | 1 |
716,326 | 24,628,410,786 | IssuesEvent | 2022-10-16 20:11:16 | PowerfulBacon/CorgEng | https://api.github.com/repos/PowerfulBacon/CorgEng | opened | Synchronous events prevent the game from closing | Priority: High External Project Dependency | If something is waiting for a synchronous event to be completed, but the target subsystem shuts down then the waiting thread will be frozen forever which prevents the game from closing. | 1.0 | Synchronous events prevent the game from closing - If something is waiting for a synchronous event to be completed, but the target subsystem shuts down then the waiting thread will be frozen forever which prevents the game from closing. | priority | synchronous events prevent the game from closing if something is waiting for a synchronous event to be completed but the target subsystem shuts down then the waiting thread will be frozen forever which prevents the game from closing | 1 |
699 | 2,502,150,095 | IssuesEvent | 2015-01-09 04:08:36 | thedouglenz/monies-suck-2 | https://api.github.com/repos/thedouglenz/monies-suck-2 | opened | When viewing the radial chart, there should be a title that says the current month | enhancement High-Priority | ... because the bar chart show 3 months worth of data and the radial chart only shows 1. It should make the user aware before they start thinking the numbers don't add up because they are familiar with the 3 month bar chart! | 1.0 | When viewing the radial chart, there should be a title that says the current month - ... because the bar chart show 3 months worth of data and the radial chart only shows 1. It should make the user aware before they start thinking the numbers don't add up because they are familiar with the 3 month bar chart! | priority | when viewing the radial chart there should be a title that says the current month because the bar chart show months worth of data and the radial chart only shows it should make the user aware before they start thinking the numbers don t add up because they are familiar with the month bar chart | 1 |
25,307 | 2,679,071,529 | IssuesEvent | 2015-03-26 14:55:55 | drvinceknight/Axelrod | https://api.github.com/repos/drvinceknight/Axelrod | closed | Parallelisation hangs | bug High priority | As far as I can tell this is something to do with #122 but I have no idea.
It hangs at this stage:
```
➜ Axelrod git:(master) ✗ python run_tournament.py -p 3
Starting basic_strategies tournament with 10 round robins of 200 turns per pair.
Passing cache with 0 entries to basic_strategies tournament
Running repetitions with 3 parallel processes
Finished basic_strategies tournament in 0.0s
Starting ecological variant of basic_strategies
Finished ecological variant of basic_strategies in 0.0s
Cache now has 10 entries
Finished all basic_strategies tasks in 1.3s
Starting strategies tournament with 10 round robins of 200 turns per pair.
Passing cache with 10 entries to strategies tournament
Running repetitions with 3 parallel processes
```` | 1.0 | Parallelisation hangs - As far as I can tell this is something to do with #122 but I have no idea.
It hangs at this stage:
```
➜ Axelrod git:(master) ✗ python run_tournament.py -p 3
Starting basic_strategies tournament with 10 round robins of 200 turns per pair.
Passing cache with 0 entries to basic_strategies tournament
Running repetitions with 3 parallel processes
Finished basic_strategies tournament in 0.0s
Starting ecological variant of basic_strategies
Finished ecological variant of basic_strategies in 0.0s
Cache now has 10 entries
Finished all basic_strategies tasks in 1.3s
Starting strategies tournament with 10 round robins of 200 turns per pair.
Passing cache with 10 entries to strategies tournament
Running repetitions with 3 parallel processes
```` | priority | parallelisation hangs as far as i can tell this is something to do with but i have no idea it hangs at this stage ➜ axelrod git master ✗ python run tournament py p starting basic strategies tournament with round robins of turns per pair passing cache with entries to basic strategies tournament running repetitions with parallel processes finished basic strategies tournament in starting ecological variant of basic strategies finished ecological variant of basic strategies in cache now has entries finished all basic strategies tasks in starting strategies tournament with round robins of turns per pair passing cache with entries to strategies tournament running repetitions with parallel processes | 1 |
161,648 | 6,132,605,618 | IssuesEvent | 2017-06-25 04:27:45 | BytesClub/chalk | https://api.github.com/repos/BytesClub/chalk | closed | Disable raw mode on exit | difficulty: medium Priority: HIGH | Currently, we enter the raw mode whenever the program starts, but it needs to be disabled when
the program terminates, either via the `main()` method or via an `exit()` call. The initial properties of
the terminal should be stored on entering the raw mode and it should be reset when the program exits. | 1.0 | Disable raw mode on exit - Currently, we enter the raw mode whenever the program starts, but it needs to be disabled when
the program terminates, either via the `main()` method or via an `exit()` call. The initial properties of
the terminal should be stored on entering the raw mode and it should be reset when the program exits. | priority | disable raw mode on exit currently we enter the raw mode whenever the program starts but it needs to be disabled when the program terminates either via the main method or via an exit call the initial properties of the terminal should be stored on entering the raw mode and it should be reset when the program exits | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.