| column | dtype | stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 5 to 112 |
| repo_url | string | length 34 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 855 |
| labels | string | length 4 to 721 |
| body | string | length 1 to 261k |
| index | string | 13 classes |
| text_combine | string | length 96 to 261k |
| label | string | 2 classes |
| text | string | length 96 to 240k |
| binary_label | int64 | 0 to 1 |
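The preview above appears to be a flattened DataFrame dump. As a minimal sketch (assuming pandas; the two rows are copied from the first two records below, and the inspection calls are plain pandas, nothing dataset-specific), a frame with this schema can be rebuilt and inspected like so:

```python
import pandas as pd

# Hypothetical reconstruction of two rows from this preview's schema.
# Values are copied from the first two records shown below.
df = pd.DataFrame(
    {
        "id": [10_924_843_727, 6_666_863_886],
        "type": ["IssuesEvent", "IssuesEvent"],
        "created_at": ["2019-11-22 11:02:29", "2017-10-03 10:03:48"],
        "repo": ["bounswe/bounswe2019group10", "Maslosoft/IlmatarWidgets"],
        "action": ["closed", "closed"],
        "label": ["priority", "priority"],
        "binary_label": [1, 1],
    }
)

print(df.dtypes)                      # per-column dtypes, as in the header table
print(df["label"].value_counts())     # class balance of the `label` column
high_priority = df[df["binary_label"] == 1]   # boolean mask selects positive rows
print(len(high_priority))
```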

---

- row: 370,067 · id: 10,924,843,727
- type: IssuesEvent · created_at: 2019-11-22 11:02:29 · action: closed
- repo: bounswe/bounswe2019group10 (https://api.github.com/repos/bounswe/bounswe2019group10)
- labels: Priority: High Relation: Backend
- title: Implement endpoint for users to learn their writing scores
- body:
Currently the writing endpoint does not support the user to learn their scores. Implement the endpoint so that users can learn their scores.
- index: 1.0 · label: priority · binary_label: 1

---

- row: 182,094 · id: 6,666,863,886
- type: IssuesEvent · created_at: 2017-10-03 10:03:48 · action: closed
- repo: Maslosoft/IlmatarWidgets (https://api.github.com/repos/Maslosoft/IlmatarWidgets)
- labels: bug have-workaround high-priority important
- title: Action grid needs to prevent links clicking
- body:
This has to be done either:
1. **Remove <s>Link and </s>Action cell decorators (preferred)**
2. Overwrite events (is possible)
- index: 1.0 · label: priority · binary_label: 1

---

- row: 800,676 · id: 28,374,805,850
- type: IssuesEvent · created_at: 2023-04-12 19:55:58 · action: closed
- repo: VoltanFr/memcheck (https://api.github.com/repos/VoltanFr/memcheck)
- labels: database performance complexity-high priority-high page-search
- title: Create function-based indexes on db text fields for optimized search
- body:
By default, we search case insensitively.
In the [`SearchCards` class](https://github.com/VoltanFr/memcheck/blob/master/MemCheck.Application/Searching/SearchCards.cs):
```cs
cardsFilteredWithExludedTags.Where(card =>
EF.Functions.Like(card.FrontSide, $"%{request.RequiredText}%")
|| EF.Functions.Like(card.BackSide, $"%{request.RequiredText}%")
|| EF.Functions.Like(card.AdditionalInfo, $"%{request.RequiredText}%")
|| EF.Functions.Like(card.References, $"%{request.RequiredText}%")
)
```
Give a try to creating function-based indexes (aka _computed columns_) on the text fields, to improve perf. Confirm that it will improve more than cost, and create them. About cost, we need to consider the volume used and the maintaining of the computed columns.
By the way, there is currently no index on the text fields (as per the code above, that would be useless).
Resources...
- [Introduction to function-based indexes](https://use-the-index-luke.com/sql/where-clause/functions/case-insensitive-search)
- [Case-insensitive Search Operations](https://techcommunity.microsoft.com/t5/sql-server-blog/case-insensitive-search-operations/ba-p/383199)
Note that the second resource link says `While this example presents a solution for a case insensitive search using the UPPER function, the solution can be easily be extended for use with other functions as well`. I don't know what that means, what other types of computed columns we could use, but this might be worth exploring.
- index: 1.0 · label: priority · binary_label: 1

---

- row: 645,112 · id: 20,994,884,939
- type: IssuesEvent · created_at: 2022-03-29 12:43:26 · action: closed
- repo: cryostatio/cryostat (https://api.github.com/repos/cryostatio/cryostat)
- labels: bug high-priority
- title: Label for recording template/type is incompatible with GraphQL filtering/k8s label selectors
- body:
> @ebaron I just realized we have a grammar incompatibility between k8s label selectors and our current form of template specifier. We use ex. `template=Profiling,type=TARGET` to specify the event template to use to start a recording, and we capture that string verbatim and store it in the metadata labels attached to a recording (active or archived), with the map key `template`. For example:
```json
{
"data": {
"environmentNodes": [
{
"descendantTargets": [
{
"recordings": {
"active": [
{
"metadata": {
"labels": {
"template": "template=Profiling,type=TARGET"
}
},
"name": "foo",
"state": "RUNNING"
}
],
"archived": []
}
}
],
"labels": {},
"name": "JDP",
"nodeType": "Realm"
}
]
}
}
```
> This is incompatible with both the equality-based and set-based k8s label selector syntaxes. The equality style expression would look like `template = template=Profiling,type=TARGET` and the set style expression would look like `template in (template=Profiling,type=TARGET)`. Both of these are ambiguous and invalid because our `template=Foo[,type=TYPE]` expression is not a valid k8s label value.
> Possible solutions:
> 1. Maintain the existing `templateLabel` special case label selector style, where the filter input simply takes the expected value of the label, and not a full label selector expression. All other kinds of label can be selected normally with k8s selector style.
> 2. Change our `template=Foo[,type=TYPE]` syntax. Doesn't seem appropriate for 2.1.0.
> 2b. We could alternately maintain the same input syntax, but replace the delimiters to something k8s label selector compatible when persisting them into recording metadata.
> 3. Implement some more specialized parser for the label selectors that can handle this ambiguous case for our specific scenario, but this is pretty hacky and can still end up being ambiguous with our template specifier syntax anyway.
> Thoughts? For 2.1.0, option 1 seems to make the most sense to me, or maybe 2b. For 3.0 we can change the event specifier syntax and avoid this issue entirely.
_Originally posted by @andrewazores in https://github.com/cryostatio/cryostat/issues/825#issuecomment-1079222248_
- index: 1.0 · label: priority · binary_label: 1

---

- row: 294,374 · id: 9,022,794,953
- type: IssuesEvent · created_at: 2019-02-07 03:33:49 · action: opened
- repo: RoboJackets/robocup-software (https://api.github.com/repos/RoboJackets/robocup-software)
- labels: area / planning-motion exp / master (4) priority / high status / new type / bug
- title: Path planner occasionally doesn't create a continuous path in the velocity state
- body:
This is specific with partial paths and re-planning every single frame. I haven't tested anything else.
When there is a path from the previous frame, the first X seconds are taken off and used as the initial part of the next path. This sometimes produces a situation where the second path and the first path have a discontinuity in the velocity state. It seems specific to large changes in the path target where the previous was a straight line path and the second path being appended is a hard turn in any of the directions.
Here are a sample of saddle points where the first point set is the last pos/vel of the previous path and the second set point is the first pos/vel of the appended path.
```
Run 1
Point(-1.13272, 1.78457)Point(-0.12078, -0.297487) - Prev path
Point(-1.13272, 1.78457)Point(-0.231728, -0.570758) - Appended path
```
```
Run 2
Point(-1.1357, 1.77774)Point(-0.109906, -0.254372) - Prev path
Point(-1.1357, 1.77774)Point(-0.175222, -0.405543) - Appended path
```
This picture doesn't correspond to the above data, it's from a separate run, but it shows the previous path, the new path, and a later picture of the velocity discontinuity.

Initial path

Next frames appended path

The discontinuity at a later point
Here is another picture of it happening over a larger path. You can see the fast transitions from red to a deep blue.

[Here](https://github.com/RoboJackets/robocup-software/pull/1180/files?utf8=%E2%9C%93&diff=unified#diff-bfec79fac28b5f67abeb89d135d4b0c8R136) is the previous path snipping.
[Here](https://github.com/RoboJackets/robocup-software/pull/1180/files?utf8=%E2%9C%93&diff=unified#diff-bfec79fac28b5f67abeb89d135d4b0c8R229) is where we create the new path.
[Here](https://github.com/RoboJackets/robocup-software/pull/1180/files?utf8=%E2%9C%93&diff=unified#diff-bfec79fac28b5f67abeb89d135d4b0c8R236) is where we combine the two paths together.
And finally [here](https://github.com/RoboJackets/robocup-software/pull/1180/files?utf8=%E2%9C%93&diff=unified#diff-bfec79fac28b5f67abeb89d135d4b0c8R241) is the returning of the path.
My first thought is that we may be trying to command an impossible path and it's loosing the initial velocity constraint in order to solve it.
- index: 1.0 · label: priority · binary_label: 1

---

- row: 462,692 · id: 13,251,720,486
- type: IssuesEvent · created_at: 2020-08-20 03:02:22 · action: closed
- repo: phetsims/gravity-and-orbits (https://api.github.com/repos/phetsims/gravity-and-orbits)
- labels: priority:2-high status:ready-for-review
- title: Use DragListener instead of SimpleDragHandler and MovableDragHandler
- body:
Use DragListener instead of SimpleDragHandler and MovableDragHandler. Not necessary for dev version but would be nice for production RC.
- index: 1.0 · label: priority · binary_label: 1

---

- row: 343,287 · id: 10,327,548,077
- type: IssuesEvent · created_at: 2019-09-02 07:18:02 · action: closed
- repo: StrangeLoopGames/EcoIssues (https://api.github.com/repos/StrangeLoopGames/EcoIssues)
- labels: High Priority
- title: Inventory breaks and rejects new contents, saying it's full, when there is nothing in the carried inventory. [master branch/9.0]
- body:

Often times I'll give myself a block, place it down, then try to give myself another and it will say my inventory is full.
The only way to fix it that I've found is using /dump then giving myself the item again.
- index: 1.0 · label: priority · binary_label: 1

---

- row: 224,714 · id: 7,472,057,069
- type: IssuesEvent · created_at: 2018-04-03 11:21:39 · action: opened
- repo: wso2/product-ei (https://api.github.com/repos/wso2/product-ei)
- labels: Priority/High Type/Docs
- title: Provide the link to download connectors
- body:
**Description:**
Provide the link to download connectors in the doc [1] under the below instruction
`If you have already downloaded the connectors, select the Connector location option and browse to the connector file from the file system. Click Finish.`
[1] https://docs.wso2.com/display/EI611/Working+with+Connectors+via+Tooling
- index: 1.0 · label: priority · binary_label: 1

---

- row: 436,473 · id: 12,550,692,124
- type: IssuesEvent · created_at: 2020-06-06 12:06:21 · action: closed
- repo: zairza-cetb/bench-routes (https://api.github.com/repos/zairza-cetb/bench-routes)
- labels: next-release priority:high
- title: Board: release alpha-3
- body:
Following are the list of features we would love to cover for alpha-3 version:
### API
- [ ] Start or stop collector through the dashboard
### Persistent connection
- [ ] live updates in dashboard
### Dashboard
- [ ] notifications as a toast
- [ ] Collector support in graphs
### TSDB
- [ ] saving time-series as binary data instead of JSON (for improving space complexity)
**Notes:**
1. We plan to stick to v1.0 to alpha-3 release. However, alpha-4 or alpha-5 should have v1.1 dashboard.
Please feel free to comment and update this board.
- index: 1.0 · label: priority · binary_label: 1

---

- row: 581,466 · id: 17,294,227,392
- type: IssuesEvent · created_at: 2021-07-25 11:49:27 · action: opened
- repo: BlueBubblesApp/BlueBubbles-Android-App (https://api.github.com/repos/BlueBubblesApp/BlueBubbles-Android-App)
- labels: Bug Difficulty: Easy Difficulty: Medium UX priority: high
- title: Fix grey screen bug in server settings
- body:
Not sure why this happened/happens. Joel said it happened for him when he couldn't connect to the server.
The screenshot Joal shared suggests the issue is due to the socket stream/stream builder widget we use. Just because that's where the grey box starts.
- index: 1.0 · label: priority · binary_label: 1

---

- row: 738,052 · id: 25,543,317,690
- type: IssuesEvent · created_at: 2022-11-29 16:47:20 · action: opened
- repo: Canadian-Geospatial-Platform/geoview (https://api.github.com/repos/Canadian-Geospatial-Platform/geoview)
- labels: bug-type: broken use case priority: high
- title: WMS layer URL error
- body:
https://geo.weather.gc.ca/geomet?lang=en&service=WMS&request=GetCapabilities&layers=REPS.DIAG.6_PRMM.ERGE10
This url should be abler to load but the layer stepper says it is not valid
- index: 1.0 · label: priority · binary_label: 1

---

- row: 559,093 · id: 16,549,849,444
- type: IssuesEvent · created_at: 2021-05-28 07:17:13 · action: closed
- repo: bryntum/support (https://api.github.com/repos/bryntum/support)
- labels: bug high-priority premium resolved
- title: Memory leak when replacing project instance
- body:
https://www.bryntum.com/forum/viewtopic.php?p=84601#p84601
https://www.bryntum.com/forum/viewtopic.php?p=84602#p84602
http://lh/bryntum-suite/scheduler/examples/bigdataset/
Take a snapshot
<img width="1989" alt="Снимок экрана 2021-03-26 в 14 08 24" src="https://user-images.githubusercontent.com/57486733/112624121-3bf59980-8e3e-11eb-9843-9e1d3ff67ebf.png">
Click 10K and take a snapshot. See 11K in Memory
<img width="1986" alt="Снимок экрана 2021-03-26 в 14 10 05" src="https://user-images.githubusercontent.com/57486733/112624129-3ef08a00-8e3e-11eb-83c7-621009751da3.png">
Click about 10 times 5K and 10K. Take a snapshot. See all records are in memory.
<img width="1988" alt="Снимок экрана 2021-03-26 в 14 11 44" src="https://user-images.githubusercontent.com/57486733/112624126-3e57f380-8e3e-11eb-8733-0c9d1a12d844.png">
- index: 1.0 · label: priority · binary_label: 1

---

- row: 216,688 · id: 7,310,947,580
- type: IssuesEvent · created_at: 2018-02-28 16:21:23 · action: opened
- repo: getcanoe/canoe (https://api.github.com/repos/getcanoe/canoe)
- labels: Priority: High
- title: Remove 'Import Wallet' Page from Behind The Password Wall
- body:
For users to be able to recover their wallets if they forgot their passwords.
- index: 1.0 · label: priority · binary_label: 1

---

- row: 168,167 · id: 6,363,382,632
- type: IssuesEvent · created_at: 2017-07-31 17:16:47 · action: closed
- repo: robertgarrigos/ubercart (https://api.github.com/repos/robertgarrigos/ubercart)
- labels: priority - high type - bug
- title: Replace entity_flush_caches() function
- body:
entity_flush_caches() function does not exist in backdrop anymore. What would be the best solution? Is there a backdrop function which could be used to replace entity_flush_caches()?
- index: 1.0 · label: priority · binary_label: 1

---

- row: 731,456 · id: 25,216,870,035
- type: IssuesEvent · created_at: 2022-11-14 09:49:00 · action: closed
- repo: ballerina-platform/ballerina-dev-website (https://api.github.com/repos/ballerina-platform/ballerina-dev-website)
- labels: Priority/High Area/UIUX Type/Improvement Area/CommonPages
- title: Add the Pilcrow Sign to the Topics on the Home Page
- body:
**Description:**
Add the pilcrow sign to the topics on the home page.
**Related website/documentation area**
<!--Add one of the following: `Area/BBEs`, `Area/HomePageSamples`, `Area/LearnPages`, `Area/Blog`, `Area/CommonPages`,` Area/Backend`, `Area/UIUX`, and `Area/Workflows` -->
**Describe the problem(s)**
**Describe your solution(s)**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
- index: 1.0 · label: priority · binary_label: 1
668,022
| 22,549,233,863
|
IssuesEvent
|
2022-06-27 02:27:52
|
solunareclipse1/ExtraPets2
|
https://api.github.com/repos/solunareclipse1/ExtraPets2
|
closed
|
biometric security on overseer rifle desync
|
bug high priority
|
no idea why. can cause fake damage to other players for clients. i hate networking
|
1.0
|
biometric security on overseer rifle desync - no idea why. can cause fake damage to other players for clients. i hate networking
|
priority
|
biometric security on overseer rifle desync no idea why can cause fake damage to other players for clients i hate networking
| 1
|
112,261
| 4,514,689,942
|
IssuesEvent
|
2016-09-05 01:03:42
|
idevelopment/RingMe
|
https://api.github.com/repos/idevelopment/RingMe
|
opened
|
Staff registration failed
|
bug High Priority
|
when i fill in the registration form and press save, it just redirects me to the registration form.
also the user is not added to the database.
|
1.0
|
Staff registration failed - when i fill in the registration form and press save, it just redirects me to the registration form.
also the user is not added to the database.
|
priority
|
staff registration failed when i fill in the registration form and press save it just redirects me to the registration form also the user is not added to the database
| 1
|
141,981
| 5,447,880,271
|
IssuesEvent
|
2017-03-07 14:40:42
|
fossasia/open-event-orga-server
|
https://api.github.com/repos/fossasia/open-event-orga-server
|
closed
|
Public Schedule Calendar View: Decrease width of header to 200px
|
enhancement Priority: High scheduling
|
Let's save some space to show more info on the screen. Please decrease the header of the table in the public schedule calendar view to 200px width.

|
1.0
|
Public Schedule Calendar View: Decrease width of header to 200px - Let's save some space to show more info on the screen. Please decrease the header of the table in the public schedule calendar view to 200px width.

|
priority
|
public schedule calendar view decrease width of header to let s save some space to show more info on the screen please decrease the header of the table in the public schedule calendar view to width
| 1
|
353,441
| 10,552,779,831
|
IssuesEvent
|
2019-10-03 15:48:24
|
kiwicom/schemathesis
|
https://api.github.com/repos/kiwicom/schemathesis
|
closed
|
Runner. Provide a way to setup authorization
|
Priority: High Type: Enhancement
|
At the moment there is no way to setup auth in `runner._execute_all_tests`. But it should be configurable.
We can start with these options:
- basic auth (for CLI we can follow cURL convention `--user` option)
- custom header (we can add again, a cURL convention `--header`) so users can add auth header manually
As the next step, it would be nice to have more flexible code in `runner` via e.g. passing arguments to the `requests.Session` or auth callbacks to solve the case when a token expires (and it should be refreshed)
|
1.0
|
Runner. Provide a way to setup authorization - At the moment there is no way to setup auth in `runner._execute_all_tests`. But it should be configurable.
We can start with these options:
- basic auth (for CLI we can follow cURL convention `--user` option)
- custom header (we can add again, a cURL convention `--header`) so users can add auth header manually
As the next step, it would be nice to have more flexible code in `runner` via e.g. passing arguments to the `requests.Session` or auth callbacks to solve the case when a token expires (and it should be refreshed)
|
priority
|
runner provide a way to setup authorization at the moment there is no way to setup auth in runner execute all tests but it should be configurable we can start with these options basic auth for cli we can follow curl convention user option custom header we can add again a curl convention header so users can add auth header manually as the next step it would be nice to have more flexible code in runner via e g passing arguments to the requests session or auth callbacks to solve the case when a token expires and it should be refreshed
| 1
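The schemathesis record above describes two cURL-style auth options for the runner: basic auth via a `--user`-like option and a raw `--header` option. A minimal sketch of how those two options could be translated into request headers follows; the `auth_headers` helper name is an illustration for this dataset record, not schemathesis's actual API.

```python
import base64

def auth_headers(user=None, header=None):
    """Translate the cURL-style options described in the issue into
    request headers: `user` ("username:password") becomes an HTTP
    basic-auth header, `header` ("Name: value") is passed through."""
    headers = {}
    if user:
        # "username:password" -> "Basic <base64(username:password)>"
        token = base64.b64encode(user.encode()).decode()
        headers["Authorization"] = "Basic " + token
    if header:
        name, _, value = header.partition(":")
        headers[name.strip()] = value.strip()
    return headers
```

A real runner would hand these headers to its HTTP session (e.g. a `requests.Session`); token refresh, mentioned in the issue as a next step, would need a callback instead of a static header.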
|
75,805
| 3,476,154,828
|
IssuesEvent
|
2015-12-26 14:59:57
|
mlhwang/monsterappetite
|
https://api.github.com/repos/mlhwang/monsterappetite
|
closed
|
Condition button's actions
|
Priority - High Snackazon _bug
|
Even without a snack selection you can click on the "Made my choice" button and move on to the next page. This should not happen. It is REQUIRED to make a snack selection.
|
1.0
|
Condition button's actions - Even without a snack selection you can click on the "Made my choice" button and move on to the next page. This should not happen. It is REQUIRED to make a snack selection.
|
priority
|
condition button s actions even without a snack selection you can click on the made my choice button and move on to the next page this should not happen it is required to make a snack selection
| 1
|
249,487
| 7,962,458,268
|
IssuesEvent
|
2018-07-13 14:22:36
|
PushTracker/EvalApp
|
https://api.github.com/repos/PushTracker/EvalApp
|
closed
|
in iOS, ActionBar shifts up after a navigation back to home occurs from a view that experienced an orientation change.
|
bug high-priority ios
|
Changing orientation fixes the ActionBar back to normal position.
|
1.0
|
in iOS, ActionBar shifts up after a navigation back to home occurs from a view that experienced an orientation change. - Changing orientation fixes the ActionBar back to normal position.
|
priority
|
in ios actionbar shifts up after a navigation back to home occurs from a view that experienced an orientation change changing orientation fixes the actionbar back to normal position
| 1
|
636,753
| 20,608,170,294
|
IssuesEvent
|
2022-03-07 04:32:29
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
opened
|
Improve image title and tooltips
|
help wanted area: general UI priority: high area: message feed display
|
In the image lightbox, the title we currently display in the upper left is: `Download <file name> <image name>`. This looks quite odd/confusing, especially in the pretty common case when the file name and image name are the same (i.e. the user didn't rename the image after uploading):

To address this, we should do the following.
## Lightbox title:
1. Remove the word "Download".
2. If the image name is not empty, show just the image name, and not the file name.
3. If the image name is empty, show just the file name.
## Tooltip on lightbox title:
1. Change to a tippy tooltip
2. Update content to say:
```
<image name>
File name: <file name>
```
## Tooltip on image preview:
(This is currently the same as the tooltip in the lightbox.)
1. Change to a tippy tooltip. (We may want to play with the delay if it feels too invasive.)
2. Update content to say: `View or download <image title>`
[CZO discussion thread](https://chat.zulip.org/#narrow/stream/101-design/topic/image.20title.20in.20lightbox)
|
1.0
|
Improve image title and tooltips - In the image lightbox, the title we currently display in the upper left is: `Download <file name> <image name>`. This looks quite odd/confusing, especially in the pretty common case when the file name and image name are the same (i.e. the user didn't rename the image after uploading):

To address this, we should do the following.
## Lightbox title:
1. Remove the word "Download".
2. If the image name is not empty, show just the image name, and not the file name.
3. If the image name is empty, show just the file name.
## Tooltip on lightbox title:
1. Change to a tippy tooltip
2. Update content to say:
```
<image name>
File name: <file name>
```
## Tooltip on image preview:
(This is currently the same as the tooltip in the lightbox.)
1. Change to a tippy tooltip. (We may want to play with the delay if it feels too invasive.)
2. Update content to say: `View or download <image title>`
[CZO discussion thread](https://chat.zulip.org/#narrow/stream/101-design/topic/image.20title.20in.20lightbox)
|
priority
|
improve image title and tooltips in the image lightbox the title we currently display in the upper left is download this looks quite odd confusing especially in the pretty common case when the file name and image name are the same i e the user didn t rename the image after uploading to address this we should do the following lightbox title remove the word download if the image name is not empty show just the image name and not the file name if the image name is empty show just the file name tooltip on lightbox title change to a tippy tooltip update content to say file name tooltip on image preview this is currently the same as the tooltip in the lightbox change to a tippy tooltip we may want to play with the delay if it feels too invasive update content to say view or download
| 1
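The Zulip record above specifies the lightbox title and tooltip rules precisely (drop "Download", prefer the image name, fall back to the file name). Those rules reduce to two small pure functions; `lightbox_title` and `title_tooltip` are hypothetical helper names for illustration, not Zulip's actual frontend code.

```python
def lightbox_title(image_name, file_name):
    """Title rules from the issue: no "Download" prefix; show the
    image name when present, otherwise the file name."""
    return image_name if image_name else file_name

def title_tooltip(image_name, file_name):
    """Tooltip content from the issue: image name on the first line,
    file name labeled on the second."""
    return f"{image_name}\nFile name: {file_name}"
```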
|
707,112
| 24,295,756,512
|
IssuesEvent
|
2022-09-29 09:54:41
|
bigbio/pmultiqc
|
https://api.github.com/repos/bigbio/pmultiqc
|
closed
|
Delta mass plot labelling wrong
|
bug high-priority
|
In the beginning, "count" is selected but "frequency" is shown.
Tooltips always show "frequency".
|
1.0
|
Delta mass plot labelling wrong - In the beginning, "count" is selected but "frequency" is shown.
Tooltips always show "frequency".
|
priority
|
delta mass plot labelling wrong in the beginning count is selected but frequency is shown tooltips always show frequency
| 1
|
41,556
| 2,869,060,625
|
IssuesEvent
|
2015-06-05 23:00:52
|
dart-lang/observe
|
https://api.github.com/repos/dart-lang/observe
|
closed
|
reconcile package:observe and observe-js API
|
Area-Polymer enhancement Fixed Priority-High
|
<a href="https://github.com/jmesserly"><img src="https://avatars.githubusercontent.com/u/1081711?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [jmesserly](https://github.com/jmesserly)**
_Originally opened as dart-lang/sdk#13554_
----
Now that Polymer.js includes observe-js (it was previously just a set of utilities at https://github.com/rafaelw/ChangeSummary), it would be nice to align APIs with Dart's package:observe where it makes sense.
\* PathObserver is in pretty good shape.
\* CompoundBinding should be CompoundPathObserver, refactor APIs
\* ListPathObserver should be ListReduction and get the reduceFn.
\* add/expose: Observer, ListObserver, ObjectObserver
\* expose: ListSplice
\* possibly expose Path... but needs a new name.
|
1.0
|
reconcile package:observe and observe-js API - <a href="https://github.com/jmesserly"><img src="https://avatars.githubusercontent.com/u/1081711?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [jmesserly](https://github.com/jmesserly)**
_Originally opened as dart-lang/sdk#13554_
----
Now that Polymer.js includes observe-js (it was previously just a set of utilities at https://github.com/rafaelw/ChangeSummary), it would be nice to align APIs with Dart's package:observe where it makes sense.
\* PathObserver is in pretty good shape.
\* CompoundBinding should be CompoundPathObserver, refactor APIs
\* ListPathObserver should be ListReduction and get the reduceFn.
\* add/expose: Observer, ListObserver, ObjectObserver
\* expose: ListSplice
\* possibly expose Path... but needs a new name.
|
priority
|
reconcile package observe and observe js api issue by originally opened as dart lang sdk now that polymer js includes observe js it was previously just a set of utilities at it would be nice to align apis with dart s package observe where it makes sense pathobserver is in pretty good shape compoundbinding should be compoundpathobserver refactor apis listpathobserver should be listreduction and get the reducefn add expose observer listobserver objectobserver expose listsplice possibly expose path but needs a new name
| 1
|
714,129
| 24,551,922,892
|
IssuesEvent
|
2022-10-12 13:14:43
|
owncloud/web
|
https://api.github.com/repos/owncloud/web
|
closed
|
Shared via link > "parent folder" shows "Resource not found" on click
|
Type:Bug Priority:p2-high GA-Blocker
|
### Steps to reproduce
1. receive a share
2. go to "shares" > "shared with me" > create a link for the with-you-shared file
3. go to "shares" > "shared via link" > click on the parent folder of the shared resource
4. View shows "Resource not found"
https://user-images.githubusercontent.com/26610733/181061204-6d205bdf-f99c-486a-b65b-cc2e813cad9a.mp4
### Expected behaviour
show "Shares" > "Shared with me" > "highlight resource"
### Actual behaviour
View shows "Resource not found"
|
1.0
|
Shared via link > "parent folder" shows "Resource not found" on click -
### Steps to reproduce
1. receive a share
2. go to "shares" > "shared with me" > create a link for the with-you-shared file
3. go to "shares" > "shared via link" > click on the parent folder of the shared resource
4. View shows "Resource not found"
https://user-images.githubusercontent.com/26610733/181061204-6d205bdf-f99c-486a-b65b-cc2e813cad9a.mp4
### Expected behaviour
show "Shares" > "Shared with me" > "highlight resource"
### Actual behaviour
View shows "Resource not found"
|
priority
|
shared via link parent folder shows resource not found on click steps to reproduce receive a share go to shares shared with me create a link for the with you shared file go to shares shared via link click on the parent folder of the shared resource view shows resource not found expected behaviour show shares shared with me highlight resource actual behaviour view shows resource not found
| 1
|
5,795
| 2,579,760,544
|
IssuesEvent
|
2015-02-13 13:08:33
|
pufexi/multiorder
|
https://api.github.com/repos/pufexi/multiorder
|
opened
|
Remove order ID when entering via selectbox
|
easy task :-) high priority
|
It's great that items can now be entered by ID, but in the end the ID shouldn't appear in the selectbox itself, because then items can be entered in the selectbox by typing the item's first letter on the keyboard; if the person entering doesn't know the ID, they don't have to scroll like crazy.
It would also be nice if ENTER for adding an order also worked in the SELECTBOX; right now it only works in those 2 inputs. If that's needlessly difficult, don't bother.

|
1.0
|
Remove order ID when entering via selectbox - It's great that items can now be entered by ID, but in the end the ID shouldn't appear in the selectbox itself, because then items can be entered in the selectbox by typing the item's first letter on the keyboard; if the person entering doesn't know the ID, they don't have to scroll like crazy.
It would also be nice if ENTER for adding an order also worked in the SELECTBOX; right now it only works in those 2 inputs. If that's needlessly difficult, don't bother.

|
priority
|
remove order id when entering via selectbox it s great that items can now be entered by id but in the end the id shouldn t appear in the selectbox itself because then items can be entered in the selectbox by typing the item s first letter on the keyboard if the person entering doesn t know the id they don t have to scroll like crazy it would also be nice if enter for adding an order also worked in the selectbox right now it only works in those inputs if that s needlessly difficult don t bother
| 1
|
347,600
| 10,431,897,447
|
IssuesEvent
|
2019-09-17 10:02:44
|
OpenSRP/opensrp-client-chw-anc
|
https://api.github.com/repos/OpenSRP/opensrp-client-chw-anc
|
closed
|
Birth vaccines should not appear as a PNC task if birth vaccines are recorded in Pregnancy Outcome form
|
High Priority PNC bug
|
This was reported by the client.
Steps to reproduce:
- Open a record for an ANC woman and open the Pregnancy Outcome form
- Record the outcome is a Live Birth, and at the bottom of the form, record that the two birth vaccines, BCG and OPV 0, were given (enter a date for each vaccine)
- (Note: make sure you put the delivery date as 3 days ago, so that the first PNC home visit shows up)
- Save the form, then open the PNC profile for the woman and open the PNC visit form
- The birth vaccines are shown as a task in the PNC home visit, when they shouldn't be, because the birth vaccines were already recorded.
To pass QA:
- [x] The above steps should result in no birth vaccine task in the first PNC home visit
- [x] This should work in both English and French versions
|
1.0
|
Birth vaccines should not appear as a PNC task if birth vaccines are recorded in Pregnancy Outcome form - This was reported by the client.
Steps to reproduce:
- Open a record for an ANC woman and open the Pregnancy Outcome form
- Record the outcome is a Live Birth, and at the bottom of the form, record that the two birth vaccines, BCG and OPV 0, were given (enter a date for each vaccine)
- (Note: make sure you put the delivery date as 3 days ago, so that the first PNC home visit shows up)
- Save the form, then open the PNC profile for the woman and open the PNC visit form
- The birth vaccines are shown as a task in the PNC home visit, when they shouldn't be, because the birth vaccines were already recorded.
To pass QA:
- [x] The above steps should result in no birth vaccine task in the first PNC home visit
- [x] This should work in both English and French versions
|
priority
|
birth vaccines should not appear as a pnc task if birth vaccines are recorded in pregnancy outcome form this was reported by the client steps to reproduce open a record for an anc woman and open the pregnancy outcome form record the outcome is a live birth and at the bottom of the form record that the two birth vaccines bcg and opv were given enter a date for each vaccine note make sure you put the delivery date as days ago so that the first pnc home visit shows up save the form then open the pnc profile for the woman and open the pnc visit form the birth vaccines are shown as a task in the pnc home visit when they shouldn t be because the birth vaccines were already recorded to pass qa the above steps should result in no birth vaccine task in the first pnc home visit this should work in both english and french versions
| 1
|
351,324
| 10,515,687,738
|
IssuesEvent
|
2019-09-28 11:59:47
|
HW-PlayersPatch/Development
|
https://api.github.com/repos/HW-PlayersPatch/Development
|
closed
|
SP Voiceactor Code
|
Priority1: High Status3: Actionable Type2: Bug Type4: Campaign
|
# commands.lua / SpeechRaceHelper
-race.lua is written by SpeechRaceHelper(), which is currently only ran by MP, not SP.
-commands.lua first looks for race.lua (SP and MP), if it doesn't exist it uses the default table written in commands.lua.
-The problem is any other mod could also write to race.lua with different races. Then the user playing SP would have screwed up Audio if the races are in a different order (I tested on 9/28 and confirmed). To fix, every SP mission MUST call SpeechRaceHelper(). Or some other solution.
# Extra Races (skip)
Maybe make dual command and observer races filtered out from SP via tags or whatever it is. I was thinking SP only races are filtered out from MP, but maybe not per the logs below?
HwRM.log SP:
Race Filtering: SINGLEPLAYER rules - @SinglePlayer
17 Races Discovered
HwRM.log MP:
17 Races Discovered
Race Filtering: DEATHMATCH rules - @Deathmatch,Extras
Edit: ya forget the extra races thing, no filtering outside of the MP gamemodes. SP only loads the races needed for the mission so no baggage.
|
1.0
|
SP Voiceactor Code - # commands.lua / SpeechRaceHelper
-race.lua is written by SpeechRaceHelper(), which is currently only ran by MP, not SP.
-commands.lua first looks for race.lua (SP and MP), if it doesn't exist it uses the default table written in commands.lua.
-The problem is any other mod could also write to race.lua with different races. Then the user playing SP would have screwed up Audio if the races are in a different order (I tested on 9/28 and confirmed). To fix, every SP mission MUST call SpeechRaceHelper(). Or some other solution.
# Extra Races (skip)
Maybe make dual command and observer races filtered out from SP via tags or whatever it is. I was thinking SP only races are filtered out from MP, but maybe not per the logs below?
HwRM.log SP:
Race Filtering: SINGLEPLAYER rules - @SinglePlayer
17 Races Discovered
HwRM.log MP:
17 Races Discovered
Race Filtering: DEATHMATCH rules - @Deathmatch,Extras
Edit: ya forget the extra races thing, no filtering outside of the MP gamemodes. SP only loads the races needed for the mission so no baggage.
|
priority
|
sp voiceactor code commands lua speechracehelper race lua is written by speechracehelper which is currently only ran by mp not sp commands lua first looks for race lua sp and mp if it doesn t exist it uses the default table written in commands lua the problem is any other mod could also write to race lua with different races then the user playing sp would have screwed up audio if the races are in a different order i tested on and confirmed to fix every sp mission must call speechracehelper or some other solution extra races skip maybe make dual command and observer races filtered out from sp via tags or whatever it is i was thinking sp only races are filtered out from mp but maybe not per the logs below hwrm log sp race filtering singleplayer rules singleplayer races discovered hwrm log mp races discovered race filtering deathmatch rules deathmatch extras edit ya forget the extra races thing no filtering outside of the mp gamemodes sp only loads the races needed for the mission so no baggage
| 1
|
424,372
| 12,309,849,170
|
IssuesEvent
|
2020-05-12 09:37:33
|
geocollections/sarv-edit
|
https://api.github.com/repos/geocollections/sarv-edit
|
closed
|
Cannot edit taxon_list records and other issues
|
HIGH PRIORITY bug
|
In sample view in taxon subform user cannot edit records with error 'cant change field preparation'.
Preparation field is correct in config, but preparation_number should be removed from app - it is not in the models. Example record: https://edit2.geocollections.info/sample/174106
- In autocomplete field the selection opens behind popup and cannot be seen - this is recent bug affecting all forms.
- Taxon field is not required, but currently edit button is disabled if taxon is empty - should be possible to save record without related taxon.
- Also, taxon and taxon_txt fields should be on separate rows and take full popup width.
|
1.0
|
Cannot edit taxon_list records and other issues - In sample view in taxon subform user cannot edit records with error 'cant change field preparation'.
Preparation field is correct in config, but preparation_number should be removed from app - it is not in the models. Example record: https://edit2.geocollections.info/sample/174106
- In autocomplete field the selection opens behind popup and cannot be seen - this is recent bug affecting all forms.
- Taxon field is not required, but currently edit button is disabled if taxon is empty - should be possible to save record without related taxon.
- Also, taxon and taxon_txt fields should be on separate rows and take full popup width.
|
priority
|
cannot edit taxon list records and other issues in sample view in taxon subform user cannot edit records with error cant change field preparation preparation field is correct in config but preparation number should be removed from app it is not in the models example record in autocomplete field the selection opens behind popup and cannot be seen this is recent bug affecting all forms taxon field is not required but currently edit button is disabled if taxon is empty should be possible to save record without related taxon also taxon and taxon txt fields should be on separate rows and take full popup width
| 1
|
150,077
| 5,735,949,591
|
IssuesEvent
|
2017-04-22 03:13:28
|
Angblah/Comparator
|
https://api.github.com/repos/Angblah/Comparator
|
closed
|
Adding/Deleting Item/Attribute UI
|
Priority: High Stack: UI Status: Completed Type: Feature
|
Updating the UI for adding and deleting things from the workspace.
|
1.0
|
Adding/Deleting Item/Attribute UI - Updating the UI for adding and deleting things from the workspace.
|
priority
|
adding deleting item attribute ui updating the ui for adding and deleting things from the workspace
| 1
|
539,003
| 15,782,041,919
|
IssuesEvent
|
2021-04-01 12:15:47
|
michaelrsweet/htmldoc
|
https://api.github.com/repos/michaelrsweet/htmldoc
|
closed
|
AddressSanitizer: heap-buffer-overflow on render_table_row() ps-pdf.cxx:6123:34
|
bug priority-high
|
Hello, While fuzzing htmldoc , I found a heap-buffer-overflow in the render_table_row() ps-pdf.cxx:6123:34
- test platform
htmldoc Version 1.9.12 git [master 6898d0a]
OS :Ubuntu 20.04.1 LTS x86_64
kernel: 5.4.0-53-generic
compiler: clang version 10.0.0-4ubuntu1
reproduced:
htmldoc -f demo.pdf poc7.html
poc(zipped for update):
[poc7.zip](https://github.com/michaelrsweet/htmldoc/files/5872217/poc7.zip)
```
=================================================================
==38248==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x625000002100 at pc 0x00000059260e bp 0x7fffa3362670 sp 0x7fffa3362668
READ of size 8 at 0x625000002100 thread T0
#0 0x59260d in render_table_row(hdtable_t&, tree_str***, int, unsigned char*, float, float, float, float, float*, float*, int*) /home//htmldoc_sani/htmldoc/ps-pdf.cxx:6123:34
#1 0x588630 in parse_table(tree_str*, float, float, float, float, float*, float*, int*, int) /home//htmldoc_sani/htmldoc/ps-pdf.cxx:7081:5
#2 0x558013 in parse_doc(tree_str*, float*, float*, float*, float*, float*, float*, int*, tree_str*, int*) /home//htmldoc_sani/htmldoc/ps-pdf.cxx:4167:11
#3 0x556c54 in parse_doc(tree_str*, float*, float*, float*, float*, float*, float*, int*, tree_str*, int*) /home//htmldoc_sani/htmldoc/ps-pdf.cxx:4081:9
#4 0x556c54 in parse_doc(tree_str*, float*, float*, float*, float*, float*, float*, int*, tree_str*, int*) /home//htmldoc_sani/htmldoc/ps-pdf.cxx:4081:9
#5 0x54f90e in pspdf_export /home//htmldoc_sani/htmldoc/ps-pdf.cxx:803:3
#6 0x53c845 in main /home//htmldoc_sani/htmldoc/htmldoc.cxx:1291:3
#7 0x7f52a6b3e0b2 in __libc_start_main /build/glibc-eX1tMB/glibc-2.31/csu/../csu/libc-start.c:308:16
#8 0x41f8bd in _start (/home//htmldoc_sani/htmldoc/htmldoc+0x41f8bd)
0x625000002100 is located 32 bytes to the right of 8160-byte region [0x625000000100,0x6250000020e0)
allocated by thread T0 here:
#0 0x4eea4e in realloc /home/goushi/work/libfuzzer-workshop/src/llvm/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:165
#1 0x55d96b in check_pages(int) /home//htmldoc_sani/htmldoc/ps-pdf.cxx:8804:24
SUMMARY: AddressSanitizer: heap-buffer-overflow /home//htmldoc_sani/htmldoc/ps-pdf.cxx:6123:34 in render_table_row(hdtable_t&, tree_str***, int, unsigned char*, float, float, float, float, float*, float*, int*)
Shadow bytes around the buggy address:
0x0c4a7fff83d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c4a7fff83e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c4a7fff83f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c4a7fff8400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c4a7fff8410: 00 00 00 00 00 00 00 00 00 00 00 00 fa fa fa fa
=>0x0c4a7fff8420:[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c4a7fff8430: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c4a7fff8440: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c4a7fff8450: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c4a7fff8460: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c4a7fff8470: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==38248==ABORTING
```
```
──── source:ps-pdf.cxx+8754 ────
8749 break;
8750 }
8751
8752 if (insert)
8753 {
→ 8754 if (insert->prev)
8755 insert->prev->next = r;
8756 else
8757 pages[page].start = r;
8758
8759 r->prev = insert->prev;
─ threads ────
[#0] Id 1, Name: "htmldoc", stopped 0x415e8c in new_render (), reason: SIGSEGV
── trace ────
[#0] 0x415e8c → new_render(page=0x14, type=0x2, x=0, y=-6.3660128850113321e+24, width=1.2732025770022664e+25, height=6.3660128850113321e+24, data=0x7fffffff6a40, insert=0x682881c800000000)
[#1] 0x4267e2 → render_table_row(table=@0x7fffffff6d98, cells=<optimized out>, row=<optimized out>, height_var=<optimized out>, left=0, right=0, bottom=<optimized out>, top=<optimized out>, x=<optimized out>, y=<optimized out>, page=<optimized out>)
[#2] 0x424519 → parse_table(t=<optimized out>, left=<optimized out>, right=<optimized out>, bottom=<optimized out>, top=<optimized out>, x=<optimized out>, y=<optimized out>, page=<optimized out>, needspace=<optimized out>)
[#3] 0x4157c0 → parse_doc(t=0x918c20, left=0x7fffffffb6e8, right=0x7fffffffb6e4, bottom=0x7fffffffb6ac, top=<optimized out>, x=<optimized out>, y=0x7fffffffb674, page=0x7fffffffb684, cpara=0x917cc0, needspace=0x7fffffffb6d4)
[#4] 0x414964 → parse_doc(t=0x918390, left=<optimized out>, right=<optimized out>, bottom=<optimized out>, top=0x7fffffffb69c, x=0x7fffffffb6ec, y=<optimized out>, page=<optimized out>, cpara=<optimized out>, needspace=<optimized out>)
[#5] 0x414964 → parse_doc(t=0x9171d0, left=<optimized out>, right=<optimized out>, bottom=<optimized out>, top=0x7fffffffb69c, x=0x7fffffffb6ec, y=<optimized out>, page=<optimized out>, cpara=<optimized out>, needspace=<optimized out>)
[#6] 0x411980 → pspdf_export(document=<optimized out>, toc=<optimized out>)
[#7] 0x408e89 → main(argc=<optimized out>, argv=<optimized out>)
──
```
|
1.0
|
AddressSanitizer: heap-buffer-overflow on render_table_row() ps-pdf.cxx:6123:34 - Hello, While fuzzing htmldoc , I found a heap-buffer-overflow in the render_table_row() ps-pdf.cxx:6123:34
- test platform
htmldoc Version 1.9.12 git [master 6898d0a]
OS :Ubuntu 20.04.1 LTS x86_64
kernel: 5.4.0-53-generic
compiler: clang version 10.0.0-4ubuntu1
reproduced:
htmldoc -f demo.pdf poc7.html
poc(zipped for update):
[poc7.zip](https://github.com/michaelrsweet/htmldoc/files/5872217/poc7.zip)
```
=================================================================
==38248==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x625000002100 at pc 0x00000059260e bp 0x7fffa3362670 sp 0x7fffa3362668
READ of size 8 at 0x625000002100 thread T0
#0 0x59260d in render_table_row(hdtable_t&, tree_str***, int, unsigned char*, float, float, float, float, float*, float*, int*) /home//htmldoc_sani/htmldoc/ps-pdf.cxx:6123:34
#1 0x588630 in parse_table(tree_str*, float, float, float, float, float*, float*, int*, int) /home//htmldoc_sani/htmldoc/ps-pdf.cxx:7081:5
#2 0x558013 in parse_doc(tree_str*, float*, float*, float*, float*, float*, float*, int*, tree_str*, int*) /home//htmldoc_sani/htmldoc/ps-pdf.cxx:4167:11
#3 0x556c54 in parse_doc(tree_str*, float*, float*, float*, float*, float*, float*, int*, tree_str*, int*) /home//htmldoc_sani/htmldoc/ps-pdf.cxx:4081:9
#4 0x556c54 in parse_doc(tree_str*, float*, float*, float*, float*, float*, float*, int*, tree_str*, int*) /home//htmldoc_sani/htmldoc/ps-pdf.cxx:4081:9
#5 0x54f90e in pspdf_export /home//htmldoc_sani/htmldoc/ps-pdf.cxx:803:3
#6 0x53c845 in main /home//htmldoc_sani/htmldoc/htmldoc.cxx:1291:3
#7 0x7f52a6b3e0b2 in __libc_start_main /build/glibc-eX1tMB/glibc-2.31/csu/../csu/libc-start.c:308:16
#8 0x41f8bd in _start (/home//htmldoc_sani/htmldoc/htmldoc+0x41f8bd)
0x625000002100 is located 32 bytes to the right of 8160-byte region [0x625000000100,0x6250000020e0)
allocated by thread T0 here:
#0 0x4eea4e in realloc /home/goushi/work/libfuzzer-workshop/src/llvm/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:165
#1 0x55d96b in check_pages(int) /home//htmldoc_sani/htmldoc/ps-pdf.cxx:8804:24
SUMMARY: AddressSanitizer: heap-buffer-overflow /home//htmldoc_sani/htmldoc/ps-pdf.cxx:6123:34 in render_table_row(hdtable_t&, tree_str***, int, unsigned char*, float, float, float, float, float*, float*, int*)
Shadow bytes around the buggy address:
0x0c4a7fff83d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c4a7fff83e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c4a7fff83f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c4a7fff8400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c4a7fff8410: 00 00 00 00 00 00 00 00 00 00 00 00 fa fa fa fa
=>0x0c4a7fff8420:[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c4a7fff8430: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c4a7fff8440: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c4a7fff8450: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c4a7fff8460: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c4a7fff8470: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==38248==ABORTING
```
```
──── source:ps-pdf.cxx+8754 ────
8749 break;
8750 }
8751
8752 if (insert)
8753 {
→ 8754 if (insert->prev)
8755 insert->prev->next = r;
8756 else
8757 pages[page].start = r;
8758
8759 r->prev = insert->prev;
─ threads ────
[#0] Id 1, Name: "htmldoc", stopped 0x415e8c in new_render (), reason: SIGSEGV
── trace ────
[#0] 0x415e8c → new_render(page=0x14, type=0x2, x=0, y=-6.3660128850113321e+24, width=1.2732025770022664e+25, height=6.3660128850113321e+24, data=0x7fffffff6a40, insert=0x682881c800000000)
[#1] 0x4267e2 → render_table_row(table=@0x7fffffff6d98, cells=<optimized out>, row=<optimized out>, height_var=<optimized out>, left=0, right=0, bottom=<optimized out>, top=<optimized out>, x=<optimized out>, y=<optimized out>, page=<optimized out>)
[#2] 0x424519 → parse_table(t=<optimized out>, left=<optimized out>, right=<optimized out>, bottom=<optimized out>, top=<optimized out>, x=<optimized out>, y=<optimized out>, page=<optimized out>, needspace=<optimized out>)
[#3] 0x4157c0 → parse_doc(t=0x918c20, left=0x7fffffffb6e8, right=0x7fffffffb6e4, bottom=0x7fffffffb6ac, top=<optimized out>, x=<optimized out>, y=0x7fffffffb674, page=0x7fffffffb684, cpara=0x917cc0, needspace=0x7fffffffb6d4)
[#4] 0x414964 → parse_doc(t=0x918390, left=<optimized out>, right=<optimized out>, bottom=<optimized out>, top=0x7fffffffb69c, x=0x7fffffffb6ec, y=<optimized out>, page=<optimized out>, cpara=<optimized out>, needspace=<optimized out>)
[#5] 0x414964 → parse_doc(t=0x9171d0, left=<optimized out>, right=<optimized out>, bottom=<optimized out>, top=0x7fffffffb69c, x=0x7fffffffb6ec, y=<optimized out>, page=<optimized out>, cpara=<optimized out>, needspace=<optimized out>)
[#6] 0x411980 → pspdf_export(document=<optimized out>, toc=<optimized out>)
[#7] 0x408e89 → main(argc=<optimized out>, argv=<optimized out>)
──
```
|
priority
|
addresssanitizer heap buffer overflow on render table row ps pdf cxx hello while fuzzing htmldoc i found a heap buffer overflow in the render table row ps pdf cxx test platform htmldoc version git os ubuntu lts kernel generic compiler clang version reproduced htmldoc f demo pdf html poc zipped for update error addresssanitizer heap buffer overflow on address at pc bp sp read of size at thread in render table row hdtable t tree str int unsigned char float float float float float float int home htmldoc sani htmldoc ps pdf cxx in parse table tree str float float float float float float int int home htmldoc sani htmldoc ps pdf cxx in parse doc tree str float float float float float float int tree str int home htmldoc sani htmldoc ps pdf cxx in parse doc tree str float float float float float float int tree str int home htmldoc sani htmldoc ps pdf cxx in parse doc tree str float float float float float float int tree str int home htmldoc sani htmldoc ps pdf cxx in pspdf export home htmldoc sani htmldoc ps pdf cxx in main home htmldoc sani htmldoc htmldoc cxx in libc start main build glibc glibc csu csu libc start c in start home htmldoc sani htmldoc htmldoc is located bytes to the right of byte region allocated by thread here in realloc home goushi work libfuzzer workshop src llvm projects compiler rt lib asan asan malloc linux cc in check pages int home htmldoc sani htmldoc ps pdf cxx summary addresssanitizer heap buffer overflow home htmldoc sani htmldoc ps pdf cxx in render table row hdtable t tree str int unsigned char float float float float float float int shadow bytes around the buggy address fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa shadow byte legend one shadow byte represents application bytes addressable 
partially addressable heap left redzone fa freed heap region fd stack left redzone stack mid redzone stack right redzone stack after return stack use after scope global redzone global init order poisoned by user container overflow fc array cookie ac intra object redzone bb asan internal fe left alloca redzone ca right alloca redzone cb shadow gap cc aborting ──── source ps pdf cxx ──── break if insert → if insert prev insert prev next r else pages start r r prev insert prev ─ threads ──── id name htmldoc stopped in new render reason sigsegv ── trace ──── → new render page type x y width height data insert → render table row table cells row height var left right bottom top x y page → parse table t left right bottom top x y page needspace → parse doc t left right bottom top x y page cpara needspace → parse doc t left right bottom top x y page cpara needspace → parse doc t left right bottom top x y page cpara needspace → pspdf export document toc → main argc argv ──
| 1
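The ASan report in the record above shows `render_table_row()` reading past a heap block that `check_pages()` grew with `realloc`. The bug itself is C++; the TypeScript sketch below only illustrates the class of fix (a bounds check before indexing a dynamically grown page array) — the names `pages`, `checkPages`, and `getPage` mirror the report but the logic is a hypothetical reconstruction, not htmldoc's code.

```typescript
// Hypothetical mirror of htmldoc's pages array: check_pages() grows the
// buffer, and render_table_row() must never index past the grown size.
interface Page {
  start: string | null;
}

const pages: Page[] = [];

// Grow `pages` so that index `page` is valid (analogue of check_pages()).
function checkPages(page: number): void {
  while (pages.length <= page) {
    pages.push({ start: null });
  }
}

// Bounds-checked access: returns null instead of reading past the end,
// which is the kind of guard the out-of-bounds read above calls for.
function getPage(page: number): Page | null {
  if (page < 0 || page >= pages.length) return null; // would be an OOB read
  return pages[page];
}

checkPages(3); // pages[0..3] now exist; getPage(7) safely reports out of range
```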
|
725,856
| 24,978,227,531
|
IssuesEvent
|
2022-11-02 09:38:05
|
rmlockwood/FLExTrans
|
https://api.github.com/repos/rmlockwood/FLExTrans
|
closed
|
[Setttings] The settings tool picks up the vernacular title instead of the analysis lang. title
|
bug high priority
|
I selected a text with the Settings tool which had a vernacular title (because that's what the Settings tool presented, even though an English title was there). When I go to another tool like the sense linker, it didn't recognize the text name. This is presumably because it is only looking at analysis-system titles.
|
1.0
|
[Setttings] The settings tool picks up the vernacular title instead of the analysis lang. title - I selected a text with the Settings tool which had a vernacular title (because that's what the Settings tool presented, even though an English title was there). When I go to another tool like the sense linker, it didn't recognize the text name. This is presumably because it is only looking at analysis-system titles.
|
priority
|
the settings tool picks up the vernacular title instead of the analysis lang title i selected a text with the setting tool which was a vernacular title because that s what the settings tool presented even though an english title was there when i go to another tool like the sense linker it didn t recognize the text name this is presumably because it is only looking a analysis system titles
| 1
|
47,503
| 2,981,580,907
|
IssuesEvent
|
2015-07-17 02:45:58
|
cyberbit/modation
|
https://api.github.com/repos/cyberbit/modation
|
closed
|
Update Modation to new Soundation look
|
bug high priority inwork
|
Currently super broken due to move to HTTPS and everything is changed and I'm scared. :fearful:
|
1.0
|
Update Modation to new Soundation look - Currently super broken due to move to HTTPS and everything is changed and I'm scared. :fearful:
|
priority
|
update modation to new soundation look currently super broken due to move to https and everything is changed and i m scared fearful
| 1
|
641,485
| 20,827,903,186
|
IssuesEvent
|
2022-03-19 00:56:27
|
iceBear67/AstralFlow
|
https://api.github.com/repos/iceBear67/AstralFlow
|
closed
|
Introduce `ItemKey`
|
enhancement good first issue priority: high
|
Creating an item by a String is obviously unsafe, and there is a better solution.
Introduce `interface ItemKey`, providing `getNamespace()` and `getId()`, and encourage users to use it with an `enum`. (enum XX implements ItemKey)
Also provide a helper method for simple usages, such as `ItemKeys.from("NAMESPACE:id")`
|
1.0
|
Introduce `ItemKey` - Creating an item by a String is obviously unsafe, and there is a better solution.
Introduce `interface ItemKey`, providing `getNamespace()` and `getId()`, and encourage users to use it with a `enum`. (enum XX implements ItemKey)
Also provide a helper method for simple usages, such as `ItemKeys.from("NAMESPACE:id")`
|
priority
|
introduce itemkey creating a item by a string is obviously unsafe and there is a better solution introduce interface itemkey provideing getnamespace and getid and encourage users to use it with a enum enum xx implements itemkey also provide a helper method for simple usages such as itemkeys from namespace id
| 1
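The proposal in the record above — an `ItemKey` interface exposing `getNamespace()` and `getId()`, plus a helper in the spirit of `ItemKeys.from("NAMESPACE:id")` — can be sketched as follows. The issue targets Java, so this TypeScript rendering is illustrative only, and the strictness of the format check is an assumption:

```typescript
// Sketch of the proposed ItemKey API (the issue is Java; this TypeScript
// version only illustrates the shape of the design).
interface ItemKey {
  getNamespace(): string;
  getId(): string;
}

// Helper in the spirit of ItemKeys.from("NAMESPACE:id"); rejecting keys
// without a namespace or id is an assumed rule, not from the issue.
function itemKeyFrom(key: string): ItemKey {
  const sep = key.indexOf(":");
  if (sep <= 0 || sep === key.length - 1) {
    throw new Error(`malformed item key: ${key}`);
  }
  const namespace = key.slice(0, sep);
  const id = key.slice(sep + 1);
  return { getNamespace: () => namespace, getId: () => id };
}

// A typed constant stands in for the suggested `enum XX implements ItemKey`.
const WAND: ItemKey = itemKeyFrom("astralflow:wand");
```

Type-safe constants like `WAND` give the same compile-time safety the issue wants from `enum XX implements ItemKey`, while the string form stays available for serialization.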
|
123,915
| 4,882,500,124
|
IssuesEvent
|
2016-11-17 09:39:20
|
Gouravmoy/Bohr
|
https://api.github.com/repos/Gouravmoy/Bohr
|
closed
|
Data generation logic for fields of type date has performance issue
|
bug Priority High
|
Taking around 80 sec to generate 100000 records.
|
1.0
|
Data generation logic for fields of type date has performance issue - Taking around 80 sec to generate 100000 records.
|
priority
|
data generation logic for fields of type date has performance issue taking around sec to generate records
| 1
|
285,440
| 8,759,001,503
|
IssuesEvent
|
2018-12-15 11:20:24
|
estevez-dev/ha_client
|
https://api.github.com/repos/estevez-dev/ha_client
|
closed
|
Legacy password login not working in HA 0.84.1
|
bug priority: high
|
1213 14:12:09 [Debug] : <!!!> Creating new HomeAssistant instance
1213 14:12:09 [Debug] : AppLifecycleState.inactive
1213 14:12:09 [Debug] : AppLifecycleState.resumed
1213 14:12:09 [Debug] : Use lovelace is true
1213 14:12:09 [Debug] : Socket connecting...
1213 14:12:09 [Error] : Invalid argument(s): 56.0
1213 14:12:09 [Error] : NoSuchMethodError: The getter 'scrollOffsetCorrection' was called on null.
Receiver: null
Tried calling: scrollOffsetCorrection
1213 14:12:09 [Error] : NoSuchMethodError: The getter 'visible' was called on null.
Receiver: null
Tried calling: visible
1213 14:12:09 [Debug] : [Sending] ==> auth request
1213 14:12:09 [Debug] : [Sending] ==> {"id": 1, "type": "subscribe_events", "event_type": "state_changed"}
1213 14:12:09 [Debug] : [Sending] ==> {"id": 2, "type": "get_states"}
1213 14:12:09 [Debug] : [Sending] ==> {"id": 3, "type": "lovelace/config"}
1213 14:12:09 [Debug] : [Sending] ==> {"id": 4, "type": "get_config"}
1213 14:12:09 [Debug] : [Sending] ==> {"id": 5, "type": "get_services"}
1213 14:12:09 [Debug] : [Sending] ==> {"id": 6, "type": "auth/current_user"}
1213 14:12:09 [Debug] : [Received] <== id:1, error
1213 14:12:09 [Debug] : [Received] <== id:2, error
1213 14:12:09 [Debug] : [Received] <== id:4, success
1213 14:12:09 [Debug] : [Received] <== id:6, error
1213 14:12:09 [Warning] : There was an error getting current user: {id: 6, type: result, success: false, error: {code: no_user, message: Not authenticated as a user}}
1213 14:12:09 [Debug] : [Received] <== id:3, error
1213 14:12:09 [Error] : There was an error getting Lovelace config: {id: 3, type: result, success: false, error: {code: config_not_found, message: No config found.}}
1213 14:12:09 [Debug] : [Received] <== id:5, success
1213 14:12:15 [Debug] : Settings changed. Saving...
1213 14:12:15 [Debug] : Settings change event: reconnect=true
1213 14:12:15 [Debug] : Use lovelace is false
1213 14:12:15 [Debug] : Socket connecting...
1213 14:12:15 [Debug] : [Sending] ==> auth request
1213 14:12:15 [Debug] : [Sending] ==> {"id": 7, "type": "subscribe_events", "event_type": "state_changed"}
1213 14:12:15 [Debug] : [Sending] ==> {"id": 8, "type": "get_states"}
1213 14:12:15 [Debug] : [Sending] ==> {"id": 9, "type": "get_config"}
1213 14:12:15 [Debug] : [Sending] ==> {"id": 10, "type": "get_services"}
1213 14:12:15 [Debug] : [Sending] ==> {"id": 11, "type": "auth/current_user"}
1213 14:12:15 [Debug] : [Received] <== id:7, error
1213 14:12:15 [Debug] : [Received] <== id:8, error
1213 14:12:15 [Debug] : [Received] <== id:9, success
1213 14:12:15 [Debug] : [Received] <== id:11, error
1213 14:12:15 [Warning] : There was an error getting current user: {id: 11, type: result, success: false, error: {code: no_user, message: Not authenticated as a user}}
1213 14:12:15 [Debug] : [Received] <== id:10, success
1213 14:12:16 [Debug] : Use lovelace is false
1213 14:12:16 [Debug] : [Sending] ==> {"id": 12, "type": "get_states"}
1213 14:12:16 [Debug] : [Sending] ==> {"id": 13, "type": "get_config"}
1213 14:12:16 [Debug] : [Sending] ==> {"id": 14, "type": "get_services"}
1213 14:12:16 [Debug] : [Sending] ==> {"id": 15, "type": "auth/current_user"}
1213 14:12:16 [Debug] : [Received] <== id:12, error
1213 14:12:16 [Debug] : [Received] <== id:13, success
1213 14:12:16 [Debug] : [Received] <== id:14, success
1213 14:12:16 [Debug] : [Received] <== id:15, error
1213 14:12:16 [Warning] : There was an error getting current user: {id: 15, type: result, success: false, error: {code: no_user, message: Not authenticated as a user}}
|
1.0
|
Legacy password login not working in HA 0.84.1 - 1213 14:12:09 [Debug] : <!!!> Creating new HomeAssistant instance
1213 14:12:09 [Debug] : AppLifecycleState.inactive
1213 14:12:09 [Debug] : AppLifecycleState.resumed
1213 14:12:09 [Debug] : Use lovelace is true
1213 14:12:09 [Debug] : Socket connecting...
1213 14:12:09 [Error] : Invalid argument(s): 56.0
1213 14:12:09 [Error] : NoSuchMethodError: The getter 'scrollOffsetCorrection' was called on null.
Receiver: null
Tried calling: scrollOffsetCorrection
1213 14:12:09 [Error] : NoSuchMethodError: The getter 'visible' was called on null.
Receiver: null
Tried calling: visible
1213 14:12:09 [Debug] : [Sending] ==> auth request
1213 14:12:09 [Debug] : [Sending] ==> {"id": 1, "type": "subscribe_events", "event_type": "state_changed"}
1213 14:12:09 [Debug] : [Sending] ==> {"id": 2, "type": "get_states"}
1213 14:12:09 [Debug] : [Sending] ==> {"id": 3, "type": "lovelace/config"}
1213 14:12:09 [Debug] : [Sending] ==> {"id": 4, "type": "get_config"}
1213 14:12:09 [Debug] : [Sending] ==> {"id": 5, "type": "get_services"}
1213 14:12:09 [Debug] : [Sending] ==> {"id": 6, "type": "auth/current_user"}
1213 14:12:09 [Debug] : [Received] <== id:1, error
1213 14:12:09 [Debug] : [Received] <== id:2, error
1213 14:12:09 [Debug] : [Received] <== id:4, success
1213 14:12:09 [Debug] : [Received] <== id:6, error
1213 14:12:09 [Warning] : There was an error getting current user: {id: 6, type: result, success: false, error: {code: no_user, message: Not authenticated as a user}}
1213 14:12:09 [Debug] : [Received] <== id:3, error
1213 14:12:09 [Error] : There was an error getting Lovelace config: {id: 3, type: result, success: false, error: {code: config_not_found, message: No config found.}}
1213 14:12:09 [Debug] : [Received] <== id:5, success
1213 14:12:15 [Debug] : Settings changed. Saving...
1213 14:12:15 [Debug] : Settings change event: reconnect=true
1213 14:12:15 [Debug] : Use lovelace is false
1213 14:12:15 [Debug] : Socket connecting...
1213 14:12:15 [Debug] : [Sending] ==> auth request
1213 14:12:15 [Debug] : [Sending] ==> {"id": 7, "type": "subscribe_events", "event_type": "state_changed"}
1213 14:12:15 [Debug] : [Sending] ==> {"id": 8, "type": "get_states"}
1213 14:12:15 [Debug] : [Sending] ==> {"id": 9, "type": "get_config"}
1213 14:12:15 [Debug] : [Sending] ==> {"id": 10, "type": "get_services"}
1213 14:12:15 [Debug] : [Sending] ==> {"id": 11, "type": "auth/current_user"}
1213 14:12:15 [Debug] : [Received] <== id:7, error
1213 14:12:15 [Debug] : [Received] <== id:8, error
1213 14:12:15 [Debug] : [Received] <== id:9, success
1213 14:12:15 [Debug] : [Received] <== id:11, error
1213 14:12:15 [Warning] : There was an error getting current user: {id: 11, type: result, success: false, error: {code: no_user, message: Not authenticated as a user}}
1213 14:12:15 [Debug] : [Received] <== id:10, success
1213 14:12:16 [Debug] : Use lovelace is false
1213 14:12:16 [Debug] : [Sending] ==> {"id": 12, "type": "get_states"}
1213 14:12:16 [Debug] : [Sending] ==> {"id": 13, "type": "get_config"}
1213 14:12:16 [Debug] : [Sending] ==> {"id": 14, "type": "get_services"}
1213 14:12:16 [Debug] : [Sending] ==> {"id": 15, "type": "auth/current_user"}
1213 14:12:16 [Debug] : [Received] <== id:12, error
1213 14:12:16 [Debug] : [Received] <== id:13, success
1213 14:12:16 [Debug] : [Received] <== id:14, success
1213 14:12:16 [Debug] : [Received] <== id:15, error
1213 14:12:16 [Warning] : There was an error getting current user: {id: 15, type: result, success: false, error: {code: no_user, message: Not authenticated as a user}}
|
priority
|
legacy password login not working in ha creating new homeassistant instance applifecyclestate inactive applifecyclestate resumed use lovelace is true socket connecting invalid argument s nosuchmethoderror the getter scrolloffsetcorrection was called on null receiver null tried calling scrolloffsetcorrection nosuchmethoderror the getter visible was called on null receiver null tried calling visible auth request id type subscribe events event type state changed id type get states id type lovelace config id type get config id type get services id type auth current user id error id error id success id error there was an error getting current user id type result success false error code no user message not authenticated as a user id error there was an error getting lovelace config id type result success false error code config not found message no config found id success settings changed saving settings change event reconnect true use lovelace is false socket connecting auth request id type subscribe events event type state changed id type get states id type get config id type get services id type auth current user id error id error id success id error there was an error getting current user id type result success false error code no user message not authenticated as a user id success use lovelace is false id type get states id type get config id type get services id type auth current user id error id success id success id error there was an error getting current user id type result success false error code no user message not authenticated as a user
| 1
|
460,498
| 13,211,118,264
|
IssuesEvent
|
2020-08-15 20:55:05
|
rstudio/gt
|
https://api.github.com/repos/rstudio/gt
|
opened
|
[accessibility] Use the table <caption> element
|
Difficulty: [3] Advanced Effort: [3] High Priority: [3] High Type: ★ Enhancement
|
Accessible data tables very often have brief descriptive text before or after the table that indicates the content of that table. This text should be associated to the table using the `<caption>` element.
The `<caption>` element needs to be the first thing after the opening `<table>` tag. The `gt()` function should have an argument (`caption`) for the user to insert a caption. If caption text isn’t provided then gt should generate caption text based on the table structure and use that in `<caption>`.
|
1.0
|
[accessibility] Use the table <caption> element - Accessible data tables very often have brief descriptive text before or after the table that indicates the content of that table. This text should be associated to the table using the `<caption>` element.
The `<caption>` element needs to be the first thing after the opening `<table>` tag. The `gt()` function should have an argument (`caption`) for the user to insert a caption. If caption text isn’t provided then gt should generate caption text based on the table structure and use that in `<caption>`.
|
priority
|
use the table element accessible data tables very often have brief descriptive text before or after the table that indicates the content of that table this text should be associated to the table using the element the element needs to be the first thing after the opening tag the gt function should have an argument caption for the user to insert a caption if caption text isn’t provided then gt should generate caption text based on the table structure and use that in
| 1
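The accessibility issue above hinges on one structural rule: the `<caption>` element must be the first thing after the opening `<table>` tag. gt is an R package, so the TypeScript sketch below is only an illustration of that rule; `withCaption` is a hypothetical helper, and handling exactly one table per string is a simplifying assumption.

```typescript
// Insert a <caption> immediately after the opening <table ...> tag,
// as the HTML spec requires. Assumes the markup holds a single table.
function withCaption(tableHtml: string, caption: string): string {
  const open = tableHtml.indexOf(">"); // end of the opening <table ...> tag
  if (!tableHtml.trimStart().startsWith("<table") || open === -1) {
    throw new Error("expected markup starting with a <table> tag");
  }
  return (
    tableHtml.slice(0, open + 1) +
    `<caption>${caption}</caption>` +
    tableHtml.slice(open + 1)
  );
}
```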
|
746,306
| 26,025,330,194
|
IssuesEvent
|
2022-12-21 15:52:52
|
AUBGTheHUB/spa-website-2022
|
https://api.github.com/repos/AUBGTheHUB/spa-website-2022
|
opened
|
Clicking on anchors blocks scrolling
|
bug high priority SPA
|
## Brief description:
Clicking on anchors blocks scrolling because the onClick function reverting the body state is triggered only by the closing button.
```javascript
<Button
props={{
css: 'navmobile-button-close',
icon: (
<MdOutlineClose
className={closeButton}
onClick={() => {
setMenuClass('navmobile-menu-backwards');
document.body.style.position = 'static';
document.body.style.overflow = 'auto';
}}
/>
)
}}
/>
```
## How to achieve it:
Put this in a function and pass it to each anchor's onClick hook (HackAUBG included).
```
document.body.style.position = 'static';
document.body.style.overflow = 'auto';
```
|
1.0
|
Clicking on anchors blocks scrolling - ## Brief description:
Clicking on anchors blocks scrolling because the onClick function reverting the body state is triggered only by the closing button.
```javascript
<Button
props={{
css: 'navmobile-button-close',
icon: (
<MdOutlineClose
className={closeButton}
onClick={() => {
setMenuClass('navmobile-menu-backwards');
document.body.style.position = 'static';
document.body.style.overflow = 'auto';
}}
/>
)
}}
/>
```
## How to achieve it:
Put this in a function and pass it to each anchor's onClick hook (HackAUBG included).
```
document.body.style.position = 'static';
document.body.style.overflow = 'auto';
```
|
priority
|
clicking on anchors blocks scrolling brief description clicking on anchors blocks scrolling because the onclick function reverting the body state is triggered only by the closing button javascript button props css navmobile button close icon mdoutlineclose classname closebutton onclick setmenuclass navmobile menu backwards document body style position static document body style overflow auto how to achieve it put this in a function and pass it to each anchor s onclick hook hackaubg included document body style position static document body style overflow auto
| 1
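The fix proposed in the record above is to factor the two `document.body.style` assignments into one function and pass it to every anchor's onClick. A minimal sketch of that refactor, with a plain object standing in for `document.body.style` so it runs without a DOM:

```typescript
// Shared handler each anchor (HackAUBG included) would call, instead of
// restoring the body state only from the close button. A plain object
// stands in for document.body.style in this DOM-free sketch.
interface BodyStyle {
  position: string;
  overflow: string;
}

// State as left behind by the open-menu handler.
const bodyStyle: BodyStyle = { position: "fixed", overflow: "hidden" };

function unlockScroll(style: BodyStyle): void {
  style.position = "static";
  style.overflow = "auto";
}

unlockScroll(bodyStyle); // e.g. invoked from an anchor's onClick
```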
|
490,109
| 14,115,609,985
|
IssuesEvent
|
2020-11-07 21:50:47
|
Cholewka/typescript-generator
|
https://api.github.com/repos/Cholewka/typescript-generator
|
reopened
|
Checkbox clicking error
|
bug enhancement priority:high
|
After clicking a checkbox in the `OptionInput` component, the `selectedAnswer` state value defaults to `false` - so when the preset is `true`, a click sends the answer `!selectedAnswer` => `true` as well. The checkbox will remain checked and the answer will be true, but `selectedAnswer` will carry the wrong data: if `true`, then `false`.
|
1.0
|
Checkbox clicking error - After clicking a checkbox in the `OptionInput` component, the `selectedAnswer` state value defaults to `false` - so when the preset is `true`, a click sends the answer `!selectedAnswer` => `true` as well. The checkbox will remain checked and the answer will be true, but `selectedAnswer` will carry the wrong data: if `true`, then `false`.
|
priority
|
checkbox clicking error after clicking a checkbox in optioninput component a selectedanswer state value is defaultly false so when the preset is true it will after click send answer which is selectedanswer true as well the checkbox will remain checked and the answer will be true but the selectedanswer will pass a wrong data if true then false
| 1
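The checkbox bug above is a stale-state skew: the answer is computed as `!selectedAnswer` from a default value instead of from the value actually shown. A minimal sketch of the fix — derive the reported answer and the stored state from the same update, so they can never disagree (`OptionState` is a hypothetical stand-in for the component's state):

```typescript
// One source of truth for the checkbox: toggle mutates the stored value
// and reports that same value, so the answer sent onward always matches
// what the UI shows.
class OptionState {
  constructor(private selected: boolean) {}

  toggle(): boolean {
    this.selected = !this.selected;
    return this.selected; // report from the *updated* value, not a default
  }

  get value(): boolean {
    return this.selected;
  }
}

// Preset true, one click: both the reported answer and the state are false.
const preset = new OptionState(true);
preset.toggle();
```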
|
2,539
| 2,528,289,112
|
IssuesEvent
|
2015-01-22 01:22:39
|
mmisw/mmiorr
|
https://api.github.com/repos/mmisw/mmiorr
|
closed
|
Support URIs other than the autogenerated one
|
3–5 stars enhancement imported ont portal Priority-High voc2rdf
|
_From [grayb...@mbari.org](https://code.google.com/u/109634240660495836000/) on November 11, 2008 13:39:21_
What steps will reproduce the problem? 1a. User specifies an alternate URI (e.g., a URN, or remote URL) for their term (in vocab or
ontology) or whole ontology OR
1b. User specifies they want a URN, not a URI
2. User submits vocab or ontology
What is the expected output?
A. URI for term is set appropriately, instead of to the default (a: to their value; b: to appropriate
URN)
B. Service to resolve all known URIs knows how to get metadata and data (RDF) for this URI,
whether term or ontology
C. Various other features of the registry have to be modified accordingly. Please use labels and text to provide additional information. a. I think we resolve the default URL, but indicate it is not the resource URI
b. If the URI is a resolvable URL, point to it using some button (?)
...
_Original issue: http://code.google.com/p/mmisw/issues/detail?id=40_
|
1.0
|
Support URIs other than the autogenerated one - _From [grayb...@mbari.org](https://code.google.com/u/109634240660495836000/) on November 11, 2008 13:39:21_
What steps will reproduce the problem? 1a. User specifies an alternate URI (e.g., a URN,or remote URL) for their term (in vocab or
ontology) or whole ontology OR
1b. User specifies they want a URN, not a URI
2. User submits vocab or ontology
What is the expected output?
A. URI for term is set appropriately, instead of to the default (a: to their value; b: to appropriate
URN)
B. Service to resolve all known URIs knows how to get metadata and data (RDF) for this URI,
whether term or ontology
C. Various other features of the registry have to be modified accordingly. Please use labels and text to provide additional information. a. I think we resolve the default URL, but indicate it is not the resource URI
b. If the URI is a resolvable URL, point to it using some button (?)
...
_Original issue: http://code.google.com/p/mmisw/issues/detail?id=40_
|
priority
|
support uris other than the autogenerated one from on november what steps will reproduce the problem user specifies an alternate uri e g a urn or remote url for their term in vocab or ontology or whole ontology or user specifies they want a urn not a uri user submits vocab or ontology what is the expected output a uri for term is set appropriately instead of to the default a to their value b to appropriate urn b service to resolve all known uris knows how to get metadata and data rdf for this uri whether term or ontology c various other features of the registry have to be modified accordingly please use labels and text to provide additional information a i think we resolve the default url but indicate it is not the resource uri b if the uri is a resolvable url point to it using some button original issue
| 1
|
490,843
| 14,140,834,251
|
IssuesEvent
|
2020-11-10 11:47:37
|
ppy/osu-web
|
https://api.github.com/repos/ppy/osu-web
|
closed
|
Newly registered users are being added to REGISTERED group with pending status
|
high priority
|
`user_pending` should be set to 0 when a user is added to a group. Doesn't really affect osu-web systems, but can cause weirdness like [seen here](https://github.com/ppy/osu-web/issues/6851).
I've applied the fix retroactively and will do so once more once this has been fixed in the registration process.
|
1.0
|
Newly registered users are being added to REGISTERED group with pending status - `user_pending` should be set to 0 when a user is added to a group. Doesn't really affect osu-web systems, but can cause weirdness like [seen here](https://github.com/ppy/osu-web/issues/6851).
I've applied the fix retroactively and will do so once more once this has been fixed in the registration process.
|
priority
|
newly registered users are being added to registered group with pending status user pending should be set to when a user is added to a group doesn t really affect osu web systems but can cause weirdness like i ve applied the fix retroactively and will do so once more once this has been fixed in the registration process
| 1
|
616,620
| 19,307,900,552
|
IssuesEvent
|
2021-12-13 13:30:58
|
os-climate/os_c_data_commons
|
https://api.github.com/repos/os-climate/os_c_data_commons
|
closed
|
Trino throws error: __init__() got an unexpected keyword argument 'auth'
|
bug high priority
|
I just hit this error, probably because something was recently changed/broken.
```
/opt/app-root/lib64/python3.8/site-packages/pyhive/trino.py in cursor(self)
53 def cursor(self):
54 """Return a new :py:class:`Cursor` object using the connection."""
---> 55 return Cursor(*self._args, **self._kwargs)
56
57
StatementError: (builtins.TypeError) __init__() got an unexpected keyword argument 'auth'
[SQL: show schemas in osc_datacommons_dev]
```
@HumairAK @redmikhail @caldeirav
|
1.0
|
Trino throws error: __init__() got an unexpected keyword argument 'auth' - I just hit this error, probably because something was recently changed/broken.
```
/opt/app-root/lib64/python3.8/site-packages/pyhive/trino.py in cursor(self)
53 def cursor(self):
54 """Return a new :py:class:`Cursor` object using the connection."""
---> 55 return Cursor(*self._args, **self._kwargs)
56
57
StatementError: (builtins.TypeError) __init__() got an unexpected keyword argument 'auth'
[SQL: show schemas in osc_datacommons_dev]
```
@HumairAK @redmikhail @caldeirav
|
priority
|
trino throws error init got an unexpected keyword argument auth i just hit this error probably because something was recently changed broken opt app root site packages pyhive trino py in cursor self def cursor self return a new py class cursor object using the connection return cursor self args self kwargs statementerror builtins typeerror init got an unexpected keyword argument auth humairak redmikhail caldeirav
| 1
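The Trino traceback above is pyhive forwarding connection kwargs to a `Cursor` whose `__init__` no longer accepts `auth`. The TypeScript sketch below reproduces the failure mode generically — an options bag forwarded to a constructor that validates its keys — plus one defensive fix (dropping unsupported keys before forwarding). The accepted key names are invented for the example, not pyhive's real signature.

```typescript
// Keys this hypothetical cursor constructor accepts (illustrative only).
const ACCEPTED_KEYS = new Set(["host", "port", "catalog", "schema"]);

// Analogue of Cursor(*args, **kwargs): rejects any unknown option, just as
// Python raises "TypeError: __init__() got an unexpected keyword argument".
function makeCursor(options: Record<string, unknown>): Record<string, unknown> {
  for (const key of Object.keys(options)) {
    if (!ACCEPTED_KEYS.has(key)) {
      throw new Error(`unexpected keyword argument '${key}'`);
    }
  }
  return { ...options };
}

// Defensive caller: strip unsupported keys before forwarding, so a newer
// option like `auth` cannot break an older constructor.
function makeCursorSafe(options: Record<string, unknown>): Record<string, unknown> {
  const filtered = Object.fromEntries(
    Object.entries(options).filter(([key]) => ACCEPTED_KEYS.has(key))
  );
  return makeCursor(filtered);
}
```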
|
590,412
| 17,777,409,747
|
IssuesEvent
|
2021-08-30 21:10:53
|
rubyforgood/casa
|
https://api.github.com/repos/rubyforgood/casa
|
opened
|
Allow volunteers to add and edit court orders
|
Priority: High
|
**What type(s) of user does this feature affect?**
- volunteers
**Description**
Allow volunteers to enter/modify court orders
Possibly let them delete court orders too. We don't know yet. Will update
**Screenshots of current behavior, if any**

Visit a case as a volunteer and edit the case

This functionality already exists for supervisors and admins
_QA Login Details:_
[Link to QA site](https://casa-qa.herokuapp.com/)
Login Emails:
- volunteer1@example.com view site as a volunteer
- supervisor1@example.com view site as a supervisor
password for all users: 123456
|
1.0
|
Allow volunteers to add and edit court orders - **What type(s) of user does this feature affect?**
- volunteers
**Description**
Allow volunteers to enter/modify court orders
Possibly let them delete court orders too. We don't know yet. Will update
**Screenshots of current behavior, if any**

Visit a case as a volunteer and edit the case

This functionality already exists for supervisors and admins
_QA Login Details:_
[Link to QA site](https://casa-qa.herokuapp.com/)
Login Emails:
- volunteer1@example.com view site as a volunteer
- supervisor1@example.com view site as a supervisor
password for all users: 123456
|
priority
|
allow volunteers to add and edit court orders what type s of user does this feature affect volunteers description allow volunteers to enter modify court orders possibly let them delete court orders too we don t know yet will update screenshots of current behavior if any visit a case as a volunteer and edit the case this functionality already exists for supervisors and admins qa login details login emails example com view site as a volunteer example com view site as a supervisor password for all users
| 1
|
672,717
| 22,837,570,745
|
IssuesEvent
|
2022-07-12 18:13:23
|
firelab/windninja
|
https://api.github.com/repos/firelab/windninja
|
closed
|
Update LANDFIRE codes
|
enhancement priority:high component:core
|
The 2019 codes are available here:
https://landfire.gov/lf_prodcodes.php
We should double check that this new data is stable before switching though.
|
1.0
|
Update LANDFIRE codes - The 2019 codes are available here:
https://landfire.gov/lf_prodcodes.php
We should double check that this new data is stable before switching though.
|
priority
|
update landfire codes the codes are available here we should double check that this new data is stable before switching though
| 1
|
336,213
| 10,173,333,878
|
IssuesEvent
|
2019-08-08 12:53:55
|
poanetwork/blockscout
|
https://api.github.com/repos/poanetwork/blockscout
|
closed
|
Divide API and web application
|
api enhancement priority: high
|
We should be able to launch Blockscout API endpoints separately from the web application. Perhaps, in order to have a strong logical separation, it is worth implementing the API endpoints as a separate Umbrella project.
|
1.0
|
Divide API and web application - We should be able to launch Blockscout API endpoints separately from the web application. Perhaps, in order to have a strong logical separation, it is worth implementing the API endpoints as a separate Umbrella project.
|
priority
|
divide api and web application we should be able to launch blockscout api endpoints separately from the web application perhaps in order to have a strong logical separation it worth to implement api endpoints as a separate umbrella project
| 1
|
306,850
| 9,412,226,017
|
IssuesEvent
|
2019-04-10 03:03:45
|
MattyJNuval/Project_Nexus
|
https://api.github.com/repos/MattyJNuval/Project_Nexus
|
closed
|
Create Equipment System - Create "Equip" function
|
Coding Priority: Very High Task
|
Equip function should change the stats of the player accordingly.
|
1.0
|
Create Equipment System - Create "Equip" function - Equip function should change the stats of the player accordingly.
|
priority
|
create equipment system create equip function equip function should change the stats of the player accordingly
| 1
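The record above asks for an `Equip` function that changes the player's stats accordingly; a minimal sketch under assumed stat and slot names (all names and bonus values are hypothetical, not from the project):

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    bonuses: dict  # stat name -> additive bonus

@dataclass
class Player:
    stats: dict = field(default_factory=lambda: {"attack": 10, "defense": 5})
    equipped: dict = field(default_factory=dict)  # slot -> Item

    def equip(self, slot: str, item: Item) -> None:
        # Undo the bonuses of any item already occupying the slot.
        old = self.equipped.get(slot)
        if old:
            for stat, bonus in old.bonuses.items():
                self.stats[stat] -= bonus
        # Place the new item and apply its bonuses.
        self.equipped[slot] = item
        for stat, bonus in item.bonuses.items():
            self.stats[stat] = self.stats.get(stat, 0) + bonus

p = Player()
p.equip("weapon", Item("Sword", {"attack": 4}))
p.equip("weapon", Item("Axe", {"attack": 7}))  # replaces the Sword
```

Undoing the previous item's bonuses before applying the new ones keeps the stats consistent when a slot is re-equipped.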
|
160,416
| 6,088,919,887
|
IssuesEvent
|
2017-06-19 01:57:43
|
xcat2/xcat-core
|
https://api.github.com/repos/xcat2/xcat-core
|
reopened
|
confignetwork produces errors but returns with 0 return code.
|
component:network priority:high sprint1 type:bug
|
I attempted to use confignetwork instead of confnic's and it spit out error messages but returned 0 as a return code, and as a result the errors went unnoticed:
```
updatenode c460mgt02 --scripts=confignetwork
c460mgt02: [I]: All valid nics and device list:
c460mgt02: [I]: enP1p12s0f0
c460mgt02: [I]: enP1p12s0f1
c460mgt02: [I]: enP1p12s0f2
c460mgt02: [I]: enP1p12s0f3
c460mgt02: [I]: [E]: Error: pair is invalid nic and nicdevice pair.
c460mgt02: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c460mgt02: configure nic and its device : enP1p12s0f0
c460mgt02: [E]: Error : please check nictypes for enP1p12s0f0.
c460mgt02: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c460mgt02: configure nic and its device : enP1p12s0f1
c460mgt02: [E]: Error : please check nictypes for enP1p12s0f1.
c460mgt02: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c460mgt02: configure nic and its device : enP1p12s0f2
c460mgt02: [E]: Error : please check nictypes for enP1p12s0f2.
c460mgt02: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c460mgt02: configure nic and its device : enP1p12s0f3
c460mgt02: [E]: Error : please check nictypes for enP1p12s0f3.
c460mgt02: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c460mgt02: configure nic and its device : [E]: Error: pair is invalid nic and nicdevice pair.
c460mgt02: [E]: Error : please check nictypes for [E]: Error: pair is invalid nic and nicdevice pair..
c460mgt02: postscript: confignetwork exited with code 0
c460mgt02: Running of postscripts has completed.
```
|
1.0
|
confignetwork produces errors but returns with 0 return code. - I attempted to use confignetwork instead of confnic's and it spit out error messages but returned 0 as a return code, and as a result the errors went unnoticed:
```
updatenode c460mgt02 --scripts=confignetwork
c460mgt02: [I]: All valid nics and device list:
c460mgt02: [I]: enP1p12s0f0
c460mgt02: [I]: enP1p12s0f1
c460mgt02: [I]: enP1p12s0f2
c460mgt02: [I]: enP1p12s0f3
c460mgt02: [I]: [E]: Error: pair is invalid nic and nicdevice pair.
c460mgt02: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c460mgt02: configure nic and its device : enP1p12s0f0
c460mgt02: [E]: Error : please check nictypes for enP1p12s0f0.
c460mgt02: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c460mgt02: configure nic and its device : enP1p12s0f1
c460mgt02: [E]: Error : please check nictypes for enP1p12s0f1.
c460mgt02: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c460mgt02: configure nic and its device : enP1p12s0f2
c460mgt02: [E]: Error : please check nictypes for enP1p12s0f2.
c460mgt02: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c460mgt02: configure nic and its device : enP1p12s0f3
c460mgt02: [E]: Error : please check nictypes for enP1p12s0f3.
c460mgt02: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c460mgt02: configure nic and its device : [E]: Error: pair is invalid nic and nicdevice pair.
c460mgt02: [E]: Error : please check nictypes for [E]: Error: pair is invalid nic and nicdevice pair..
c460mgt02: postscript: confignetwork exited with code 0
c460mgt02: Running of postscripts has completed.
```
|
priority
|
confignetwork produces errors but returns with return code i attempted to use confignetwork instead of confnic s and it spit out error messages but returned as a return code and a result the errors went unnoticed updatenode scripts confignetwork all valid nics and device list error pair is invalid nic and nicdevice pair configure nic and its device error please check nictypes for configure nic and its device error please check nictypes for configure nic and its device error please check nictypes for configure nic and its device error please check nictypes for configure nic and its device error pair is invalid nic and nicdevice pair error please check nictypes for error pair is invalid nic and nicdevice pair postscript confignetwork exited with code running of postscripts has completed
| 1
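The confignetwork record shows `[E]:` error lines being printed while the postscript still exits 0; one way a wrapper could derive a nonzero exit code from the output it captured (a sketch of the idea, not xCAT's actual mechanism):

```python
ERROR_MARKER = "[E]:"

def run_and_check(lines):
    """Echo captured script output and return a shell-style exit code:
    0 when no error lines were seen, 1 otherwise."""
    saw_error = False
    for line in lines:
        print(line)
        if ERROR_MARKER in line:
            saw_error = True
    return 1 if saw_error else 0

output = [
    "[I]: All valid nics and device list:",
    "[E]: Error : please check nictypes for enP1p12s0f0.",
]
code = run_and_check(output)
```

In the real postscript the fix would be for the script itself to track failures and `exit 1`, so that `updatenode` can surface them.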
|
661,456
| 22,055,315,506
|
IssuesEvent
|
2022-05-30 12:22:38
|
owncloud/ocis
|
https://api.github.com/repos/owncloud/ocis
|
closed
|
Depth: 0 propfind on https://ocis.owncloud.com/remote.php/webdav/ returns multiple items, when Shares exist
|
Type:Bug Status:Needs-Info Priority:p2-high
|
When performing a Depth: 0 propfind on the legacy endpoint, with existing shares, we get multiple items reported back.
Expected:
The response is supposed to report a single item.
```
05-05 17:13:19:550 [ info sync.httplogger ]: "a243a96f-3087-41eb-ba50-f4309be21bc7: Request: PROPFIND https://ocis.owncloud.com/remote.php/webdav/ Header: { Depth: 0, Authorization: Bearer [redacted], User-Agent: Mozilla/5.0 (Windows) mirall/2.11.0.0-git (ownCloud, windows-10.0.22000 ClientArchitecture: x86_64 OsArchitecture: x86_64), Accept: */*, Content-Type: text/xml; charset=utf-8, X-Request-ID: a243a96f-3087-41eb-ba50-f4309be21bc7, Original-Request-ID: a243a96f-3087-41eb-ba50-f4309be21bc7, Content-Length: 117, } Data: [<?xml version=\"1.0\" encoding=\"utf-8\"?><d:propfind xmlns:d=\"DAV:\"><d:prop><d:getlastmodified/></d:prop>M</d:propfind>\n]"
05-05 17:13:19:888 [ info sync.httplogger ]: "a243a96f-3087-41eb-ba50-f4309be21bc7: Response: PROPFIND 207 https://ocis.owncloud.com/remote.php/webdav/ Header: { Access-Control-Allow-Origin: *, Access-Control-Expose-Headers: Tus-Resumable, Tus-Version, Tus-Extension, Content-Length: 534, Content-Security-Policy: default-src 'none';, Content-Type: application/xml; charset=utf-8, Date: Thu, 05 May 2022 15:13:22 GMT, Dav: 1, 3, extended-mkcol, Referrer-Policy: strict-origin-when-cross-origin, Strict-Transport-Security: max-age=315360000; preload, Tus-Extension: creation,creation-with-upload,checksum,expiration, Tus-Resumable: 1.0.0, Tus-Version: 1.0.0, X-Content-Type-Options: nosniff, X-Download-Options: noopen, X-Frame-Options: SAMEORIGIN, X-Permitted-Cross-Domain-Policies: none, X-Robots-Tag: none, X-Xss-Protection: 0, } Data: [
<d:multistatus xmlns:s=\"http://sabredav.org/ns\" xmlns:d=\"DAV:\" xmlns:oc=\"http://owncloud.org/ns\"><d:response><d:href>/remote.php/webdav/</d:href><d:propstat><d:prop><d:getlastmodified>Thu, 05 May 2022 15:12:56 GMT</d:getlastmodified></d:prop><d:status>HTTP/1.1 200 OK</d:status></d:propstat></d:response><d:response><d:href>/remote.php/webdav/Shares/</d:href><d:propstat><d:prop><d:getlastmodified>Thu, 05 May 2022 15:12:29 GMT</d:getlastmodified></d:prop><d:status>HTTP/1.1 200 OK</d:status></d:propstat></d:response></d:multistatus>
]"
```
|
1.0
|
Depth: 0 propfind on https://ocis.owncloud.com/remote.php/webdav/ returns multiple items, when Shares exist - When performing a Depth: 0 propfind on the legacy endpoint, with existing shares, we get multiple items reported back.
Expected:
The response is supposed to report a single item.
```
05-05 17:13:19:550 [ info sync.httplogger ]: "a243a96f-3087-41eb-ba50-f4309be21bc7: Request: PROPFIND https://ocis.owncloud.com/remote.php/webdav/ Header: { Depth: 0, Authorization: Bearer [redacted], User-Agent: Mozilla/5.0 (Windows) mirall/2.11.0.0-git (ownCloud, windows-10.0.22000 ClientArchitecture: x86_64 OsArchitecture: x86_64), Accept: */*, Content-Type: text/xml; charset=utf-8, X-Request-ID: a243a96f-3087-41eb-ba50-f4309be21bc7, Original-Request-ID: a243a96f-3087-41eb-ba50-f4309be21bc7, Content-Length: 117, } Data: [<?xml version=\"1.0\" encoding=\"utf-8\"?><d:propfind xmlns:d=\"DAV:\"><d:prop><d:getlastmodified/></d:prop>M</d:propfind>\n]"
05-05 17:13:19:888 [ info sync.httplogger ]: "a243a96f-3087-41eb-ba50-f4309be21bc7: Response: PROPFIND 207 https://ocis.owncloud.com/remote.php/webdav/ Header: { Access-Control-Allow-Origin: *, Access-Control-Expose-Headers: Tus-Resumable, Tus-Version, Tus-Extension, Content-Length: 534, Content-Security-Policy: default-src 'none';, Content-Type: application/xml; charset=utf-8, Date: Thu, 05 May 2022 15:13:22 GMT, Dav: 1, 3, extended-mkcol, Referrer-Policy: strict-origin-when-cross-origin, Strict-Transport-Security: max-age=315360000; preload, Tus-Extension: creation,creation-with-upload,checksum,expiration, Tus-Resumable: 1.0.0, Tus-Version: 1.0.0, X-Content-Type-Options: nosniff, X-Download-Options: noopen, X-Frame-Options: SAMEORIGIN, X-Permitted-Cross-Domain-Policies: none, X-Robots-Tag: none, X-Xss-Protection: 0, } Data: [
<d:multistatus xmlns:s=\"http://sabredav.org/ns\" xmlns:d=\"DAV:\" xmlns:oc=\"http://owncloud.org/ns\"><d:response><d:href>/remote.php/webdav/</d:href><d:propstat><d:prop><d:getlastmodified>Thu, 05 May 2022 15:12:56 GMT</d:getlastmodified></d:prop><d:status>HTTP/1.1 200 OK</d:status></d:propstat></d:response><d:response><d:href>/remote.php/webdav/Shares/</d:href><d:propstat><d:prop><d:getlastmodified>Thu, 05 May 2022 15:12:29 GMT</d:getlastmodified></d:prop><d:status>HTTP/1.1 200 OK</d:status></d:propstat></d:response></d:multistatus>
]"
```
|
priority
|
depth propfind on returns multiple items when shares exist when performing a depth propfind on the legacy endpoint with existing shares we get multiple items reported back expected the response is supposed to report a single item request propfind header depth authorization bearer user agent mozilla windows mirall git owncloud windows clientarchitecture osarchitecture accept content type text xml charset utf x request id original request id content length data response propfind header access control allow origin access control expose headers tus resumable tus version tus extension content length content security policy default src none content type application xml charset utf date thu may gmt dav extended mkcol referrer policy strict origin when cross origin strict transport security max age preload tus extension creation creation with upload checksum expiration tus resumable tus version x content type options nosniff x download options noopen x frame options sameorigin x permitted cross domain policies none x robots tag none x xss protection data http ok remote php webdav shares thu may gmt http ok
| 1
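For the PROPFIND record above, a `Depth: 0` request should yield exactly one `d:response`; counting them with the standard library makes the buggy behaviour easy to assert against (the XML below is abbreviated from the report):

```python
import xml.etree.ElementTree as ET

DAV = "{DAV:}"  # the WebDAV namespace URI, in ElementTree's Clark notation

multistatus = """<d:multistatus xmlns:d="DAV:">
  <d:response><d:href>/remote.php/webdav/</d:href></d:response>
  <d:response><d:href>/remote.php/webdav/Shares/</d:href></d:response>
</d:multistatus>"""

def response_count(body: str) -> int:
    """Count the d:response children of a multistatus body."""
    root = ET.fromstring(body)
    return len(root.findall(f"{DAV}response"))

n = response_count(multistatus)  # 2 here, i.e. the reported bug
```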
|
78,015
| 3,508,694,549
|
IssuesEvent
|
2016-01-08 19:04:51
|
twirpx/twirpx-com-public
|
https://api.github.com/repos/twirpx/twirpx-com-public
|
closed
|
search - filter by file types
|
feature request high priority
|
> **Problem description**
> In the site search results it can be hard to find the resource you need, especially if the query is fuzzy (for example, the search results for "numerical methods for solving boundary value problems").
> There are a great many results. At the same time, if you enter a query (for example, the search results for "coursework numerical methods for solving boundary value problems"), the results do not match reality - only two results, and there are no coursework papers among them (although the portal does have them).
> **Proposed solution**
> I suggest adding filters to the search results page that correspond to the categories.
> The resource keeps growing, and I can say with confidence that finding material is becoming more difficult.
> As a stretch goal, instead of fixed filters for every query, the most likely filters could be suggested by analysing the search results; the suggested filters would then differ from query to query.
|
1.0
|
search - filter by file types - > **Problem description**
> In the site search results it can be hard to find the resource you need, especially if the query is fuzzy (for example, the search results for "numerical methods for solving boundary value problems").
> There are a great many results. At the same time, if you enter a query (for example, the search results for "coursework numerical methods for solving boundary value problems"), the results do not match reality - only two results, and there are no coursework papers among them (although the portal does have them).
> **Proposed solution**
> I suggest adding filters to the search results page that correspond to the categories.
> The resource keeps growing, and I can say with confidence that finding material is becoming more difficult.
> As a stretch goal, instead of fixed filters for every query, the most likely filters could be suggested by analysing the search results; the suggested filters would then differ from query to query.
|
priority
|
search filter by file types problem description in the site search results it can be hard to find the resource you need especially if the query is fuzzy for example the search results for numerical methods for solving boundary value problems there are a great many results at the same time if you enter a query for example the search results for coursework numerical methods for solving boundary value problems the results do not match reality only two results and there are no coursework papers among them although the portal does have them proposed solution i suggest adding filters to the search results page that correspond to the categories the resource keeps growing and i can say with confidence that finding material is becoming more difficult as a stretch goal instead of fixed filters for every query the most likely filters could be suggested by analysing the search results and the suggested filters would then differ from query to query
| 1
|
271,108
| 8,476,216,493
|
IssuesEvent
|
2018-10-24 21:12:44
|
lgou2w/ldk
|
https://api.github.com/repos/lgou2w/ldk
|
closed
|
Synchronized future tasks are not executed in the expected server thread.
|
Bug Priority: Highest
|
See below:
```kotlin
else task.invoke()
```
https://github.com/lgou2w/ldk/blob/c4264fb44ae814c973dea10ef5dbd7fc10462440/ldk-bukkit/ldk-bukkit-common/src/main/kotlin/com/lgou2w/ldk/bukkit/Extended.kt#L71-L77
|
1.0
|
Synchronized future tasks are not executed in the expected server thread. - See below:
```kotlin
else task.invoke()
```
https://github.com/lgou2w/ldk/blob/c4264fb44ae814c973dea10ef5dbd7fc10462440/ldk-bukkit/ldk-bukkit-common/src/main/kotlin/com/lgou2w/ldk/bukkit/Extended.kt#L71-L77
|
priority
|
synchronized future tasks are not executed in the expected server thread see below kotlin else task invoke
| 1
|
395,607
| 11,689,382,200
|
IssuesEvent
|
2020-03-05 15:59:06
|
Materials-Consortia/optimade-python-tools
|
https://api.github.com/repos/Materials-Consortia/optimade-python-tools
|
closed
|
Query parameters not handled correctly
|
priority/high
|
According to the discussion in Materials-Consortia/OPTiMaDe#259, we are not handling the query parameters correctly. Indeed the following should be the case:
| URL (Query part) | How to handle it |
|:---:|:---:|
| `?parameter` | Return `400 Bad Request` |
| `?parameter=` | Evaluate value as `""`, i.e., user has deliberately set the parameter with an empty input |
| `?parameter=value` | Evaluate whether `value` makes sense for `parameter` (normal handling) |
|
1.0
|
Query parameters not handled correctly - According to the discussion in Materials-Consortia/OPTiMaDe#259, we are not handling the query parameters correctly. Indeed the following should be the case:
| URL (Query part) | How to handle it |
|:---:|:---:|
| `?parameter` | Return `400 Bad Request` |
| `?parameter=` | Evaluate value as `""`, i.e., user has deliberately set the parameter with an empty input |
| `?parameter=value` | Evaluate whether `value` makes sense for `parameter` (normal handling) |
|
priority
|
query parameters not handled correctly according to the discussion in materials consortia optimade we are not handling the query parameters correctly indeed the following should be the case url query part how to handle it parameter return bad request parameter evaluate value as i e user has deliberately set the parameter with an empty input parameter value evaluate whether value makes sense for parameter normal handling
| 1
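The three query-string cases in the table above can be told apart with `urllib.parse` by checking for a bare key before parsing and keeping blank values afterwards (a sketch of the rule, not the project's implementation; it only handles the single-parameter shapes shown in the table):

```python
from urllib.parse import parse_qsl

def classify(query: str) -> str:
    """Map a raw query string to the handling rule from the table."""
    if query and "=" not in query:
        return "400 Bad Request"   # ?parameter  (no '=' at all)
    pairs = dict(parse_qsl(query, keep_blank_values=True))
    if pairs and all(v == "" for v in pairs.values()):
        return "empty values"      # ?parameter=  (deliberate empty input)
    return "evaluate values"       # ?parameter=value (normal handling)

a = classify("parameter")
b = classify("parameter=")
c = classify("parameter=value")
```

Note that `parse_qsl` with `keep_blank_values=True` turns both `a` and `a=` into `('a', '')`, which is why the bare-key case has to be detected before parsing.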
|
284,201
| 8,736,588,784
|
IssuesEvent
|
2018-12-11 19:57:26
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
Tutorial does not detect that 10 logs were placed in stockpile
|
High Priority
|
Have placed 10 logs in stockpile but tutorial does not continue.
|
1.0
|
Tutorial does not detect that 10 logs were placed in stockpile - Have placed 10 logs in stockpile but tutorial does not continue.
|
priority
|
tutorial does not detect that logs were placed in stockpile have placed logs in stockpile but tutorial does not continue
| 1
|
722,312
| 24,858,288,662
|
IssuesEvent
|
2022-10-27 05:44:04
|
AY2223S1-CS2103-F09-2/tp
|
https://api.github.com/repos/AY2223S1-CS2103-F09-2/tp
|
closed
|
Bugged legend for statistics window
|
type.Bug priority.High
|

The image above shows an example of when the legend goes out of its own bounds. This is a minor bug that needs to be fixed.
|
1.0
|
Bugged legend for statistics window -

The image above shows an example of when the legend goes out of its own bounds. This is a minor bug that needs to be fixed.
|
priority
|
bugged legend for statistics window the image above shows an example of when the legend goes out of its own bounds this is a minor bug that needs to be fixed
| 1
|
547,180
| 16,039,206,752
|
IssuesEvent
|
2021-04-22 04:54:47
|
Automattic/woocommerce-payments
|
https://api.github.com/repos/Automattic/woocommerce-payments
|
closed
|
Checkout Block: wc.wcSettings.setSetting is not a function
|
priority: high size: medium type: bug
|
The checkout block integration is using the deprecated (and now removed) `setSetting` function (https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/3019).
Replicate steps:
1. When using the checkout block, attempt a checkout using an SCA test card (like `4000002500003155`)
2. The error `wc.wcSettings.setSetting is not a function` is displayed above the card details form

|
1.0
|
Checkout Block: wc.wcSettings.setSetting is not a function - The checkout block integration is using the deprecated (and now removed) `setSetting` function (https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/3019).
Replicate steps:
1. When using the checkout block, attempt a checkout using an SCA test card (like `4000002500003155`)
2. The error `wc.wcSettings.setSetting is not a function` is displayed above the card details form

|
priority
|
checkout block wc wcsettings setsetting is not a function the checkout block integration is using the deprecated and now removed setsetting function replicate steps when using the checkout block attempt a checkout using an sca test card like the error wc wcsettings setsetting is not a function is displayed above the card details form
| 1
|
438,745
| 12,644,056,033
|
IssuesEvent
|
2020-06-16 10:53:50
|
ahmedkaludi/pwa-for-wp
|
https://api.github.com/repos/ahmedkaludi/pwa-for-wp
|
closed
|
fatal error 1 star
|
High Priority bug
|
FastCGI sent in stderr: “PHP message: PHP Fatal error: Uncaught Error: Class ‘Elementor\Plugin’ not found in /public_html/
wp-content/plugins/pwa-for-wp/admin/common-function.php:101
https://wordpress.org/support/topic/the-update-killed-the-site/#post-12771835
|
1.0
|
fatal error 1 star - FastCGI sent in stderr: “PHP message: PHP Fatal error: Uncaught Error: Class ‘Elementor\Plugin’ not found in /public_html/
wp-content/plugins/pwa-for-wp/admin/common-function.php:101
https://wordpress.org/support/topic/the-update-killed-the-site/#post-12771835
|
priority
|
fatal error star fastcgi sent in stderr “php message php fatal error uncaught error class ‘elementor plugin’ not found in public html wp content plugins pwa for wp admin common function php
| 1
|
160,227
| 6,085,149,628
|
IssuesEvent
|
2017-06-17 11:58:14
|
k0shk0sh/FastHub
|
https://api.github.com/repos/k0shk0sh/FastHub
|
closed
|
Public Gists not opening
|
Priority: High Status: Completed Type: Bug
|
**App Version: 3.1.0**
**OS Version: 23**
**Model: LENOVO-Lenovo K50a40**
When clicking the link to the gist in the browser, the FastHub app opens automatically and says "No gist found".
While the gist is opening just great in the browser when checked without auto launch to FastHub.
|
1.0
|
Public Gists not opening - **App Version: 3.1.0**
**OS Version: 23**
**Model: LENOVO-Lenovo K50a40**
When clicking the link to the gist in the browser, the FastHub app opens automatically and says "No gist found".
While the gist is opening just great in the browser when checked without auto launch to FastHub.
|
priority
|
public gists not opening app version os version model lenovo lenovo when clicking the link to the gist in the browser the fasthub app opens automatically and says no gist found while the gist is opening just great in the browser when checked without auto launch to fasthub
| 1
|
466,091
| 13,396,867,338
|
IssuesEvent
|
2020-09-03 10:37:15
|
wso2/docs-ei
|
https://api.github.com/repos/wso2/docs-ei
|
closed
|
Doc Feedback:
|
Priority/Highest Severity/Critical micro-integrator
|
Location : https://ei.docs.wso2.com/en/7.1.0/micro-integrator/use-cases/tutorials/file-processing/
Directions state:
"Click the '+' icon in the lower section and add the following drivers and libraries.
(https://github.com/wso2-docs/WSO2_EI/blob/master/Integration-Tutorial-Artifacts/Artifacts-fileProcessingTutorial.zip)
MySQL database driver.
CSV smooks library.
Note
These are copied to the /lib folder of the embedded Micro Integrator."
The README files inside point to different locations. For example: "Copy the smooks-config.xml file in to resources/ directory."
Need guidance on where exactly to put the demo files within micro integrator to complete this exercise.
|
1.0
|
Doc Feedback: - Location : https://ei.docs.wso2.com/en/7.1.0/micro-integrator/use-cases/tutorials/file-processing/
Directions state:
"Click the '+' icon in the lower section and add the following drivers and libraries.
(https://github.com/wso2-docs/WSO2_EI/blob/master/Integration-Tutorial-Artifacts/Artifacts-fileProcessingTutorial.zip)
MySQL database driver.
CSV smooks library.
Note
These are copied to the /lib folder of the embedded Micro Integrator."
The README files inside point to different locations. For example: "Copy the smooks-config.xml file in to resources/ directory."
Need guidance on where exactly to put the demo files within micro integrator to complete this exercise.
|
priority
|
doc feedback location directions state click the icon in the lower section and add the following drivers and libraries mysql database driver csv smooks library note these are copied to the lib folder of the embedded micro integrator the read me files inside say different areas for example copy the smooks config xml file in to resources directory need guidance on where exactly to put the demo files within micro integrator to complete this exercise
| 1
|
821,543
| 30,826,467,544
|
IssuesEvent
|
2023-08-01 20:30:10
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
Can't reset root node type to Node3D in advanced import settings
|
bug topic:editor topic:import high priority
|
### Godot version
4.0.3 and latest 4.1 (2d6b880987bc600cda586b281fcbe26791e92e09)
### System information
Manjaro Linux
### Issue description
When you change the root type of an imported scene (in my case it's a glb file) you can't change it back to a Node3D. Changing it to any other 3D node works though.
### Steps to reproduce
1. Import a glb file
2. Double click it to open the advanced import settings and change the root type to StaticBody3D or any other type that inherits from Node3D. Click reimport.
3. Now try to change it from StaticBody3D to Node3D. This will fail silently.
### Minimal reproduction project
[RootTypeBug.zip](https://github.com/godotengine/godot/files/11725039/RootTypeBug.zip)
|
1.0
|
Can't reset root node type to Node3D in advanced import settings - ### Godot version
4.0.3 and latest 4.1 (2d6b880987bc600cda586b281fcbe26791e92e09)
### System information
Manjaro Linux
### Issue description
When you change the root type of an imported scene (in my case it's a glb file) you can't change it back to a Node3D. Changing it to any other 3D node works though.
### Steps to reproduce
1. Import a glb file
2. Double click it to open the advanced import settings and change the root type to StaticBody3D or any other type that inherits from Node3D. Click reimport.
3. Now try to change it from StaticBody3D to Node3D. This will fail silently.
### Minimal reproduction project
[RootTypeBug.zip](https://github.com/godotengine/godot/files/11725039/RootTypeBug.zip)
|
priority
|
can t reset root node type to in advanced import settings godot version and latest system information manjaro linux issue description when you change the root type of an imported scene in my case it s a glb file you can t change it back to a changing it to any other node works though steps to reproduce import a glb file double click it to open the advanced import settings and change the root type to or any other type that inherits from click reimport now try to change it from to this will fail silently minimal reproduction project
| 1
|
25,187
| 2,677,847,883
|
IssuesEvent
|
2015-03-26 04:38:00
|
cs2103jan2015-w14-4j/main
|
https://api.github.com/repos/cs2103jan2015-w14-4j/main
|
opened
|
change the way the file saved
|
priority.high
|
instead of calling FileStorage (Wei Quan), I will call SystemHandler (Mun Aw) saveToFileForTask method now.
|
1.0
|
change the way the file saved - instead of calling FileStorage (Wei Quan), I will call SystemHandler (Mun Aw) saveToFileForTask method now.
|
priority
|
change the way the file saved instead of calling filestorage wei quan i will call systemhandler mun aw savetofilefortask method now
| 1
|
404,293
| 11,854,774,696
|
IssuesEvent
|
2020-03-25 01:58:27
|
StudioTBA/CoronaIO
|
https://api.github.com/repos/StudioTBA/CoronaIO
|
opened
|
Demo of AI behaviors
|
Priority: High
|
**Is your feature request related to a problem? Please describe.**
Per the requirements, there should be an environment where the different AI behaviors implemented can be demonstrated.
**Describe the solution you would like**
One scene with one `DemoManager` or `GameManager` script that has public options for selecting what agents and what behaviors are demonstrated when the scene is played.
**Describe alternatives you have considered**
Multiple scenes would also be valid.
|
1.0
|
Demo of AI behaviors - **Is your feature request related to a problem? Please describe.**
Per the requirements, there should be an environment where the different AI behaviors implemented can be demonstrated.
**Describe the solution you would like**
One scene with one `DemoManager` or `GameManager` script that has public options for selecting what agents and what behaviors are demonstrated when the scene is played.
**Describe alternatives you have considered**
Multiple scenes would also be valid.
|
priority
|
demo of ai behaviors is your feature request related to a problem please describe per the requirements there should be an environment where the different ai behaviors implemented can be demonstrated describe the solution you would like one scene with one demomanager or gamemanager script that has public options for selecting what agents and what behaviors are demonstrated when the scene is played describe alternatives you have considered multiple scenes would also be valid
| 1
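The `DemoManager`/`GameManager` idea above can be sketched as a registry that maps behaviour names to callables, with a public field selecting what the scene demonstrates when played (the behaviour names and return values here are hypothetical, not from the project):

```python
class DemoManager:
    """Select which AI behaviours a demo scene exercises."""
    def __init__(self):
        self.behaviors = {}   # name -> callable implementing the behaviour
        self.selected = []    # public option: which behaviours to demonstrate

    def register(self, name, fn):
        self.behaviors[name] = fn

    def run(self):
        """Run only the selected behaviours, in order."""
        return [(name, self.behaviors[name]()) for name in self.selected]

mgr = DemoManager()
mgr.register("seek", lambda: "agent moves toward target")
mgr.register("flee", lambda: "agent moves away from target")
mgr.selected = ["flee"]
out = mgr.run()
```

One scene with one such manager keeps the demo configurable from the inspector, which is the main argument for it over the multiple-scenes alternative.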
|
725,665
| 24,970,906,012
|
IssuesEvent
|
2022-11-02 00:59:43
|
WordPress/Learn
|
https://api.github.com/repos/WordPress/Learn
|
closed
|
Add language meta field for lesson plans
|
[Type] Enhancement [Priority] High [Component] Lesson Plans
|
Workshops have an autocomplete meta field for the workshop language, but lesson plans do not. We need to add this same meta field to lesson plans and then filter the lesson plan archive page based on the language the site is currently being viewed in (which is how workshops currently work).
|
1.0
|
Add language meta field for lesson plans - Workshops have an autocomplete meta field for the workshop language, but lesson plans do not. We need to add this same meta field to lesson plans and then filter the lesson plan archive page based on the language the site is currently being viewed in (which is how workshops currently work).
|
priority
|
add language meta field for lesson plans workshops have an autocomplete meta field for the workshop language but lesson plans do not we need to add this same meta field to lesson plans and then filter the lesson plan archive page based on the language the site is currently being viewed in which is how workshops currently work
| 1
|
568,085
| 16,946,377,501
|
IssuesEvent
|
2021-06-28 07:25:00
|
norbit8/MoodleBooster
|
https://api.github.com/repos/norbit8/MoodleBooster
|
closed
|
Adding scraping functionality to the MoodleBooster
|
High Priority enhancement
|
# What
Some scraping functionality that, given a URL and a CSS selector, would return a DOM element.
# Why
We want to add some scraping functionality to the MoodleBooster so we can collect important data and present it to the user in some main location across Moodle's website
For example, ordering the course list by each course's semester.
|
1.0
|
Adding scraping functionality to the MoodleBooster - # What
Some scraping functionality that, given a URL and a CSS selector, would return a DOM element.
# Why
We want to add some scraping functionality to the MoodleBooster so we can collect important data and present it to the user in some main location across Moodle's website
For example, ordering the course list by each course's semester.
|
priority
|
adding scraping functionality to the moodlebooster what some scraping functionality that by giving url and css selector would return dom element why we want to add some scraping functionality to the moodlebooster so we can collect important data and present it to the user in some main location across moodle s website for example ordering the course list by each course s semester
| 1
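The "given a URL and a CSS selector, return a DOM element" idea can be sketched with the standard-library HTML parser. This sketch matches on an `id` attribute rather than implementing a full CSS selector engine, and the class and function names are ours, not part of MoodleBooster:

```python
from html.parser import HTMLParser

class FirstMatch(HTMLParser):
    """Collects the text content of the first element whose id matches."""
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.capturing = False
        self.depth = 0       # nesting depth inside the matched element
        self.text = []

    def handle_starttag(self, tag, attrs):
        if self.capturing:
            self.depth += 1
        elif dict(attrs).get("id") == self.target_id:
            self.capturing = True
            self.depth = 0

    def handle_endtag(self, tag):
        if self.capturing:
            if self.depth == 0:
                self.capturing = False
            else:
                self.depth -= 1

    def handle_data(self, data):
        if self.capturing:
            self.text.append(data)

def select_text(html, element_id):
    """Return the text of the first element with the given id."""
    parser = FirstMatch(element_id)
    parser.feed(html)
    return "".join(parser.text).strip()
```

Inside a browser extension the equivalent is a one-liner, `document.querySelector(selector)`, against the fetched page; the sketch above shows the same idea where no DOM is available.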
|
744,456
| 25,943,832,863
|
IssuesEvent
|
2022-12-16 21:31:58
|
eugenemel/maven
|
https://api.github.com/repos/eugenemel/maven
|
closed
|
Peak bounds may not be working properly
|
bug high_priority SEC Proteomics
|
The peak bounds should extend all the way to the edge of the SEC bounds, but they appear to stop one fraction short:
<img width="1233" alt="ldha_peak_bounds" src="https://user-images.githubusercontent.com/1757701/207170867-13822cb3-961a-4da4-99f6-925b41c3da53.png">
|
1.0
|
Peak bounds may not be working properly - The peak bounds should extend all the way to the edge of the SEC bounds, but they appear to stop one fraction short:
<img width="1233" alt="ldha_peak_bounds" src="https://user-images.githubusercontent.com/1757701/207170867-13822cb3-961a-4da4-99f6-925b41c3da53.png">
|
priority
|
peak bounds may not be working properly the peak bounds should extend all the way to the edge of the sec bounds but they appear to stop one fraction short img width alt ldha peak bounds src
| 1
|
175,280
| 6,548,972,046
|
IssuesEvent
|
2017-09-05 03:19:03
|
kamal1978/LTFHC
|
https://api.github.com/repos/kamal1978/LTFHC
|
opened
|
Patient appears twice in register
|
bug high priority
|
Data of one patient appears twice in the register while it was entered once by the HCW. See picture.
I think the login name associated with this is either bkasanga or ekalwele. I will follow up with them to find out if they uploaded the data.

|
1.0
|
Patient appears twice in register - Data of one patient appears twice in the register while it was entered once by the HCW. See picture.
I think the login name associated with this is either bkasanga or ekalwele. I will follow up with them to find out if they uploaded the data.

|
priority
|
patient appears twice in register data of one patient appears twice in the register while it was entered once by the hcw see picture i think the login name associated with this is either bkasanga or ekalwele i will follow up with them to find out if they uploaded the data
| 1
|
831,728
| 32,059,750,739
|
IssuesEvent
|
2023-09-24 14:16:10
|
oithxs/hira-chan
|
https://api.github.com/repos/oithxs/hira-chan
|
opened
|
Introduce Nginx into the development environment
|
Type: Feature Priority: High
|
## Summary of the change
- Run Laravel behind Nginx instead of with `php artisan serve`
## Purpose
- In #289, Google-account authentication (the callback) was unstable on a server started with `php artisan serve`
## Tasks
- [ ] Either add Nginx to `hira-chan_app` or add a separate Nginx container, and access Laravel through Nginx (preferably the latter)
## Other
None
|
1.0
|
Introduce Nginx into the development environment - ## Summary of the change
- Run Laravel behind Nginx instead of with `php artisan serve`
## Purpose
- In #289, Google-account authentication (the callback) was unstable on a server started with `php artisan serve`
## Tasks
- [ ] Either add Nginx to `hira-chan_app` or add a separate Nginx container, and access Laravel through Nginx (preferably the latter)
## Other
None
|
priority
|
introduce nginx into the development environment summary of the change run laravel behind nginx instead of with php artisan serve purpose in google account authentication the callback was unstable on a server started with php artisan serve tasks either add nginx to hira chan app or add a separate nginx container and access laravel through nginx preferably the latter other none
| 1
|
491,718
| 14,169,841,427
|
IssuesEvent
|
2020-11-12 13:47:15
|
kubermatic/kubermatic
|
https://api.github.com/repos/kubermatic/kubermatic
|
closed
|
Missing CSI DaemonSet on SLES
|
priority/high team/lifecycle
|
**User Story**
When we added SLES, we forgot to include an appropriate DaemonSet manifest in the CSI addon.
**Acceptance criteria**
- [ ] There is a DaemonSet for SLES in addons/csi/nodeplugin.yaml
- [ ] Volumes work properly on SLES
- [ ] the changes are backported to Kubermatic v2.14
|
1.0
|
Missing CSI DaemonSet on SLES - **User Story**
When we added SLES, we forgot to include an appropriate DaemonSet manifest in the CSI addon.
**Acceptance criteria**
- [ ] There is a DaemonSet for SLES in addons/csi/nodeplugin.yaml
- [ ] Volumes work properly on SLES
- [ ] the changes are backported to Kubermatic v2.14
|
priority
|
missing csi daemonset on sles user story when we added sles we forgot to include an appropriate daemonset manifest in the csi addon acceptance criteria there is a daemonset for sles in addons csi nodeplugin yaml volumes work properly on sles the changes are backported to kubermatic
| 1
|
393,652
| 11,622,938,569
|
IssuesEvent
|
2020-02-27 07:51:17
|
on3iro/aeons-end-randomizer
|
https://api.github.com/repos/on3iro/aeons-end-randomizer
|
closed
|
Expedition import schema broken + error not cleared
|
Priority: High bug
|
* [x] the schema does not yet have "required" fields
* [x] empty objects are allowed
* [x] an empty object will cause a runtime error
* [x] error messages are not cleared upon successful import
|
1.0
|
Expedition import schema broken + error not cleared - * [x] the schema does not yet have "required" fields
* [x] empty objects are allowed
* [x] an empty object will cause a runtime error
* [x] error messages are not cleared upon successful import
|
priority
|
expedition import schema broken error not cleared the schema does not yet have required fields empty objects are allowed an empty object will cause a runtime error error messages are not cleared upon successful import
| 1
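The fixes listed above (required fields, rejecting empty objects, surfacing errors before a runtime crash) amount to a stricter validation pass at import time. A minimal sketch, with the field names assumed for illustration rather than taken from the real expedition schema:

```python
import json

# Assumed field names; the real expedition schema defines its own keys.
REQUIRED_FIELDS = ("name", "variantId", "settingsSnapshot")

def validate_expedition_import(raw):
    """Parse and validate an expedition import, rejecting empty objects
    and objects missing required fields instead of failing at runtime."""
    data = json.loads(raw)
    if not isinstance(data, dict) or not data:
        raise ValueError("import must be a non-empty object")
    missing = [f for f in REQUIRED_FIELDS if f not in data]
    if missing:
        raise ValueError("missing required fields: " + ", ".join(missing))
    return data
```

Clearing the stale error message on a successful import is then a matter of resetting the error state whenever this call returns normally.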
|
791,653
| 27,870,821,928
|
IssuesEvent
|
2023-03-21 13:22:10
|
localstack/localstack
|
https://api.github.com/repos/localstack/localstack
|
closed
|
bug: Lambda works on intel but not m1
|
type: bug priority: high
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
We have a localstack setup that works with macbook with intel cpu but fails with m1.
lambda invoke response:
https://github.com/fermanjj/localstack-test
{
"StatusCode": 200,
"FunctionError": "Unhandled",
"LogResult": "",
"ExecutedVersion": "$LATEST"
}
lambda is a Rust binary with provided.al2 runtime. The binary is built with Linux X86-64 architectures cargo-lambda lambda build
If we specify the --arm64 for the build and --architectures arm64 in the cli create-function command then the lambda fails on the intel mac too.
### Expected Behavior
invoking the lambda successfully succeeds
lambda invoke response:
{
"body": ".",
"headers": {
"content-type": "text/plain"
},
"isBase64Encoded": false,
"multiValueHeaders": {
"content-type": [
"text/plain"
]
},
"statusCode": 200
}
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
Recreatable code:
https://github.com/fermanjj/localstack-test
./run.sh
### Environment
```markdown
- OS with M1 Processor:12.6
- LocalStack: localstack/localstack-pro:latest
- OS with Intel Processor: 13.2.1
```
### Anything else?
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.835 WARN --- [ asgi_gw_2] localstack.aws.accounts : Ignoring production AWS credentials provided to LocalStack. Falling back to default account ID.
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.841 DEBUG --- [uest_thread)] l.s.awslambda.lambda_api : Running lambda arn:aws:lambda:us-east-1:000000000000:function:test
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.841 INFO --- [uest_thread)] l.s.a.lambda_executors : Running lambda: arn:aws:lambda:us-east-1:000000000000:function:test
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.842 DEBUG --- [uest_thread)] l.s.a.lambda_extended : Putting invocation event (request ID f487f25b) for Lambda 'arn:aws:lambda:us-east-1:000000000000:function:test' to queue
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.842 DEBUG --- [uest_thread)] l.s.a.lambda_launcher : Executing docker separate execution hook for function arn:aws:lambda:us-east-1:000000000000:function:test
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.950 DEBUG --- [uest_thread)] l.u.c.container_client : Getting the entrypoint for image: localstack/lambda:provided.al2
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.955 DEBUG --- [uest_thread)] l.u.c.docker_sdk_client : Creating container with attributes: {'mount_volumes': None, 'ports': <PortMappings: {}>, 'cap_add': ['NET_ADMIN'], 'cap_drop': None, 'security_opt': None, 'dns': '127.0.0.1', 'additional_flags': '', 'workdir': None, 'privileged': None, 'labels': None, 'command': 'bootstrap', 'detach': False, 'entrypoint': '/tmp/ad2f5444.sh', 'env_vars': {'AWS_ACCESS_KEY_ID': 'test', 'AWS_SECRET_ACCESS_KEY': 'test', 'AWS_REGION': 'us-east-1', 'DOCKER_LAMBDA_USE_STDIN': '1', 'LOCALSTACK_HOSTNAME': '172.24.0.2', 'AWS_ENDPOINT_URL': 'http://172.24.0.2:4566', 'EDGE_PORT': '443', '_HANDLER': 'bootstrap', 'AWS_LAMBDA_FUNCTION_TIMEOUT': '3', 'AWS_LAMBDA_FUNCTION_NAME': 'test', 'AWS_LAMBDA_FUNCTION_VERSION': '$LATEST', 'AWS_LAMBDA_FUNCTION_INVOKED_ARN': 'arn:aws:lambda:us-east-1:000000000000:function:test', 'LOCALSTACK_DEBUG': '1', 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE': 1536, '_LAMBDA_RUNTIME': 'provided.al2', 'AWS_LAMBDA_LOG_GROUP_NAME': '/aws/lambda/test', 'AWS_LAMBDA_LOG_STREAM_NAME': 'test', 'AWS_LAMBDA_RUNTIME_API': 'test.us-east-1.localhost.localstack.cloud:4566', 'LOCALSTACK_HOSTS_ENTRY': 'test.us-east-1.localhost.localstack.cloud'}, 'image_name': 'localstack/lambda:provided.al2', 'interactive': True, 'name': None, 'network': 'localstack-test_default', 'platform': None, 'remove': True, 'self': <localstack.utils.container_utils.docker_sdk_client.SdkDockerClient object at 0xffffb4a97370>, 'tty': False, 'user': 'root'}
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.981 DEBUG --- [uest_thread)] l.u.c.docker_sdk_client : Copying file /tmp/function.zipfile.715678e2/. into 314301750cd67996095a4ffed86f14cb861cd3c35b5000bc248d5a597e857f35:/var/task
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:22.101 DEBUG --- [uest_thread)] l.u.c.docker_sdk_client : Copying file /var/lib/localstack/tmp/ad2f5444.sh into 314301750cd67996095a4ffed86f14cb861cd3c35b5000bc248d5a597e857f35:/tmp/ad2f5444.sh
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:22.156 DEBUG --- [uest_thread)] l.u.c.docker_sdk_client : Starting container 314301750cd67996095a4ffed86f14cb861cd3c35b5000bc248d5a597e857f35
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:22.374 DEBUG --- [uest_thread)] l.s.a.lambda_extended : Waiting for Lambda invocation result of request ID f487f25b
2023-03-09 15:53:25 localstack-test | 2023-03-09T20:53:25.376 ERROR --- [uest_thread)] l.s.a.lambda_extended : Unable to invoke Lambda "arn:aws:lambda:us-east-1:000000000000:function:test":
2023-03-09 15:53:25 localstack-test | Traceback (most recent call last):
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 302, in do_run_lambda_executor
2023-03-09 15:53:25 localstack-test | File "/usr/local/lib/python3.10/queue.py", line 179, in get
2023-03-09 15:53:25 localstack-test | raise Empty
2023-03-09 15:53:25 localstack-test | _queue.Empty
2023-03-09 15:53:25 localstack-test | 2023-03-09T20:53:25.383 DEBUG --- [uest_thread)] l.s.a.lambda_extended : Log output for invocation of Lambda "test":
2023-03-09 15:53:25 localstack-test | 2023-03-09T20:53:25.398 DEBUG --- [uest_thread)] l.u.c.docker_sdk_client : Removing container: 314301750cd67996095a4ffed86f14cb861cd3c35b5000bc248d5a597e857f35
2023-03-09 15:53:25 localstack-test | 2023-03-09T20:53:25.417 INFO --- [uest_thread)] l.s.awslambda.lambda_api : Error executing Lambda function arn:aws:lambda:us-east-1:000000000000:function:test: Timeout - Lambda container did not report result after 3 secs Traceback (most recent call last):
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 302, in do_run_lambda_executor
2023-03-09 15:53:25 localstack-test | File "/usr/local/lib/python3.10/queue.py", line 179, in get
2023-03-09 15:53:25 localstack-test | raise Empty
2023-03-09 15:53:25 localstack-test | _queue.Empty
2023-03-09 15:53:25 localstack-test |
2023-03-09 15:53:25 localstack-test | During handling of the above exception, another exception occurred:
2023-03-09 15:53:25 localstack-test |
2023-03-09 15:53:25 localstack-test | Traceback (most recent call last):
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 462, in run_lambda
2023-03-09 15:53:25 localstack-test | result = LAMBDA_EXECUTOR.execute(
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 350, in execute_local_executor
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 494, in execute
2023-03-09 15:53:25 localstack-test | return do_execute()
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 484, in do_execute
2023-03-09 15:53:25 localstack-test | return _run(func_arn=func_arn)
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py", line 183, in wrapped
2023-03-09 15:53:25 localstack-test | raise e
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py", line 179, in wrapped
2023-03-09 15:53:25 localstack-test | result = func(*args, **kwargs)
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 471, in _run
2023-03-09 15:53:25 localstack-test | raise e
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 467, in _run
2023-03-09 15:53:25 localstack-test | result = self._execute(lambda_function, inv_context)
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 790, in _execute
2023-03-09 15:53:25 localstack-test | result = self.run_lambda_executor(lambda_function=lambda_function, inv_context=inv_context)
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 346, in run_lambda_executor
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 295, in run_lambda_executor
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 306, in do_run_lambda_executor
2023-03-09 15:53:25 localstack-test | localstack.services.awslambda.lambda_executors.InvocationException: Timeout - Lambda container did not report result after 3 secs
2023-03-09 15:53:25 localstack-test |
2023-03-09 15:53:25 localstack-test | 2023-03-09T20:53:25.417 DEBUG --- [uest_thread)] l.s.awslambda.lambda_api : Lambda invocation duration: 3576.09ms
2023-03-09 15:53:25 localstack-test | 2023-03-09T20:53:25.418 INFO --- [ asgi_gw_2] localstack.request.aws : AWS lambda.Invoke => 200
|
1.0
|
bug: Lambda works on intel but not m1 - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
We have a localstack setup that works with macbook with intel cpu but fails with m1.
lambda invoke response:
https://github.com/fermanjj/localstack-test
{
"StatusCode": 200,
"FunctionError": "Unhandled",
"LogResult": "",
"ExecutedVersion": "$LATEST"
}
lambda is a Rust binary with provided.al2 runtime. The binary is built with Linux X86-64 architectures cargo-lambda lambda build
If we specify the --arm64 for the build and --architectures arm64 in the cli create-function command then the lambda fails on the intel mac too.
### Expected Behavior
invoking the lambda successfully succeeds
lambda invoke response:
{
"body": ".",
"headers": {
"content-type": "text/plain"
},
"isBase64Encoded": false,
"multiValueHeaders": {
"content-type": [
"text/plain"
]
},
"statusCode": 200
}
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
Recreatable code:
https://github.com/fermanjj/localstack-test
./run.sh
### Environment
```markdown
- OS with M1 Processor:12.6
- LocalStack: localstack/localstack-pro:latest
- OS with Intel Processor: 13.2.1
```
### Anything else?
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.835 WARN --- [ asgi_gw_2] localstack.aws.accounts : Ignoring production AWS credentials provided to LocalStack. Falling back to default account ID.
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.841 DEBUG --- [uest_thread)] l.s.awslambda.lambda_api : Running lambda arn:aws:lambda:us-east-1:000000000000:function:test
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.841 INFO --- [uest_thread)] l.s.a.lambda_executors : Running lambda: arn:aws:lambda:us-east-1:000000000000:function:test
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.842 DEBUG --- [uest_thread)] l.s.a.lambda_extended : Putting invocation event (request ID f487f25b) for Lambda 'arn:aws:lambda:us-east-1:000000000000:function:test' to queue
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.842 DEBUG --- [uest_thread)] l.s.a.lambda_launcher : Executing docker separate execution hook for function arn:aws:lambda:us-east-1:000000000000:function:test
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.950 DEBUG --- [uest_thread)] l.u.c.container_client : Getting the entrypoint for image: localstack/lambda:provided.al2
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.955 DEBUG --- [uest_thread)] l.u.c.docker_sdk_client : Creating container with attributes: {'mount_volumes': None, 'ports': <PortMappings: {}>, 'cap_add': ['NET_ADMIN'], 'cap_drop': None, 'security_opt': None, 'dns': '127.0.0.1', 'additional_flags': '', 'workdir': None, 'privileged': None, 'labels': None, 'command': 'bootstrap', 'detach': False, 'entrypoint': '/tmp/ad2f5444.sh', 'env_vars': {'AWS_ACCESS_KEY_ID': 'test', 'AWS_SECRET_ACCESS_KEY': 'test', 'AWS_REGION': 'us-east-1', 'DOCKER_LAMBDA_USE_STDIN': '1', 'LOCALSTACK_HOSTNAME': '172.24.0.2', 'AWS_ENDPOINT_URL': 'http://172.24.0.2:4566', 'EDGE_PORT': '443', '_HANDLER': 'bootstrap', 'AWS_LAMBDA_FUNCTION_TIMEOUT': '3', 'AWS_LAMBDA_FUNCTION_NAME': 'test', 'AWS_LAMBDA_FUNCTION_VERSION': '$LATEST', 'AWS_LAMBDA_FUNCTION_INVOKED_ARN': 'arn:aws:lambda:us-east-1:000000000000:function:test', 'LOCALSTACK_DEBUG': '1', 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE': 1536, '_LAMBDA_RUNTIME': 'provided.al2', 'AWS_LAMBDA_LOG_GROUP_NAME': '/aws/lambda/test', 'AWS_LAMBDA_LOG_STREAM_NAME': 'test', 'AWS_LAMBDA_RUNTIME_API': 'test.us-east-1.localhost.localstack.cloud:4566', 'LOCALSTACK_HOSTS_ENTRY': 'test.us-east-1.localhost.localstack.cloud'}, 'image_name': 'localstack/lambda:provided.al2', 'interactive': True, 'name': None, 'network': 'localstack-test_default', 'platform': None, 'remove': True, 'self': <localstack.utils.container_utils.docker_sdk_client.SdkDockerClient object at 0xffffb4a97370>, 'tty': False, 'user': 'root'}
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:21.981 DEBUG --- [uest_thread)] l.u.c.docker_sdk_client : Copying file /tmp/function.zipfile.715678e2/. into 314301750cd67996095a4ffed86f14cb861cd3c35b5000bc248d5a597e857f35:/var/task
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:22.101 DEBUG --- [uest_thread)] l.u.c.docker_sdk_client : Copying file /var/lib/localstack/tmp/ad2f5444.sh into 314301750cd67996095a4ffed86f14cb861cd3c35b5000bc248d5a597e857f35:/tmp/ad2f5444.sh
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:22.156 DEBUG --- [uest_thread)] l.u.c.docker_sdk_client : Starting container 314301750cd67996095a4ffed86f14cb861cd3c35b5000bc248d5a597e857f35
2023-03-09 15:53:22 localstack-test | 2023-03-09T20:53:22.374 DEBUG --- [uest_thread)] l.s.a.lambda_extended : Waiting for Lambda invocation result of request ID f487f25b
2023-03-09 15:53:25 localstack-test | 2023-03-09T20:53:25.376 ERROR --- [uest_thread)] l.s.a.lambda_extended : Unable to invoke Lambda "arn:aws:lambda:us-east-1:000000000000:function:test":
2023-03-09 15:53:25 localstack-test | Traceback (most recent call last):
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 302, in do_run_lambda_executor
2023-03-09 15:53:25 localstack-test | File "/usr/local/lib/python3.10/queue.py", line 179, in get
2023-03-09 15:53:25 localstack-test | raise Empty
2023-03-09 15:53:25 localstack-test | _queue.Empty
2023-03-09 15:53:25 localstack-test | 2023-03-09T20:53:25.383 DEBUG --- [uest_thread)] l.s.a.lambda_extended : Log output for invocation of Lambda "test":
2023-03-09 15:53:25 localstack-test | 2023-03-09T20:53:25.398 DEBUG --- [uest_thread)] l.u.c.docker_sdk_client : Removing container: 314301750cd67996095a4ffed86f14cb861cd3c35b5000bc248d5a597e857f35
2023-03-09 15:53:25 localstack-test | 2023-03-09T20:53:25.417 INFO --- [uest_thread)] l.s.awslambda.lambda_api : Error executing Lambda function arn:aws:lambda:us-east-1:000000000000:function:test: Timeout - Lambda container did not report result after 3 secs Traceback (most recent call last):
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 302, in do_run_lambda_executor
2023-03-09 15:53:25 localstack-test | File "/usr/local/lib/python3.10/queue.py", line 179, in get
2023-03-09 15:53:25 localstack-test | raise Empty
2023-03-09 15:53:25 localstack-test | _queue.Empty
2023-03-09 15:53:25 localstack-test |
2023-03-09 15:53:25 localstack-test | During handling of the above exception, another exception occurred:
2023-03-09 15:53:25 localstack-test |
2023-03-09 15:53:25 localstack-test | Traceback (most recent call last):
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 462, in run_lambda
2023-03-09 15:53:25 localstack-test | result = LAMBDA_EXECUTOR.execute(
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 350, in execute_local_executor
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 494, in execute
2023-03-09 15:53:25 localstack-test | return do_execute()
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 484, in do_execute
2023-03-09 15:53:25 localstack-test | return _run(func_arn=func_arn)
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py", line 183, in wrapped
2023-03-09 15:53:25 localstack-test | raise e
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py", line 179, in wrapped
2023-03-09 15:53:25 localstack-test | result = func(*args, **kwargs)
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 471, in _run
2023-03-09 15:53:25 localstack-test | raise e
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 467, in _run
2023-03-09 15:53:25 localstack-test | result = self._execute(lambda_function, inv_context)
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 790, in _execute
2023-03-09 15:53:25 localstack-test | result = self.run_lambda_executor(lambda_function=lambda_function, inv_context=inv_context)
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 346, in run_lambda_executor
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 295, in run_lambda_executor
2023-03-09 15:53:25 localstack-test | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 306, in do_run_lambda_executor
2023-03-09 15:53:25 localstack-test | localstack.services.awslambda.lambda_executors.InvocationException: Timeout - Lambda container did not report result after 3 secs
2023-03-09 15:53:25 localstack-test |
2023-03-09 15:53:25 localstack-test | 2023-03-09T20:53:25.417 DEBUG --- [uest_thread)] l.s.awslambda.lambda_api : Lambda invocation duration: 3576.09ms
2023-03-09 15:53:25 localstack-test | 2023-03-09T20:53:25.418 INFO --- [ asgi_gw_2] localstack.request.aws : AWS lambda.Invoke => 200
|
priority
|
bug lambda works on intel but not is there an existing issue for this i have searched the existing issues current behavior we have a localstack setup that works with macbook with intel cpu but fails with lambda invoke response statuscode functionerror unhandled logresult executedversion latest lambda is a rust binary with provided runtime the binary is built with linux architectures cargo lambda lambda build if we specify the for the build and architectures in the cli create function command then the lambda fails on the intel mac too expected behavior invoking the lambda successfully succeeds lambda invoke response body headers content type text plain false multivalueheaders content type text plain statuscode how are you starting localstack with a docker compose file steps to reproduce recreatable code run sh environment markdown os with processor localstack localstack localstack pro latest os with intel processor anything else localstack test warn localstack aws accounts ignoring production aws credentials provided to localstack falling back to default account id localstack test debug l s awslambda lambda api running lambda arn aws lambda us east function test localstack test info l s a lambda executors running lambda arn aws lambda us east function test localstack test debug l s a lambda extended putting invocation event request id for lambda arn aws lambda us east function test to queue localstack test debug l s a lambda launcher executing docker separate execution hook for function arn aws lambda us east function test localstack test debug l u c container client getting the entrypoint for image localstack lambda provided localstack test debug l u c docker sdk client creating container with attributes mount volumes none ports cap add cap drop none security opt none dns additional flags workdir none privileged none labels none command bootstrap detach false entrypoint tmp sh env vars aws access key id test aws secret access key test aws region us east docker 
lambda use stdin localstack hostname aws endpoint url edge port handler bootstrap aws lambda function timeout aws lambda function name test aws lambda function version latest aws lambda function invoked arn arn aws lambda us east function test localstack debug aws lambda function memory size lambda runtime provided aws lambda log group name aws lambda test aws lambda log stream name test aws lambda runtime api test us east localhost localstack cloud localstack hosts entry test us east localhost localstack cloud image name localstack lambda provided interactive true name none network localstack test default platform none remove true self tty false user root localstack test debug l u c docker sdk client copying file tmp function zipfile into var task localstack test debug l u c docker sdk client copying file var lib localstack tmp sh into tmp sh localstack test debug l u c docker sdk client starting container localstack test debug l s a lambda extended waiting for lambda invocation result of request id localstack test error l s a lambda extended unable to invoke lambda arn aws lambda us east function test localstack test traceback most recent call last localstack test file opt code localstack venv lib site packages localstack ext services awslambda lambda extended py enc line in do run lambda executor localstack test file usr local lib queue py line in get localstack test raise empty localstack test queue empty localstack test debug l s a lambda extended log output for invocation of lambda test localstack test debug l u c docker sdk client removing container localstack test info l s awslambda lambda api error executing lambda function arn aws lambda us east function test timeout lambda container did not report result after secs traceback most recent call last localstack test file opt code localstack venv lib site packages localstack ext services awslambda lambda extended py enc line in do run lambda executor localstack test file usr local lib queue py line in get 
localstack test raise empty localstack test queue empty localstack test localstack test during handling of the above exception another exception occurred localstack test localstack test traceback most recent call last localstack test file opt code localstack localstack services awslambda lambda api py line in run lambda localstack test result lambda executor execute localstack test file opt code localstack venv lib site packages localstack ext services awslambda lambda extended py enc line in execute local executor localstack test file opt code localstack localstack services awslambda lambda executors py line in execute localstack test return do execute localstack test file opt code localstack localstack services awslambda lambda executors py line in do execute localstack test return run func arn func arn localstack test file opt code localstack localstack utils cloudwatch cloudwatch util py line in wrapped localstack test raise e localstack test file opt code localstack localstack utils cloudwatch cloudwatch util py line in wrapped localstack test result func args kwargs localstack test file opt code localstack localstack services awslambda lambda executors py line in run localstack test raise e localstack test file opt code localstack localstack services awslambda lambda executors py line in run localstack test result self execute lambda function inv context localstack test file opt code localstack localstack services awslambda lambda executors py line in execute localstack test result self run lambda executor lambda function lambda function inv context inv context localstack test file opt code localstack venv lib site packages localstack ext services awslambda lambda extended py enc line in run lambda executor localstack test file opt code localstack venv lib site packages localstack ext services awslambda lambda extended py enc line in run lambda executor localstack test file opt code localstack venv lib site packages localstack ext services awslambda lambda 
extended py enc line in do run lambda executor localstack test localstack services awslambda lambda executors invocationexception timeout lambda container did not report result after secs localstack test localstack test debug l s awslambda lambda api lambda invocation duration localstack test info localstack request aws aws lambda invoke
| 1
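The failure pattern in this record is an architecture mismatch between the compiled `bootstrap` binary and the container host (x86-64 build on an M1, or arm64 build on an Intel Mac). One way to catch it before deploying is to inspect the ELF header's `e_machine` field; the helper below is a sketch of ours, not part of any tool mentioned above:

```python
import struct

# ELF e_machine values for the two architectures involved here.
ELF_MACHINES = {0x3E: "x86_64", 0xB7: "aarch64"}

def elf_arch(blob: bytes) -> str:
    """Return the target architecture encoded in an ELF binary's header."""
    if blob[:4] != b"\x7fELF":
        raise ValueError("not an ELF binary")
    # e_machine is a little-endian u16 at offset 18 of the ELF header.
    (machine,) = struct.unpack_from("<H", blob, 18)
    return ELF_MACHINES.get(machine, "unknown (0x%x)" % machine)
```

Running this over the `bootstrap` binary before `create-function` would flag an aarch64 build being pushed at an x86-64 Lambda runtime, or vice versa.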
|
6,255
| 2,586,543,848
|
IssuesEvent
|
2015-02-17 12:35:11
|
DOAJ/doaj
|
https://api.github.com/repos/DOAJ/doaj
|
closed
|
Clicking 'Edit this User' returns Not found error
|
data cleanup high priority
|
Data cleanup:
- clean up trailing and leading whitespace
In the meantime: we need to change the email address on this record: http://bit.ly/1hwcyor but clicking Edit this user returns an error: https://doaj.org/account/17582652
|
1.0
|
Clicking 'Edit this User' returns Not found error - Data cleanup:
- clean up trailing and leading whitespace
In the meantime: we need to change the email address on this record: http://bit.ly/1hwcyor but clicking Edit this user returns an error: https://doaj.org/account/17582652
|
priority
|
clicking edit this user returns not found error data cleanup clean up trailing and leading whitespace in the meantime we need to change the email address on this record but clicking edit this user returns an error
| 1
|
394,584
| 11,645,513,033
|
IssuesEvent
|
2020-03-01 02:02:58
|
CodetheChangeFoundation/UHN-Mobile-App
|
https://api.github.com/repos/CodetheChangeFoundation/UHN-Mobile-App
|
opened
|
[UHN-MA-54] Number of available responders on Using Screen
|
enhancement high priority logic
|
- Use /users/:id/responders/count
- setInterval
- clearInterval when page is unmounted
|
1.0
|
[UHN-MA-54] Number of available responders on Using Screen - - Use /users/:id/responders/count
- setInterval
- clearInterval when page is unmounted
|
priority
|
number of available responders on using screen use users id responders count setinterval clearinterval when page is unmounted
| 1
|
352,687
| 10,544,975,078
|
IssuesEvent
|
2019-10-02 18:06:53
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
encryption-at-rest cluster expand test: Invalid argument (yb/tserver/header_manager_impl.cc:135): Error parsing field universe key id: expect 4 bytes found 4
|
area/docdb kind/bug priority/high
|
1) Created a 3-node YugabyteDB universe with encryption at rest enabled.
2) Loaded a bunch of data. Txn logs and SSTable files (storage files) were all encrypted as expected.
3) Expanded the universe from 3 to 6 nodes while the workload was still running. Most tablets rebalanced to the new nodes, but the balancing seemed to get stuck at some point.
Upon inspection, the cluster balance as seen from yb-master leader logs was waiting on this:
```
W0928 05:07:36.013736 24768 cluster_balance.cc:227] Skipping add replicas for 260b2de60d1b4b93b1ad7dbab31a1190: Operation failed. Try again. (yb/master/cluster_balance.cc:496): Cannot add replicas. Currently have a total overreplication of 1, when max allowed is 1
W0928 05:07:36.013907 24768 cluster_balance.cc:227] Skipping add replicas for c412bfaf16b543c89da9c5898b2adf70: Operation failed. Try again. (yb/master/cluster_balance.cc:489): Cannot add replicas. Currently remote bootstrapping 4 tablets, when our max allowed is 2
```
But the real cause of the error seems to be this message in the yb-tserver logs on one of the new nodes:
```
I0928 05:18:59.487653 24528 tablet.cc:556] Opening RocksDB at: /mnt/d0/yb-data/tserver/data/rocksdb/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34
I0928 05:18:59.489454 24528 db_impl.cc:401] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9 [R]: Shutting down RocksDB at: /mnt/d0/yb-data/tserver/data/rocksdb/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34
I0928 05:18:59.489470 24528 db_impl.cc:439] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9 [R]: Pending 0 compactions and 0 flushes
E0928 05:18:59.489503 24528 tablet.cc:560] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Failed to open a RocksDB database in directory /mnt/d0/yb-data/tserver/data/rocksdb/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34: Invalid argument (yb/tserver/header_manager_impl.cc:135): Error parsing field universe key id: expect 4 bytes found 4
I0928 05:18:59.489540 24528 tablet_bootstrap.cc:420] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Time spent opening tablet: real 0.003s user 0.000s sys 0.001s
E0928 05:18:59.489579 24528 ts_tablet_manager.cc:1114] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Tablet failed to bootstrap: Illegal state (yb/tablet/tablet.cc:565): Invalid argument (yb/tserver/header_manager_impl.cc:135): Error parsing field universe key id: expect 4 bytes found 4
I0928 05:18:59.489596 24528 tablet_peer.cc:974] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9 [state=FAILED]: Changed state from BOOTSTRAPPING to FAILED
I0928 05:18:59.489605 24528 ts_tablet_manager.cc:1086] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Time spent bootstrapping tablet: real 0.003s user 0.000s sys 0.001s
I0928 05:18:59.489614 24528 tablet_peer.cc:335] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9 [state=FAILED]: Initiating TabletPeer shutdown
I0928 05:18:59.489619 24528 tablet_peer.cc:349] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9 [state=QUIESCING]: Started shutdown from state: FAILED
W0928 05:18:59.489629 24528 ts_tablet_manager.cc:1869] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Remote bootstrap: OpenTablet() failed: Illegal state (yb/tablet/tablet.cc:565): Invalid argument (yb/tserver/header_manager_impl.cc:135): Error parsing field universe key id: expect 4 bytes found 4
I0928 05:18:59.489637 24528 ts_tablet_manager.cc:1872] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Tombstoning tablet after failed remote bootstrap
I0928 05:18:59.489642 24528 ts_tablet_manager.cc:1830] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I0928 05:18:59.489656 24528 tablet_metadata.cc:385] Destroying regular db at: /mnt/d0/yb-data/tserver/data/rocksdb/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34
I0928 05:18:59.489832 24528 tablet_metadata.cc:391] Successfully destroyed regular DB at: /mnt/d0/yb-data/tserver/data/rocksdb/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34
I0928 05:18:59.489981 24528 tablet_metadata.cc:402] Successfully destroyed provisional records DB at: /mnt/d0/yb-data/tserver/data/rocksdb/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34.intents
I0928 05:18:59.494750 24528 ts_tablet_manager.cc:1840] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Tablet deleted. Last logged OpId: { term: 0 index: 0 }
I0928 05:18:59.494773 24528 log.cc:1006] T 356fd22e59574e6f85c6773ab681aa34P 4318548741284f6cabb29fa70b07c2a9: Deleting WAL dir /mnt/d0/yb-data/tserver/wals/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34
I0928 05:18:59.494849 24528 ts_tablet_manager.cc:1913] Deleted transition in progress remote bootstrapping tablet from peer 97b3b1e7151f4d62aa1d8e67f15e64ab for tablet 356fd22e59574e6f85c6773ab681aa34
W0928 05:18:59.494864 24528 tablet_service.cc:1751] Start remote bootstrap failed: Illegal state (yb/tablet/tablet.cc:565): Invalid argument (yb/tserver/header_manager_impl.cc:135): Error parsing field universe key id: expect 4 bytes found 4
```
|
1.0
|
encryption-at-rest cluster expand test: Invalid argument (yb/tserver/header_manager_impl.cc:135): Error parsing field universe key id: expect 4 bytes found 4 - 1) Created a 3-node YugabyteDB universe with encryption at rest enabled.
2) Loaded a bunch of data. Txn logs and SSTable files (storage files) all were encryted as expected.
3) Expanded universe from 3 to 6 nodes while workload was still running. Most tablets rebalanced to the new nodes.. but the balancing seemed to get stuck at some point.
Upon inspection, the cluster balance as seen from yb-master leader logs was waiting on this:
```
W0928 05:07:36.013736 24768 cluster_balance.cc:227] Skipping add replicas for 260b2de60d1b4b93b1ad7dbab31a1190: Operation failed. Try again. (yb/master/cluster_balance.cc:496): Cannot add replicas. Currently have a total overreplication of 1, when max allowed is 1
W0928 05:07:36.013907 24768 cluster_balance.cc:227] Skipping add replicas for c412bfaf16b543c89da9c5898b2adf70: Operation failed. Try again. (yb/master/cluster_balance.cc:489): Cannot add replicas. Currently remote bootstrapping 4 tablets, when our max allowed is 2
```
But the real cause of the error seems to be this message in the yb-tserver logs on one of the new nodes:
```
I0928 05:18:59.487653 24528 tablet.cc:556] Opening RocksDB at: /mnt/d0/yb-data/tserver/data/rocksdb/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34
I0928 05:18:59.489454 24528 db_impl.cc:401] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9 [R]: Shutting down RocksDB at: /mnt/d0/yb-data/tserver/data/rocksdb/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34
I0928 05:18:59.489470 24528 db_impl.cc:439] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9 [R]: Pending 0 compactions and 0 flushes
E0928 05:18:59.489503 24528 tablet.cc:560] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Failed to open a RocksDB database in directory /mnt/d0/yb-data/tserver/data/rocksdb/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34: Invalid argument (yb/tserver/header_manager_impl.cc:135): Error parsing field universe key id: expect 4 bytes found 4
I0928 05:18:59.489540 24528 tablet_bootstrap.cc:420] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Time spent opening tablet: real 0.003s user 0.000s sys 0.001s
E0928 05:18:59.489579 24528 ts_tablet_manager.cc:1114] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Tablet failed to bootstrap: Illegal state (yb/tablet/tablet.cc:565): Invalid argument (yb/tserver/header_manager_impl.cc:135): Error parsing field universe key id: expect 4 bytes found 4
I0928 05:18:59.489596 24528 tablet_peer.cc:974] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9 [state=FAILED]: Changed state from BOOTSTRAPPING to FAILED
I0928 05:18:59.489605 24528 ts_tablet_manager.cc:1086] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Time spent bootstrapping tablet: real 0.003s user 0.000s sys 0.001s
I0928 05:18:59.489614 24528 tablet_peer.cc:335] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9 [state=FAILED]: Initiating TabletPeer shutdown
I0928 05:18:59.489619 24528 tablet_peer.cc:349] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9 [state=QUIESCING]: Started shutdown from state: FAILED
W0928 05:18:59.489629 24528 ts_tablet_manager.cc:1869] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Remote bootstrap: OpenTablet() failed: Illegal state (yb/tablet/tablet.cc:565): Invalid argument (yb/tserver/header_manager_impl.cc:135): Error parsing field universe key id: expect 4 bytes found 4
I0928 05:18:59.489637 24528 ts_tablet_manager.cc:1872] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Tombstoning tablet after failed remote bootstrap
I0928 05:18:59.489642 24528 ts_tablet_manager.cc:1830] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I0928 05:18:59.489656 24528 tablet_metadata.cc:385] Destroying regular db at: /mnt/d0/yb-data/tserver/data/rocksdb/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34
I0928 05:18:59.489832 24528 tablet_metadata.cc:391] Successfully destroyed regular DB at: /mnt/d0/yb-data/tserver/data/rocksdb/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34
I0928 05:18:59.489981 24528 tablet_metadata.cc:402] Successfully destroyed provisional records DB at: /mnt/d0/yb-data/tserver/data/rocksdb/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34.intents
I0928 05:18:59.494750 24528 ts_tablet_manager.cc:1840] T 356fd22e59574e6f85c6773ab681aa34 P 4318548741284f6cabb29fa70b07c2a9: Tablet deleted. Last logged OpId: { term: 0 index: 0 }
I0928 05:18:59.494773 24528 log.cc:1006] T 356fd22e59574e6f85c6773ab681aa34P 4318548741284f6cabb29fa70b07c2a9: Deleting WAL dir /mnt/d0/yb-data/tserver/wals/table-c412bfaf16b543c89da9c5898b2adf70/tablet-356fd22e59574e6f85c6773ab681aa34
I0928 05:18:59.494849 24528 ts_tablet_manager.cc:1913] Deleted transition in progress remote bootstrapping tablet from peer 97b3b1e7151f4d62aa1d8e67f15e64ab for tablet 356fd22e59574e6f85c6773ab681aa34
W0928 05:18:59.494864 24528 tablet_service.cc:1751] Start remote bootstrap failed: Illegal state (yb/tablet/tablet.cc:565): Invalid argument (yb/tserver/header_manager_impl.cc:135): Error parsing field universe key id: expect 4 bytes found 4
```
|
priority
|
encryption at rest cluster expand test invalid argument yb tserver header manager impl cc error parsing field universe key id expect bytes found created a node yugabytedb universe with encryption at rest enabled loaded a bunch of data txn logs and sstable files storage files all were encryted as expected expanded universe from to nodes while workload was still running most tablets rebalanced to the new nodes but the balancing seemed to get stuck at some point upon inspection the cluster balance as seen from yb master leader logs was waiting on this cluster balance cc skipping add replicas for operation failed try again yb master cluster balance cc cannot add replicas currently have a total overreplication of when max allowed is cluster balance cc skipping add replicas for operation failed try again yb master cluster balance cc cannot add replicas currently remote bootstrapping tablets when our max allowed is but real cause seems of error seems to be this error message in yb tserver logs on one of the new nodes tablet cc opening rocksdb at mnt yb data tserver data rocksdb table tablet db impl cc t p shutting down rocksdb at mnt yb data tserver data rocksdb table tablet db impl cc t p pending compactions and flushes tablet cc t p failed to open a rocksdb database in directory mnt yb data tserver data rocksdb table tablet invalid argument yb tserver header manager impl cc error parsing field universe key id expect bytes found tablet bootstrap cc t p time spent opening tablet real user sys ts tablet manager cc t p tablet failed to bootstrap illegal state yb tablet tablet cc invalid argument yb tserver header manager impl cc error parsing field universe key id expect bytes found tablet peer cc t p changed state from bootstrapping to failed ts tablet manager cc t p time spent bootstrapping tablet real user sys tablet peer cc t p initiating tabletpeer shutdown tablet peer cc t p started shutdown from state failed ts tablet manager cc t p remote bootstrap opentablet failed 
illegal state yb tablet tablet cc invalid argument yb tserver header manager impl cc error parsing field universe key id expect bytes found ts tablet manager cc t p tombstoning tablet after failed remote bootstrap ts tablet manager cc t p deleting tablet data with delete state tablet data tombstoned tablet metadata cc destroying regular db at mnt yb data tserver data rocksdb table tablet tablet metadata cc successfully destroyed regular db at mnt yb data tserver data rocksdb table tablet tablet metadata cc successfully destroyed provisional records db at mnt yb data tserver data rocksdb table tablet intents ts tablet manager cc t p tablet deleted last logged opid term index log cc t deleting wal dir mnt yb data tserver wals table tablet ts tablet manager cc deleted transition in progress remote bootstrapping tablet from peer for tablet tablet service cc start remote bootstrap failed illegal state yb tablet tablet cc invalid argument yb tserver header manager impl cc error parsing field universe key id expect bytes found
| 1
|
361,064
| 10,703,678,967
|
IssuesEvent
|
2019-10-24 10:02:25
|
Monika-After-Story/MonikaModDev
|
https://api.github.com/repos/Monika-After-Story/MonikaModDev
|
opened
|
dont add unrecognizable gifts to the daily reacted map
|
enhancement high priority
|
as suggested by @jmwall24 , if a gift is unrecognized, just let it be reactable again. this means users who failed to install spritepacks the first time dont have to wait a full day.
|
1.0
|
dont add unrecognizable gifts to the daily reacted map - as suggested by @jmwall24 , if a gift is unrecognized, just let it be reactable again. this means users who failed to install spritepacks the first time dont have to wait a full day.
|
priority
|
dont add unrecognizable gifts to the daily reacted map as suggested by if a gift is unrecognized just let it be reactable again this means users who failed to install spritepacks the first time dont have to wait a full day
| 1
|
674,483
| 23,052,545,300
|
IssuesEvent
|
2022-07-24 21:05:12
|
mfem/mfem
|
https://api.github.com/repos/mfem/mfem
|
closed
|
hang in hypre_MatvecCommPkgCreate()
|
linalg hpc high-priority stale
|
I've posted this on the Hypre Github page, but I figured it should be here as well, since I'm using the MFEM HypreParMatrix interface.
With Hypre 2.24.0 and MFEM 4.4, I'm trying to construct a rectangular HypreParMatrix using the generic CSR interface in MFEM (https://github.com/mfem/mfem/blob/fef1928708bb455f46aa611ddf73a4fd3d1c1974/linalg/hypre.cpp#L1176), but the code hangs indefinitely upon calling hypre_MatvecCommPkgCreate() when running on more than one process (https://github.com/mfem/mfem/blob/fef1928708bb455f46aa611ddf73a4fd3d1c1974/linalg/hypre.cpp#L1312). I was able to print out the matrices right before, and have attached the files for a 2 processor decomposition here.
[hypre_ParCSRMatrix.00000.txt](https://github.com/mfem/mfem/files/8565080/hypre_ParCSRMatrix.00000.txt)
[hypre_ParCSRMatrix.00001.txt](https://github.com/mfem/mfem/files/8565081/hypre_ParCSRMatrix.00001.txt)
I didn't see this interface being exercised in any of the examples/tests. Should we be using another interface for general rectangular matrices?
For context, it occurs when trying to construct the constraint matrices which I'm giving to the SchurConstrainedHypreSolver class. On a single process, the problem runs as expected, but on two or more processes, it hangs (as described above).
|
1.0
|
hang in hypre_MatvecCommPkgCreate() - I've posted this on the Hypre Github page, but I figured it should be here as well, since I'm using the MFEM HypreParMatrix interface.
With Hypre 2.24.0 and MFEM 4.4, I'm trying to construct a rectangular HypreParMatrix using the generic CSR interface in MFEM (https://github.com/mfem/mfem/blob/fef1928708bb455f46aa611ddf73a4fd3d1c1974/linalg/hypre.cpp#L1176), but the code hangs indefinitely upon calling hypre_MatvecCommPkgCreate() when running on more than one process (https://github.com/mfem/mfem/blob/fef1928708bb455f46aa611ddf73a4fd3d1c1974/linalg/hypre.cpp#L1312). I was able to print out the matrices right before, and have attached the files for a 2 processor decomposition here.
[hypre_ParCSRMatrix.00000.txt](https://github.com/mfem/mfem/files/8565080/hypre_ParCSRMatrix.00000.txt)
[hypre_ParCSRMatrix.00001.txt](https://github.com/mfem/mfem/files/8565081/hypre_ParCSRMatrix.00001.txt)
I didn't see this interface being exercised in any of the examples/tests. Should we be using another interface for general rectangular matrices?
For context, it occurs when trying to construct the constraint matrices which I'm giving to the SchurConstrainedHypreSolver class. On a single process, the problem runs as expected, but on two or more processes, it hangs (as described above).
|
priority
|
hang in hypre matveccommpkgcreate i ve posted this on the hypre github page but i figured it should be here as well since i m using the mfem hypreparmatrix interface with hypre and mfem i m trying to construct a rectangular hypreparmatrix using the generic csr interface in mfem but the code hangs indefinitely upon calling hypre matveccommpkgcreate when running on more than one process i was able to print out the matrices right before and have attached the files for a processor decomposition here i didn t see this interface being exercised in any of the examples tests should we be using another interface for general rectangular matrices for context it occurs when trying to construct the constraint matrices which i m giving to the schurconstrainedhypresolver class on a single process the problem runs as expected but on two or more processes it hangs as described above
| 1
|
636,493
| 20,601,691,826
|
IssuesEvent
|
2022-03-06 11:10:38
|
adirh3/Fluent-Search
|
https://api.github.com/repos/adirh3/Fluent-Search
|
closed
|
FS does not find files on networked drives at startup.
|
bug High Priority
|
.9999 stable. When FS is launched at startup, files from O:\ network share drive are not indexed by FS. When re-launched, it finds all of the relevant files.
|
1.0
|
FS does not find files on networked drives at startup. - .9999 stable. When FS is launched at startup, files from O:\ network share drive are not indexed by FS. When re-launched, it finds all of the relevant files.
|
priority
|
fs does not find files on networked drives at startup stable when fs is launched at startup files from o network share drive are not indexed by fs when re launched it finds all of the relevant files
| 1
|
678,523
| 23,200,720,342
|
IssuesEvent
|
2022-08-01 21:09:19
|
SeekyCt/ppcdis
|
https://api.github.com/repos/SeekyCt/ppcdis
|
closed
|
Auto forceactive symbols in dol referenced by rel
|
enhancement high priority
|
Symbols referenced by external binaries should be forced active in the LCF for a binary automatically since the binary may not reference them itself. This will need:
- [x] Extra labels reworked to give a file with only the labels in other binaries
- [x] An LCF preprocessor
|
1.0
|
Auto forceactive symbols in dol referenced by rel - Symbols referenced by external binaries should be forced active in the LCF for a binary automatically since the binary may not reference them itself. This will need:
- [x] Extra labels reworked to give a file with only the labels in other binaries
- [x] An LCF preprocessor
|
priority
|
auto forceactive symbols in dol referenced by rel symbols referenced by external binaries should be forced active in the lcf for a binary automatically since the binary may not reference them itself this will need extra labels reworked to give a file with only the labels in other binaries an lcf preprocessor
| 1
|
770,497
| 27,042,251,473
|
IssuesEvent
|
2023-02-13 06:47:56
|
sunpy/sunkit-image
|
https://api.github.com/repos/sunpy/sunkit-image
|
reopened
|
Declass ASDA
|
Effort High Feature Request Package Novice Priority Low
|
When pull request #40 is merged in, it would be worthwhile to declass the code into a series of functions.
|
1.0
|
Declass ASDA - When pull request #40 is merged in, it would be worthwhile to declass the code into a series of functions.
|
priority
|
declass asda when pull request is merged in it would be worthwhile to declass the code into a series of functions
| 1
|
649,384
| 21,299,577,653
|
IssuesEvent
|
2022-04-15 00:02:47
|
NUS-PocketShop/PocketShop
|
https://api.github.com/repos/NUS-PocketShop/PocketShop
|
closed
|
Implement Locations
|
sprint3 priority.High
|
Shop:
- [x] Add locations to shop
- [x] Edit locations
- [x] Display location on all shops (and their subviews)
Customer:
- [x] View shops by location (ScrollView)
- [x] Display location on all products (and their subviews)
|
1.0
|
Implement Locations - Shop:
- [x] Add locations to shop
- [x] Edit locations
- [x] Display location on all shops (and their subviews)
Customer:
- [x] View shops by location (ScrollView)
- [x] Display location on all products (and their subviews)
|
priority
|
implement locations shop add locations to shop edit locations display location on all shops and their subviews customer view shops by location scrollview display location on all products and their subviews
| 1
|
373,740
| 11,048,181,827
|
IssuesEvent
|
2019-12-09 20:35:44
|
robotframework/SeleniumLibrary
|
https://api.github.com/repos/robotframework/SeleniumLibrary
|
closed
|
When EventFiringWebDriver is enabled, setting selenium speed triggers exception
|
bug priority: high
|
## Steps to reproduce the issue
* use SeleniumTestability ;) for example to enable EventFiringWebDriver.
* call Set Selenium Speed
## Error messages and additional information
'WebDriver' object has no attribute '_base_execute'
## Expected behavior and actual behavior
No exceptions should be thrown
|
1.0
|
When EventFiringWebDriver is enabled, setting selenium speed triggers exception - ## Steps to reproduce the issue
* use SeleniumTestability ;) for example to enable EventFiringWebDriver.
* call Set Selenium Speed
## Error messages and additional information
'WebDriver' object has no attribute '_base_execute'
## Expected behavior and actual behavior
No exceptions should be thrown
|
priority
|
when eventfiringwebdriver is enabled setting selenium speed triggers exception steps to reproduce the issue use seleniumtestability for example to enable eventfiringwebdriver call set selenium speed error messages and additional information webdriver object has no attribute base execute expected behavior and actual behavior no exceptions should be thrown
| 1
|
155,997
| 5,962,855,279
|
IssuesEvent
|
2017-05-30 01:19:36
|
WazeUSA/WME-Place-Harmonizer
|
https://api.github.com/repos/WazeUSA/WME-Place-Harmonizer
|
closed
|
Remove auto-run feature
|
Enhancement Priority: High
|
Not very useful, and can cause trouble if properties are changed when user isn't paying close attention.
|
1.0
|
Remove auto-run feature - Not very useful, and can cause trouble if properties are changed when user isn't paying close attention.
|
priority
|
remove auto run feature not very useful and can cause trouble if properties are changed when user isn t paying close attention
| 1
|
644,747
| 20,986,762,675
|
IssuesEvent
|
2022-03-29 04:37:31
|
yukiHaga/regex-hunting
|
https://api.github.com/repos/yukiHaga/regex-hunting
|
opened
|
Fix the issue where new user registration via Github fails in production
|
Priority: high
|
## Overview
For some users, Github login does not work.
- Yui can log in
- Yano and Aniki cannot.
Make Github login work for everyone
## Tasks
- [ ] Compare Mimata's sorcery code with my own code and fix it
- [ ] If new user registration via Github fails, show an error flash message and return to the LP page.
## Acceptance criteria
- [ ] Everyone can register as a new user via Github
## References
- [arrangy's Github](https://github.com/kazu-2020/arrangy)
|
1.0
|
Fix the issue where new user registration via Github fails in production - ## Overview
For some users, Github login does not work.
- Yui can log in
- Yano and Aniki cannot.
Make Github login work for everyone
## Tasks
- [ ] Compare Mimata's sorcery code with my own code and fix it
- [ ] If new user registration via Github fails, show an error flash message and return to the LP page.
## Acceptance criteria
- [ ] Everyone can register as a new user via Github
## References
- [arrangy's Github](https://github.com/kazu-2020/arrangy)
|
priority
|
本番環境でgithubを用いた新規会員登録できない問題を解決する 概要 人によっては、githubログインができていない。 yuiさんはできている yanoさんとアニキはできていない。 全ての人がgtihubログインできるようにする やること mimata氏のsorceryのコードと自分のコードを見比べて修正する もし、githubを用いた新規会員登録が失敗した場合、エラーのフラッシュメッセージを出してlpページに戻るようにする。 受け入れ条件 全ての人が、githubを用いて新規会員登録できる 参考記事
| 1
|
335,870
| 10,167,723,471
|
IssuesEvent
|
2019-08-07 18:56:21
|
onaio/reveal-frontend
|
https://api.github.com/repos/onaio/reveal-frontend
|
closed
|
IRS Planning - Save Plan as a Draft
|
Priority: High has pr
|
While planning the user needs to be capable of Saving a Plan as a non-finalized draft.
#### Dependencies
- ~~Non-client Store for the Plan definition data - @moshthepitt @samwata @craigappl let's discuss this point asap~~
#### Requirements
- [x] On all Saves, render feedback indicating Save success/fail
- [ ] On first Save, the webapp waits for server confirmation before navigating to `draft`
- [x] On first Save, the new Plan will be rendered when navigating to `/irs`
#### Interactions
- [ ] Clicking the `Save as a draft` button:
- [x] Plan configuration is POST/PUT to OpenSRP or a Store
- [ ] If saving from `create`: navigate to `/irs/draft/<plan-id>`
- [x] Else: don't navigate
|
1.0
|
IRS Planning - Save Plan as a Draft - While planning the user needs to be capable of Saving a Plan as a non-finalized draft.
#### Dependencies
- ~~Non-client Store for the Plan definition data - @moshthepitt @samwata @craigappl let's discuss this point asap~~
#### Requirements
- [x] On all Saves, render feedback indicating Save success/fail
- [ ] On first Save, the webapp waits for server confirmation before navigating to `draft`
- [x] On first Save, the new Plan will be rendered when navigating to `/irs`
#### Interactions
- [ ] Clicking the `Save as a draft` button:
- [x] Plan configuration is POST/PUT to OpenSRP or a Store
- [ ] If saving from `create`: navigate to `/irs/draft/<plan-id>`
- [x] Else: don't navigate
|
priority
|
irs planning save plan as a draft while planning the user needs to be capable of saving a plan as a non finalized draft dependencies non client store for the plan definition data moshthepitt samwata craigappl let s discuss this point asap requirements on all saves render feedback indicating save success fail on first save the webapp waits for server confirmation before navigating to draft on first save the new plan will be rendered when navigating to irs interactions clicking the save as a draft button plan configuration is post put to opensrp or a store if saving from create navigate to irs draft else don t navigate
| 1
|
175,255
| 6,548,571,522
|
IssuesEvent
|
2017-09-04 23:30:06
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[craftercms] MS Windows delivery doesn't render out of the box
|
bug Priority: Highest!
|
Per our convo.
Steps to Reproduce
===============
* Create a site using Editorial BP
* Use init-site.bat to create the site in Delivery
* Validate the content is on disk
* Hit delivery on 9080 with `crafterSite=`
* You'll get no site is set
|
1.0
|
[craftercms] MS Windows delivery doesn't render out of the box - Per our convo.
Steps to Reproduce
===============
* Create a site using Editorial BP
* Use init-site.bat to create the site in Delivery
* Validate the content is on disk
* Hit delivery on 9080 with `crafterSite=`
* You'll get no site is set
|
priority
|
ms windows delivery doesn t render out of the box per our convo steps to reproduce create a site using editorial bp use init site bat to create the site in delivery validate the content is on disk hit delivery on with craftersite you ll get no site is set
| 1
|
315,300
| 9,608,774,819
|
IssuesEvent
|
2019-05-12 09:42:26
|
aau-giraf/weekplanner
|
https://api.github.com/repos/aau-giraf/weekplanner
|
closed
|
As a citizen I want a time timer so that I know how long is left of my current activity
|
priority: highest type: feature
|
I would like it to look like https://spektrumshop.dk/shop/time-timer-medium-187p.html
|
1.0
|
As a citizen I want a time timer so that I know how long is left of my current activity - I would like it to look like https://spektrumshop.dk/shop/time-timer-medium-187p.html
|
priority
|
as a citizen i want a time timer so that i know how long is left of my current activity i would like it to look like
| 1
|
225,951
| 7,496,580,317
|
IssuesEvent
|
2018-04-08 10:57:32
|
dbriemann/glyph
|
https://api.github.com/repos/dbriemann/glyph
|
opened
|
Add basic CLI options.
|
priority:high type:feature
|
Options needed:
- help
- build
- clean
- version
- new? (this would clone the glyph-zero repo)
- etc...
|
1.0
|
Add basic CLI options. - Options needed:
- help
- build
- clean
- version
- new? (this would clone the glyph-zero repo)
- etc...
|
priority
|
add basic cli options options needed help build clean version new this would clone the glyph zero repo etc
| 1
|
606,819
| 18,768,864,562
|
IssuesEvent
|
2021-11-06 12:58:09
|
AY2122S1-CS2103-F10-1/tp
|
https://api.github.com/repos/AY2122S1-CS2103-F10-1/tp
|
closed
|
[PE-D] Edit-applicant does not work with github
|
type.Bug priority.High
|

<!--session: 1635494326727-c759dd85-3d3b-4195-98ff-13889c61f266-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.High` `type.FunctionalityBug`
original: JoelChanZhiYang/ped#5
|
1.0
|
[PE-D] Edit-applicant does not work with github - 
<!--session: 1635494326727-c759dd85-3d3b-4195-98ff-13889c61f266-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.High` `type.FunctionalityBug`
original: JoelChanZhiYang/ped#5
|
priority
|
edit applicant does not work with github labels severity high type functionalitybug original joelchanzhiyang ped
| 1
|
303,685
| 9,309,573,291
|
IssuesEvent
|
2019-03-25 16:46:06
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
Changing the locale of the email template from email-admin-config.xml didn't have effect
|
Affected/5.4.0-Update1 Priority/Highest Resolution/Not a bug Severity/Critical
|
Tried the following, but when the mail is sent the default locale is picked
```
<configuration type="adminforcedpasswordreset" display="AdminForcedPasswordReset" locale="fr_FR" emailContentType="text/html">
<subject>WSO2 - Réinitialisation forcée du mot de passe administrateur</subject>
<body><![CDATA[<table align="center" cellpadding="0" cellspacing="0" border="0" width="100%"bgcolor="#f0f0f0">
<tr>
<td style="padding: 30px 30px 20px 30px;">
<table cellpadding="0" cellspacing="0" border="0" width="100%" bgcolor="#ffffff" style="max-width: 650px; margin: auto;">
<tr>
<td colspan="2" align="center" style="background-color: #333; padding: 40px;">
<a href="http://wso2.com/" target="_blank"><img src="http://cdn.wso2.com/wso2/newsletter/images/nl-2017/wso2-logo-transparent.png" border="0" /></a>
</td>
</tr>
<tr>
<td colspan="2" align="center" style="padding: 50px 50px 0px 50px;">
<h1 style="padding-right: 0em; margin: 0; line-height: 40px; font-weight:300; font-family: 'Nunito Sans', Arial, Verdana, Helvetica, sans-serif; color: #666; text-align: left; padding-bottom: 1em;">
Réinitialisation du mot de passe administrateur
</h1>
</td>
</tr>
<tr>
...
...
...
```
|
1.0
|
Changing the locale of the email template from email-admin-config.xml didn't have effect - Tried the following, but when the mail is sent the default locale is picked
```
<configuration type="adminforcedpasswordreset" display="AdminForcedPasswordReset" locale="fr_FR" emailContentType="text/html">
<subject>WSO2 - Réinitialisation forcée du mot de passe administrateur</subject>
<body><![CDATA[<table align="center" cellpadding="0" cellspacing="0" border="0" width="100%"bgcolor="#f0f0f0">
<tr>
<td style="padding: 30px 30px 20px 30px;">
<table cellpadding="0" cellspacing="0" border="0" width="100%" bgcolor="#ffffff" style="max-width: 650px; margin: auto;">
<tr>
<td colspan="2" align="center" style="background-color: #333; padding: 40px;">
<a href="http://wso2.com/" target="_blank"><img src="http://cdn.wso2.com/wso2/newsletter/images/nl-2017/wso2-logo-transparent.png" border="0" /></a>
</td>
</tr>
<tr>
<td colspan="2" align="center" style="padding: 50px 50px 0px 50px;">
<h1 style="padding-right: 0em; margin: 0; line-height: 40px; font-weight:300; font-family: 'Nunito Sans', Arial, Verdana, Helvetica, sans-serif; color: #666; text-align: left; padding-bottom: 1em;">
Réinitialisation du mot de passe administrateur
</h1>
</td>
</tr>
<tr>
...
...
...
```
|
priority
|
changing the locale of the email template from email admin config xml didn t have effect tried the following but when the mail is sent the default locale is picked réinitialisation forcée du mot de passe administrateur réinitialisation du mot de passe administrateur
| 1
|
105,375
| 4,234,747,272
|
IssuesEvent
|
2016-07-05 13:07:59
|
Metaswitch/crest
|
https://api.github.com/repos/Metaswitch/crest
|
closed
|
Homer doesn't start properly on all-in-one images
|
bug high-priority
|
#### Symptoms
Start an all-in-one image, log in through Ellis and create an account. The web account creates successfully, but the SIP account fails. It appears that Homer has locked up.
After I restart Homer, everything works fine. Interestingly, it also creates a log file dated an hour earlier than the previous one (and correct according to UTC) - maybe this is somehow related to DST?
#### Impact
Homer locks up until it's manually restarted. Can't create subscribers through Ellis or provide call service through Sprout.
#### Release and environment
Release-98, All-in-One Image on VirtualBox.
#### Steps to reproduce
Start up and All-in-One image. Seems 100% reproducible at present.
|
1.0
|
Homer doesn't start properly on all-in-one images - #### Symptoms
Start an all-in-one image, log in through Ellis and create an account. The web account creates successfully, but the SIP account fails. It appears that Homer has locked up.
After I restart Homer, everything works fine. Interestingly, it also creates a log file dated an hour earlier than the previous one (and correct according to UTC) - maybe this is somehow related to DST?
#### Impact
Homer locks up until it's manually restarted. Can't create subscribers through Ellis or provide call service through Sprout.
#### Release and environment
Release-98, All-in-One Image on VirtualBox.
#### Steps to reproduce
Start up and All-in-One image. Seems 100% reproducible at present.
|
priority
|
homer doesn t start properly on all in one images symptoms start an all in one image log in through ellis and create an account the web account creates successfully but the sip account fails it appears that homer has locked up after i restart homer everything works fine interestingly it also creates a log file dated an hour earlier than the previous one and correct according to utc maybe this is somehow related to dst impact homer locks up until it s manually restarted can t create subscribers through ellis or provide call service through sprout release and environment release all in one image on virtualbox steps to reproduce start up and all in one image seems reproducible at present
| 1
|
197,761
| 6,963,613,357
|
IssuesEvent
|
2017-12-08 18:06:32
|
explorers-nation/SeppOcrClient
|
https://api.github.com/repos/explorers-nation/SeppOcrClient
|
opened
|
Implement Journal Scraping
|
very-high-priority
|
When the app is open flying to a SEPP system should cause that system to get selected in the app, the data to get filled out and sent to the server automagically.
|
1.0
|
Implement Journal Scraping - When the app is open flying to a SEPP system should cause that system to get selected in the app, the data to get filled out and sent to the server automagically.
|
priority
|
implement journal scraping when the app is open flying to a sepp system should cause that system to get selected in the app the data to get filled out and sent to the server automagically
| 1
|
177,370
| 6,582,832,142
|
IssuesEvent
|
2017-09-13 01:20:35
|
HAS-CRM/IssueTracker
|
https://api.github.com/repos/HAS-CRM/IssueTracker
|
closed
|
Prepare UAT Server for Vietnam Testing
|
Priority.High Status.Done Type.ChangeRequest
|
Background:
Irene will want to let user at Vietnam try out at CRM Test Server.
Tested:
- [x] Account : Using HAS-VN User, Create account
- [x] Opportunity : Tested
- [x] Approval Flow : Tested
- [x] Quotation : Tested
- [x] Statistic : Tested
|
1.0
|
Prepare UAT Server for Vietnam Testing - Background:
Irene will want to let user at Vietnam try out at CRM Test Server.
Tested:
- [x] Account : Using HAS-VN User, Create account
- [x] Opportunity : Tested
- [x] Approval Flow : Tested
- [x] Quotation : Tested
- [x] Statistic : Tested
|
priority
|
prepare uat server for vietnam testing background irene will want to let user at vietnam try out at crm test server tested account using has vn user create account opportunity tested approval flow tested quotation tested statistic tested
| 1
|
304,628
| 9,334,294,711
|
IssuesEvent
|
2019-03-28 16:01:50
|
RobotLocomotion/drake
|
https://api.github.com/repos/RobotLocomotion/drake
|
closed
|
Simulator logic invalidates ALL discrete events in a diagram.
|
priority: high team: dynamics type: performance
|
What happens is that when the simulator attempts to handle a single discrete update, all discrete value cache entries in the diagram being simulated get invalidated.
The problematic line of code in the simulator was identified to be within `Simulator<T>::HandleDiscreteUpdate()`, specifically in [this line](https://github.com/RobotLocomotion/drake/blob/109d0564603faf02c95c9e51ac670ab77923414a/systems/analysis/simulator.h#L835), which at the very top level of a `Diagram` invalidates ALL discrete values even if they do not need to be updated.
Relates to #10860. (in that case external publish events at different frequencies is mistakenly invalidating the expensive to compute contact results).
|
1.0
|
Simulator logic invalidates ALL discrete events in a diagram. - What happens is that when the simulator attempts to handle a single discrete update, all discrete value cache entries in the diagram being simulated get invalidated.
The problematic line of code in the simulator was identified to be within `Simulator<T>::HandleDiscreteUpdate()`, specifically in [this line](https://github.com/RobotLocomotion/drake/blob/109d0564603faf02c95c9e51ac670ab77923414a/systems/analysis/simulator.h#L835), which at the very top level of a `Diagram` invalidates ALL discrete values even if they do not need to be updated.
Relates to #10860. (in that case external publish events at different frequencies is mistakenly invalidating the expensive to compute contact results).
|
priority
|
simulator logic invalidates all discrete events in a diagram what happens is that when the simulator attempts to handle a single discrete update all discrete value cache entries in the diagram being simulated get invalidated the problematic line of code in the simulator was identified to be within simulator handlediscreteupdate specifically in which at the very top level of a diagram invalidates all discrete values even if they do not need to be updated relates to in that case external publish events at different frequencies is mistakenly invalidating the expensive to compute contact results
| 1
|
503,561
| 14,594,384,249
|
IssuesEvent
|
2020-12-20 05:21:32
|
myConsciousness/entity-validator
|
https://api.github.com/repos/myConsciousness/entity-validator
|
closed
|
NestedEntityStrategyにコレクションの場合の処理を追加
|
Priority: high Type: new feature
|
# Add New Feature
## 1. Feature details
`NestedEntityStrategy` にコレクションの場合の処理を追加する。
## 2. Why it is necessary
既存機能の拡張に伴う修正。
## 3. How to implement
`NestedEntityStrategy` にコレクションの場合の処理を追加する。
## 4. References
|
1.0
|
NestedEntityStrategyにコレクションの場合の処理を追加 - # Add New Feature
## 1. Feature details
`NestedEntityStrategy` にコレクションの場合の処理を追加する。
## 2. Why it is necessary
既存機能の拡張に伴う修正。
## 3. How to implement
`NestedEntityStrategy` にコレクションの場合の処理を追加する。
## 4. References
|
priority
|
nestedentitystrategyにコレクションの場合の処理を追加 add new feature feature details nestedentitystrategy にコレクションの場合の処理を追加する。 why it is necessary 既存機能の拡張に伴う修正。 how to implement nestedentitystrategy にコレクションの場合の処理を追加する。 references
| 1
|
239,674
| 7,799,917,296
|
IssuesEvent
|
2018-06-09 02:05:02
|
tine20/Tine-2.0-Open-Source-Groupware-and-CRM
|
https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM
|
closed
|
0006108:
add responsible (related) contact to grid columns
|
Crm Mantis high priority
|
**Reported by pschuele on 21 Mar 2012 09:22**
**Version:** Milan (2012-03-1)
add responsible (related) contact to grid columns
|
1.0
|
0006108:
add responsible (related) contact to grid columns - **Reported by pschuele on 21 Mar 2012 09:22**
**Version:** Milan (2012-03-1)
add responsible (related) contact to grid columns
|
priority
|
add responsible related contact to grid columns reported by pschuele on mar version milan add responsible related contact to grid columns
| 1
|
535,984
| 15,702,297,428
|
IssuesEvent
|
2021-03-26 12:25:43
|
rwth-afu/UniPager
|
https://api.github.com/repos/rwth-afu/UniPager
|
closed
|
RabbitMQ-Version: No reconnect if RabbiMQ-Server is restarted
|
Priority: High Type: Bug
|
Am 11.6.19, nach einem vermutlichen Neustart des RabbitMQ-Servers wegen Telemetry-Updates verbinden sich die zuvor verbundenen Sender nicht neu.

Dafür 100% CPU Last durch unipager.
Webinterface zeigt weiterhin "verbunden" an.
|
1.0
|
RabbitMQ-Version: No reconnect if RabbiMQ-Server is restarted - Am 11.6.19, nach einem vermutlichen Neustart des RabbitMQ-Servers wegen Telemetry-Updates verbinden sich die zuvor verbundenen Sender nicht neu.

Dafür 100% CPU Last durch unipager.
Webinterface zeigt weiterhin "verbunden" an.
|
priority
|
rabbitmq version no reconnect if rabbimq server is restarted am nach einem vermutlichen neustart des rabbitmq servers wegen telemetry updates verbinden sich die zuvor verbundenen sender nicht neu dafür cpu last durch unipager webinterface zeigt weiterhin verbunden an
| 1
|
802,689
| 29,044,237,356
|
IssuesEvent
|
2023-05-13 10:48:13
|
karel-brinda/mof-search
|
https://api.github.com/repos/karel-brinda/mof-search
|
closed
|
Verifying memory consumption measurements
|
high-priority paper
|
Our benchmarks report relatively low memory requirements even situations when I would expect more.
When I ran recently the plasmid DB experiment, with `max_ram_gb: 25`, I got the following memory results:
https://github.com/karel-brinda/mof-experiments/blob/7703f62bbb4c04383ed430024617c35fe9ece941/experiments/A60_mof_search_experiments_C/c23_ebiplasmids_2022_12_01__memstream_withcobs_filter_autothr/benchmarks/match_2022_12_01T16_30_11.txt
i.e., mem 11415960 kb = 11.4 GB.
When I looked at the memory consumption of individual COBS instances, and compared it to the highest number among them, I got exactly the same number:
https://github.com/karel-brinda/mof-experiments/blob/7703f62bbb4c04383ed430024617c35fe9ece941/experiments/A60_mof_search_experiments_C/c23_ebiplasmids_2022_12_01__memstream_withcobs_filter_autothr/benchmarks/run_cobs/pseudomonas_aeruginosa__01____all_ebi_plasmids___reads_1___reads_2___reads_3___reads_4.txt
11415960 kb.
Is it even theoretically possible that the Snakemake management wouldn’t increase the memory consumption?
@leoisl Do you have any possible explanation of this?
The conf diff is here:
https://github.com/karel-brinda/mof-experiments/blob/7703f62bbb4c04383ed430024617c35fe9ece941/experiments/A60_mof_search_experiments_C/c23_ebiplasmids_2022_12_01__memstream_withcobs_filter_autothr/config.yaml.diff
The full conf is here:
https://github.com/karel-brinda/mof-experiments/blob/7703f62bbb4c04383ed430024617c35fe9ece941/experiments/A60_mof_search_experiments_C/c23_ebiplasmids_2022_12_01__memstream_withcobs_filter_autothr/config.yaml
|
1.0
|
Verifying memory consumption measurements - Our benchmarks report relatively low memory requirements even situations when I would expect more.
When I ran recently the plasmid DB experiment, with `max_ram_gb: 25`, I got the following memory results:
https://github.com/karel-brinda/mof-experiments/blob/7703f62bbb4c04383ed430024617c35fe9ece941/experiments/A60_mof_search_experiments_C/c23_ebiplasmids_2022_12_01__memstream_withcobs_filter_autothr/benchmarks/match_2022_12_01T16_30_11.txt
i.e., mem 11415960 kb = 11.4 GB.
When I looked at the memory consumption of individual COBS instances, and compared it to the highest number among them, I got exactly the same number:
https://github.com/karel-brinda/mof-experiments/blob/7703f62bbb4c04383ed430024617c35fe9ece941/experiments/A60_mof_search_experiments_C/c23_ebiplasmids_2022_12_01__memstream_withcobs_filter_autothr/benchmarks/run_cobs/pseudomonas_aeruginosa__01____all_ebi_plasmids___reads_1___reads_2___reads_3___reads_4.txt
11415960 kb.
Is it even theoretically possible that the Snakemake management wouldn’t increase the memory consumption?
@leoisl Do you have any possible explanation of this?
The conf diff is here:
https://github.com/karel-brinda/mof-experiments/blob/7703f62bbb4c04383ed430024617c35fe9ece941/experiments/A60_mof_search_experiments_C/c23_ebiplasmids_2022_12_01__memstream_withcobs_filter_autothr/config.yaml.diff
The full conf is here:
https://github.com/karel-brinda/mof-experiments/blob/7703f62bbb4c04383ed430024617c35fe9ece941/experiments/A60_mof_search_experiments_C/c23_ebiplasmids_2022_12_01__memstream_withcobs_filter_autothr/config.yaml
|
priority
|
verifying memory consumption measurements our benchmarks report relatively low memory requirements even situations when i would expect more when i ran recently the plasmid db experiment with max ram gb i got the following memory results i e mem kb gb when i looked at the memory consumption of individual cobs instances and compared it to the highest number among them i got exactly the same number kb is it even theoretically possible that the snakemake management wouldn’t increase the memory consumption leoisl do you have any possible explanation of this the conf diff is here the full conf is here
| 1
|
308,848
| 9,458,381,541
|
IssuesEvent
|
2019-04-17 04:59:43
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
[Doc] Support for renew refresh token property to be configurable per application. IS-5.7.0
|
Complexity/Medium Priority/High Severity/Blocker Type/Docs
|
We need to include documentation on how to configure/use this new feature. Below attachment includes a sample documentation.
[Instructions.docx](https://github.com/wso2/product-is/files/3006781/Instructions.docx)
improvement for: wso2/product-is#4550
|
1.0
|
[Doc] Support for renew refresh token property to be configurable per application. IS-5.7.0 - We need to include documentation on how to configure/use this new feature. Below attachment includes a sample documentation.
[Instructions.docx](https://github.com/wso2/product-is/files/3006781/Instructions.docx)
improvement for: wso2/product-is#4550
|
priority
|
support for renew refresh token property to be configurable per application is we need to include documentation on how to configure use this new feature below attachment includes a sample documentation improvement for product is
| 1
|
785,157
| 27,601,799,049
|
IssuesEvent
|
2023-03-09 10:26:07
|
sebastien-d-me/SebBlog
|
https://api.github.com/repos/sebastien-d-me/SebBlog
|
opened
|
Comments CRUD
|
Priority: High Statut: Not started Type : Front-end Type : Back-end
|
#### Description:
Creation of a CRUD for the comments.
------------
###### Estimated time: 5 day(s)
###### Difficulty: ⭐⭐⭐
|
1.0
|
Comments CRUD - #### Description:
Creation of a CRUD for the comments.
------------
###### Estimated time: 5 day(s)
###### Difficulty: ⭐⭐⭐
|
priority
|
comments crud description creation of a crud for the comments estimated time day s difficulty ⭐⭐⭐
| 1
|
145,783
| 5,581,623,467
|
IssuesEvent
|
2017-03-28 19:18:14
|
CS2103JAN2017-T09-B4/main
|
https://api.github.com/repos/CS2103JAN2017-T09-B4/main
|
closed
|
Remove all completed tasks from filters except the list completed filter
|
priority.high status.ongoing type.enhancement
|
remove from list timed / floating
how about find?
|
1.0
|
Remove all completed tasks from filters except the list completed filter - remove from list timed / floating
how about find?
|
priority
|
remove all completed tasks from filters except the list completed filter remove from list timed floating how about find
| 1
|
801,873
| 28,505,672,789
|
IssuesEvent
|
2023-04-18 21:10:36
|
WordPress/Learn
|
https://api.github.com/repos/WordPress/Learn
|
closed
|
Prioritise content based on site locale switcher
|
[Type] Enhancement [Component] Learn Plugin [Priority] High
|
Content for all content types (courses, lesson plans, and tutorials) should give extra weight to content that is in the same language as what is selected in the global locale switcher. This first needs the language meta field to be added to lesson plans, courses and meetings.
|
1.0
|
Prioritise content based on site locale switcher - Content for all content types (courses, lesson plans, and tutorials) should give extra weight to content that is in the same language as what is selected in the global locale switcher. This first needs the language meta field to be added to lesson plans, courses and meetings.
|
priority
|
prioritise content based on site locale switcher content for all content types courses lesson plans and tutorials should give extra weight to content that is in the same language as what is selected in the global locale switcher this first needs the language meta field to be added to lesson plans courses and meetings
| 1
|
227,451
| 7,533,619,920
|
IssuesEvent
|
2018-04-16 03:37:56
|
phetsims/vegas
|
https://api.github.com/repos/phetsims/vegas
|
closed
|
status bar change requests
|
dev:enhancement priority:2-high
|
Change requests from 4/12/18 design meeting with @amanda-phet @arouinfar @kathy-phet @jonathanolson @ariel-phet:
- [x] add a 4th score display, "Score: ⭐⭐⭐⭐" (decorate ScoreDisplayDiscreteStars)
- [x] add scoreDisplay option to ScoreboardBar (old bar)
- [x] factor out a common bar that is used by both ScoreboardBar and StatusBar
- [x] possibly rename ScoreboardBar and StatusBar to indicate what type of game they are used for
- [x] add an option for bar to float to the top of the window
- [ ] ❌(moved to https://github.com/phetsims/vegas/issues/67) scale height of score display background in LevelSelectionButton
- [ ] ❌(moved to https://github.com/phetsims/vegas/issues/67) add optional "{string} title" for a RichText title to LevelSelectionButton
I'll do the work, @jonathanolson will review.
|
1.0
|
status bar change requests - Change requests from 4/12/18 design meeting with @amanda-phet @arouinfar @kathy-phet @jonathanolson @ariel-phet:
- [x] add a 4th score display, "Score: ⭐⭐⭐⭐" (decorate ScoreDisplayDiscreteStars)
- [x] add scoreDisplay option to ScoreboardBar (old bar)
- [x] factor out a common bar that is used by both ScoreboardBar and StatusBar
- [x] possibly rename ScoreboardBar and StatusBar to indicate what type of game they are used for
- [x] add an option for bar to float to the top of the window
- [ ] ❌(moved to https://github.com/phetsims/vegas/issues/67) scale height of score display background in LevelSelectionButton
- [ ] ❌(moved to https://github.com/phetsims/vegas/issues/67) add optional "{string} title" for a RichText title to LevelSelectionButton
I'll do the work, @jonathanolson will review.
|
priority
|
status bar change requests change requests from design meeting with amanda phet arouinfar kathy phet jonathanolson ariel phet add a score display score ⭐⭐⭐⭐ decorate scoredisplaydiscretestars add scoredisplay option to scoreboardbar old bar factor out a common bar that is used by both scoreboardbar and statusbar possibly rename scoreboardbar and statusbar to indicate what type of game they are used for add an option for bar to float to the top of the window ❌ moved to scale height of score display background in levelselectionbutton ❌ moved to add optional string title for a richtext title to levelselectionbutton i ll do the work jonathanolson will review
| 1
|
587,173
| 17,606,319,099
|
IssuesEvent
|
2021-08-17 17:33:33
|
enwikipedia-acc/waca
|
https://api.github.com/repos/enwikipedia-acc/waca
|
closed
|
Database deadlocks
|
actually quite difficult Priority: High
|
We've had several deadlocks in the database.
### Deadlock 1
This one was encountered by me while deferring a request (txn 1); txn 2 is the "related requests from this IP" query.
```
2020-12-30 19:45:16 0x7fdc802a1700
*** (1) TRANSACTION:
TRANSACTION 225198, ACTIVE 0 sec updating or deleting
mysql tables in use 1, locked 1
LOCK WAIT 11 lock struct(s), heap size 1128, 19 row lock(s), undo log entries 1
UPDATE `request` SET
status = 'Open',
emailsent = '0',
emailconfirm = 'Confirmed',
reserved = NULL,
updateversion = updateversion + 1
WHERE id = '298620' AND updateversion = '5'
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 543 page no 25 n bits 712 index acc_pend_status_mailconf of table `production`.`request` lock_mode X locks rec but not gap waiting
*** (2) TRANSACTION:
TRANSACTION 422062531277184, ACTIVE 0 sec fetching rows
mysql tables in use 1, locked 1
26 lock struct(s), heap size 3488, 97 row lock(s)
SELECT /* SearchHelper */ COUNT(*) FROM request origin WHERE 1 = 1 AND (ip LIKE '1.2.3.4' OR forwardedip LIKE '%1.2.3.4%') AND emailconfirm = 'Confirmed' AND ip <> '127.0.0.1' AND email <> 'acc@toolserver.org' AND id <> '297963'
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 543 page no 3848 n bits 120 index PRIMARY of table `production`.`request` trx id 422062531277184 lock mode S locks rec but not gap waiting
*** WE ROLL BACK TRANSACTION (1)
```
### Deadlock 2
We need to check where this is being done, as it feels like something which should be done as a background maintenance job, but it seems to be done at an odd time, and it's being done multiple times?
```
2020-11-29 16:18:52 0x7f4bbc5eb700
*** (1) TRANSACTION:
TRANSACTION 174553, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 252 lock struct(s), heap size 41080, 5608 row lock(s)
DELETE FROM antispoofcache WHERE timestamp < date_sub(now(), INTERVAL 3 HOUR)
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 530 page no 3 n bits 144 index PRIMARY of table `production`.`antispoofcache` trx id 174553 lock_mode X waiting
Record lock, heap no 57 PHYSICAL RECORD: n_fields 7; compact format; info bits 0
*** (2) TRANSACTION:
TRANSACTION 174554, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
252 lock struct(s), heap size 41080, 5608 row lock(s)
DELETE FROM antispoofcache WHERE timestamp < date_sub(now(), INTERVAL 3 HOUR)
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 530 page no 3 n bits 144 index PRIMARY of table `production`.`antispoofcache` trx id 174554 lock mode S
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 530 page no 3 n bits 144 index PRIMARY of table `production`.`antispoofcache` trx id 174554 lock_mode X waiting
Record lock, heap no 57 PHYSICAL RECORD: n_fields 7; compact format; info bits 0
```
This also occurred at 2020-11-20 01:42:43, 2020-11-14 03:50:17, and 2020-11-08 03:12:33
### Deadlock 3
This was a spam account request, so not a big deal that we lost the data from this.
```
#0 /srv/production/includes/DataObjects/Request.php(67): PDOStatement->execute()
#1 /srv/production/includes/Pages/Request/PageRequestAccount.php(157): Waca\DataObjects\Request->save()
#2 /srv/production/includes/Pages/Request/PageRequestAccount.php(64): Waca\Pages\Request\PageRequestAccount->saveAsEmailConfirmation(Object(Waca\DataObjects\Request), Object(Waca\DataObjects\Comment))
#3 /srv/production/includes/Tasks/PageBase.php(102): Waca\Pages\Request\PageRequestAccount->main()
#4 /srv/production/includes/Tasks/PageBase.php(361): Waca\Tasks\PageBase->runPage()
#5 /srv/production/includes/Tasks/PublicInterfacePageBase.php(23): Waca\Tasks\PageBase->execute()
#6 /srv/production/includes/WebStart.php(191): Waca\Tasks\PublicInterfacePageBase->execute()
#7 /srv/production/includes/WebStart.php(104): Waca\WebStart->main()
#8 /srv/production/index.php(27): Waca\WebStart->run()
#9 {main}
```
```
2020-12-30 20:03:46 0x7fdc802ec700
*** (1) TRANSACTION:
TRANSACTION 225336, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 165 lock struct(s), heap size 24696, 2169 row lock(s), undo log entries 1
INSERT INTO `request` (
email, ip, name, status, date, emailsent,
emailconfirm, reserved, useragent, forwardedip
) VALUES (
'email@address.example', '172.16.0.164', 'Ronalddot', 'Open', CURRENT_TIMESTAMP(), '0',
'confirmhash', NULL, 'ua', 'ipaddr'
)
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 543 page no 3596 n bits 712 index acc_pend_status_mailconf of table `production`.`request` trx id 225336 lock_mode X locks gap before rec insert intention waiting
Record lock, heap no 533 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
*** (2) TRANSACTION:
TRANSACTION 225337, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
165 lock struct(s), heap size 24696, 2169 row lock(s), undo log entries 1
INSERT INTO `request` (
email, ip, name, status, date, emailsent,
emailconfirm, reserved, useragent, forwardedip
) VALUES (
'email@address.example', '172.16.0.164', 'Greggfen', 'Open', CURRENT_TIMESTAMP(), '0',
'confirmhash', NULL, 'ua', 'ipaddr'
)
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 543 page no 3596 n bits 712 index acc_pend_status_mailconf of table `production`.`request` trx id 225337 lock mode S
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 543 page no 3596 n bits 712 index acc_pend_status_mailconf of table `production`.`request` trx id 225337 lock_mode X locks gap before rec insert intention waiting
Record lock, heap no 580 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
*** WE ROLL BACK TRANSACTION (2)
```
|
1.0
|
Database deadlocks - We've had several deadlocks in the database.
### Deadlock 1
This one was encountered by me while deferring a request (txn 1); txn 2 is the "related requests from this IP" query.
```
2020-12-30 19:45:16 0x7fdc802a1700
*** (1) TRANSACTION:
TRANSACTION 225198, ACTIVE 0 sec updating or deleting
mysql tables in use 1, locked 1
LOCK WAIT 11 lock struct(s), heap size 1128, 19 row lock(s), undo log entries 1
UPDATE `request` SET
status = 'Open',
emailsent = '0',
emailconfirm = 'Confirmed',
reserved = NULL,
updateversion = updateversion + 1
WHERE id = '298620' AND updateversion = '5'
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 543 page no 25 n bits 712 index acc_pend_status_mailconf of table `production`.`request` lock_mode X locks rec but not gap waiting
*** (2) TRANSACTION:
TRANSACTION 422062531277184, ACTIVE 0 sec fetching rows
mysql tables in use 1, locked 1
26 lock struct(s), heap size 3488, 97 row lock(s)
SELECT /* SearchHelper */ COUNT(*) FROM request origin WHERE 1 = 1 AND (ip LIKE '1.2.3.4' OR forwardedip LIKE '%1.2.3.4%') AND emailconfirm = 'Confirmed' AND ip <> '127.0.0.1' AND email <> 'acc@toolserver.org' AND id <> '297963'
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 543 page no 3848 n bits 120 index PRIMARY of table `production`.`request` trx id 422062531277184 lock mode S locks rec but not gap waiting
*** WE ROLL BACK TRANSACTION (1)
```
### Deadlock 2
We need to check where this is being done, as it feels like something which should be done as a background maintenance job, but it seems to be done at an odd time, and it's being done multiple times?
```
2020-11-29 16:18:52 0x7f4bbc5eb700
*** (1) TRANSACTION:
TRANSACTION 174553, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 252 lock struct(s), heap size 41080, 5608 row lock(s)
DELETE FROM antispoofcache WHERE timestamp < date_sub(now(), INTERVAL 3 HOUR)
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 530 page no 3 n bits 144 index PRIMARY of table `production`.`antispoofcache` trx id 174553 lock_mode X waiting
Record lock, heap no 57 PHYSICAL RECORD: n_fields 7; compact format; info bits 0
*** (2) TRANSACTION:
TRANSACTION 174554, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
252 lock struct(s), heap size 41080, 5608 row lock(s)
DELETE FROM antispoofcache WHERE timestamp < date_sub(now(), INTERVAL 3 HOUR)
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 530 page no 3 n bits 144 index PRIMARY of table `production`.`antispoofcache` trx id 174554 lock mode S
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 530 page no 3 n bits 144 index PRIMARY of table `production`.`antispoofcache` trx id 174554 lock_mode X waiting
Record lock, heap no 57 PHYSICAL RECORD: n_fields 7; compact format; info bits 0
```
This also occurred at 2020-11-20 01:42:43, 2020-11-14 03:50:17, and 2020-11-08 03:12:33
### Deadlock 3
This was a spam account request, so not a big deal that we lost the data from this.
```
#0 /srv/production/includes/DataObjects/Request.php(67): PDOStatement->execute()
#1 /srv/production/includes/Pages/Request/PageRequestAccount.php(157): Waca\DataObjects\Request->save()
#2 /srv/production/includes/Pages/Request/PageRequestAccount.php(64): Waca\Pages\Request\PageRequestAccount->saveAsEmailConfirmation(Object(Waca\DataObjects\Request), Object(Waca\DataObjects\Comment))
#3 /srv/production/includes/Tasks/PageBase.php(102): Waca\Pages\Request\PageRequestAccount->main()
#4 /srv/production/includes/Tasks/PageBase.php(361): Waca\Tasks\PageBase->runPage()
#5 /srv/production/includes/Tasks/PublicInterfacePageBase.php(23): Waca\Tasks\PageBase->execute()
#6 /srv/production/includes/WebStart.php(191): Waca\Tasks\PublicInterfacePageBase->execute()
#7 /srv/production/includes/WebStart.php(104): Waca\WebStart->main()
#8 /srv/production/index.php(27): Waca\WebStart->run()
#9 {main}
```
```
2020-12-30 20:03:46 0x7fdc802ec700
*** (1) TRANSACTION:
TRANSACTION 225336, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 165 lock struct(s), heap size 24696, 2169 row lock(s), undo log entries 1
INSERT INTO `request` (
email, ip, name, status, date, emailsent,
emailconfirm, reserved, useragent, forwardedip
) VALUES (
'email@address.example', '172.16.0.164', 'Ronalddot', 'Open', CURRENT_TIMESTAMP(), '0',
'confirmhash', NULL, 'ua', 'ipaddr'
)
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 543 page no 3596 n bits 712 index acc_pend_status_mailconf of table `production`.`request` trx id 225336 lock_mode X locks gap before rec insert intention waiting
Record lock, heap no 533 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
*** (2) TRANSACTION:
TRANSACTION 225337, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
165 lock struct(s), heap size 24696, 2169 row lock(s), undo log entries 1
INSERT INTO `request` (
email, ip, name, status, date, emailsent,
emailconfirm, reserved, useragent, forwardedip
) VALUES (
'email@address.example', '172.16.0.164', 'Greggfen', 'Open', CURRENT_TIMESTAMP(), '0',
'confirmhash', NULL, 'ua', 'ipaddr'
)
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 543 page no 3596 n bits 712 index acc_pend_status_mailconf of table `production`.`request` trx id 225337 lock mode S
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 543 page no 3596 n bits 712 index acc_pend_status_mailconf of table `production`.`request` trx id 225337 lock_mode X locks gap before rec insert intention waiting
Record lock, heap no 580 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
*** WE ROLL BACK TRANSACTION (2)
```
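Since InnoDB resolves this by rolling back one victim ("WE ROLL BACK TRANSACTION (2)", surfacing to the client as MySQL error 1213), the losing `INSERT` could simply be retried instead of losing the request. The sketch below is an assumed pattern, not the tool's PHP code: `DeadlockError`, `run_with_retry`, and `flaky_insert` are all hypothetical names, and the flaky function merely simulates a transaction that deadlocks twice before succeeding.

```python
import time

class DeadlockError(Exception):
    """Stand-in for a driver exception carrying MySQL error 1213."""

def run_with_retry(txn, attempts=3, backoff=0.0):
    """Re-run the whole transaction if it was chosen as the deadlock victim."""
    for attempt in range(1, attempts + 1):
        try:
            return txn()
        except DeadlockError:
            if attempt == attempts:
                raise
            time.sleep(backoff * attempt)  # brief backoff before retrying

# Demo: a transaction that "deadlocks" twice, then succeeds.
calls = {"n": 0}
def flaky_insert():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError("Deadlock found when trying to get lock")
    return "inserted"

result = run_with_retry(flaky_insert)
print(result, calls["n"])  # → inserted 3
```

The key point is that the retry must re-run the entire transaction (the rollback undid all of its work), not just the last statement.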