| column | dtype | range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 – 19 |
| repo | stringlengths | 5 – 112 |
| repo_url | stringlengths | 34 – 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 – 855 |
| labels | stringlengths | 4 – 721 |
| body | stringlengths | 1 – 261k |
| index | stringclasses | 13 values |
| text_combine | stringlengths | 96 – 261k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 240k |
| binary_label | int64 | 0 – 1 |
Unnamed: 0: 646,684 | id: 21,056,310,707 | type: IssuesEvent | created_at: 2022-04-01 03:56:32
repo: oasis-engine/engine | repo_url: https://api.github.com/repos/oasis-engine/engine | action: closed
title: undelete _currentEvents in physics packages
labels: bug Physical high priority
body:
When two colliders collide and a script deletes the component, its info still exists in `_currentEvents`, which causes `undefined` accesses. To fix this problem, we should consider two things:
1. clear the relevant index in `_currentEvents` after the entity is destroyed.
2. delete all objects after the frame ends (not in the middle of a frame).
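The two fixes above can be sketched generically. This is a minimal Python sketch (the engine itself is TypeScript, and every name here is hypothetical): deletion is deferred to the end of the frame, and a destroyed entity's entries are purged from the current-events list before it is freed.

```python
class PhysicsScene:
    """Hypothetical sketch of the two cleanup strategies described above."""

    def __init__(self):
        self._current_events = []   # pairs of colliders that touched this frame
        self._pending_deletes = []  # entities queued for end-of-frame deletion

    def destroy_entity(self, entity):
        # Strategy 2: never delete mid-frame; queue the entity instead.
        self._pending_deletes.append(entity)

    def end_frame(self):
        for entity in self._pending_deletes:
            # Strategy 1: purge every event that references the entity
            # before it is actually freed.
            self._current_events = [
                (a, b) for (a, b) in self._current_events
                if a is not entity and b is not entity
            ]
            entity.free()
        self._pending_deletes.clear()
```

With this shape, no callback iterating `_current_events` mid-frame can observe a freed entity.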
index: 1.0 | label: priority | binary_label: 1
Unnamed: 0: 443,247 | id: 12,768,979,354 | type: IssuesEvent | created_at: 2020-06-30 02:10:18
repo: qlcchain/go-qlc | repo_url: https://api.github.com/repos/qlcchain/go-qlc | action: closed
title: enable the RPC module according to the configuration file
labels: Priority: High Type: Enhancement
body:
- enable the RPC module according to the configuration file
- by default, enable all RPC modules in test-network mode
- by default, disable all RPC modules intended for enterprise applications in main-net mode
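The defaulting rules above can be sketched as follows. This is a hypothetical Python sketch (the real project is Go, and the module names and config key are illustrative only): an explicit list in the config wins; otherwise the defaults depend on the network mode.

```python
# Assumed, illustrative module sets; not taken from go-qlc itself.
ENTERPRISE_MODULES = {"settlement", "dod"}
ALL_MODULES = {"ledger", "net", "settlement", "dod"}


def enabled_modules(config: dict, test_mode: bool) -> set:
    """Return the set of RPC modules to enable.

    An explicit "rpc_modules" list in the config always wins; otherwise
    the defaults follow the rules described in the issue.
    """
    explicit = config.get("rpc_modules")
    if explicit is not None:
        return set(explicit)
    if test_mode:
        return set(ALL_MODULES)               # test net: everything on
    return ALL_MODULES - ENTERPRISE_MODULES   # main net: enterprise modules off
```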
index: 1.0 | label: priority | binary_label: 1
Unnamed: 0: 661,920 | id: 22,095,864,580 | type: IssuesEvent | created_at: 2022-06-01 10:03:45
repo: mantidproject/mantid | repo_url: https://api.github.com/repos/mantidproject/mantid | action: closed
title: Project recovery on linux uses wrong checkpoint
labels: High Priority Bug ISIS Team: Core
body:
**Describe the bug**
Project recovery on IDAaaS always seems to use the previous recovery file.
**To Reproduce**
(1) Set project recovery to save every ~5s (File > Settings > General)
(2) Make a workspace
```
CreateWorkspace(DataX=range(12), DataY=range(12), DataE=range(12), NSpec=4, OutputWorkspace='NewWorkspace')
```
(3) Wait for it to save the project (see this in the log at debug level)
(4) Crash Mantid with the `Segfault` algorithm with `DryRun=False`
(5) Open workbench again (project recovery should give a pop-up asking to restore the workspace, but not on IDAaaS)
(6) Make 2 workspaces
```
CreateWorkspace(DataX=range(12), DataY=range(12), DataE=range(12), NSpec=4, OutputWorkspace='NewWorkspace')
CreateWorkspace(DataX=range(12), DataY=range(12), DataE=range(12), NSpec=4, OutputWorkspace='NewWorkspace2')
```
(7) Repeat steps (3-5) - this time I see a project recovery pop-up but it only restores one workspace
**Platform/Version (please complete the following information):**
- Mantid nightly 9th May on IDAaaS
index: 1.0 | label: priority | binary_label: 1
Unnamed: 0: 791,814 | id: 27,878,473,396 | type: IssuesEvent | created_at: 2023-03-21 17:33:21
repo: vscentrum/vsc-software-stack | repo_url: https://api.github.com/repos/vscentrum/vsc-software-stack | action: opened
title: funannotate
labels: difficulty: easy new priority: high Python site:ugent
body:
* link to support ticket: [#2023031360001037](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=113595)
* website: https://funannotate.readthedocs.io
* installation docs: https://funannotate.readthedocs.io/en/latest/install.html
* toolchain: `foss/2021b`
* easyblock to use: `...`
* required dependencies:
* see https://github.com/nextgenusfs/funannotate/blob/master/setup.py
* notes:
* requires Python 3.9?
* effort: *(TBD)*
index: 1.0 | label: priority | binary_label: 1
Unnamed: 0: 119,994 | id: 4,778,910,555 | type: IssuesEvent | created_at: 2016-10-27 20:46:22
repo: OneNoteDev/WebClipper | repo_url: https://api.github.com/repos/OneNoteDev/WebClipper | action: closed
title: We will create a blank page if an existing user tries clipping a PDF but does not complete the permissions process
labels: bug high-priority
body:
New PDF mode scenario:
1. A pre-3.2.9 Clipper user attempts to clip a PDF.
2. We have permission to create a new page for the clip, and so we do.
3. We don't have permission to read and update that page, however.
4. We tell them, "We've added features to the Web Clipper that require new permissions. To accept them, please sign out and sign back in."
5. They decide, "Eh, no, thanks."
Broken experience: a page with just a citation in the user's notebook, created by the Clipper.
Better experience: that page never gets created.
We need a permissions check before creating the new page.
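The scenario above reduces to moving the permission check ahead of the page creation. Here is a minimal Python sketch of that flow (the real Clipper is TypeScript, and every name here - `check_permissions`, `create_page`, `update_page` - is hypothetical):

```python
def clip_pdf(api, clip):
    """Create and fill a clip page only when all required permissions exist."""
    perms = api.check_permissions()
    if not (perms.can_create and perms.can_update):
        # The old flow created the page first and only then hit the missing
        # update permission, leaving an orphan page in the user's notebook.
        raise PermissionError(
            "We've added features that require new permissions. "
            "Please sign out and sign back in."
        )
    page = api.create_page(clip)
    api.update_page(page, clip.content)
    return page
```

If the user declines to re-authenticate, nothing is ever created, which is exactly the "better experience" the issue asks for.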
index: 1.0 | label: priority | binary_label: 1
Unnamed: 0: 689,035 | id: 23,604,782,616 | type: IssuesEvent | created_at: 2022-08-24 07:17:18
repo: 1ForeverHD/TopbarPlus | repo_url: https://api.github.com/repos/1ForeverHD/TopbarPlus | action: opened
title: Improve VR compatibility
labels: Type: Enhancement Type: Bug Scope: Core Priority: High
body:
Ignore VR devices within `guiService.MenuOpened` and `guiService.MenuClosed` (line 1225 and below of IconController), because their menu button doesn't open the escape menu; it makes the GUI interactable (therefore we don't want to hide their topbar icons), i.e.

Credit to @cl1ents (ievnnnnnnnnnnnnnnnnn) for this
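The proposed guard amounts to early-returning from both menu handlers when a VR device is active. A minimal Python sketch (the actual project is Roblox Luau; the handler and method names here are hypothetical):

```python
def on_menu_opened(ui, vr_enabled: bool):
    """Handler for the menu-opened event."""
    if vr_enabled:
        # On VR devices the menu button doesn't open the escape menu,
        # it just makes the GUI interactable, so keep the icons visible.
        return
    ui.hide_topbar_icons()
```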
index: 1.0 | label: priority | binary_label: 1
Unnamed: 0: 251,483 | id: 8,015,981,908 | type: IssuesEvent | created_at: 2018-07-25 11:55:49
repo: BEXIS2/Core | repo_url: https://api.github.com/repos/BEXIS2/Core | action: closed
title: A User with read rights can download files.
labels: Priority: High Status: Completed Type: Bug
body:
**Describe the bug**
A user with read rights can download files, even though the system has a separate download right.
index: 1.0 | label: priority | binary_label: 1
Unnamed: 0: 718,322 | id: 24,712,101,439 | type: IssuesEvent | created_at: 2022-10-20 02:14:44
repo: AY2223S1-CS2103T-W08-3/tp | repo_url: https://api.github.com/repos/AY2223S1-CS2103T-W08-3/tp | action: closed
title: As a user, I can find contacts by any field I want
labels: priority.High type.Enhancement
body:
...so that I can narrow down my search as much as I want and save even more time.
index: 1.0 | label: priority | binary_label: 1
Unnamed: 0: 479,475 | id: 13,797,409,131 | type: IssuesEvent | created_at: 2020-10-09 22:09:44
repo: aws/aws-app-mesh-roadmap | repo_url: https://api.github.com/repos/aws/aws-app-mesh-roadmap | action: closed
title: Bug: X-Ray trace regression in Envoy image v1.15.0.0-prod
labels: Bug Envoy Docker Image Phase: Working on it Priority: High
body:
**Summary**
The X-Ray traces emitted by Envoy 1.15 are different than previous releases.
**Steps to Reproduce**
*You can use https://github.com/aws/aws-app-mesh-examples/tree/master/walkthroughs/howto-ecs-basics as a test application*
* Set `ENVOY_IMAGE` to `840364872350.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.15.0.0-prod`.
* Follow instructions at least through chapter 3 to set up the mesh.
* Observe that:
* The nodes in the service map are missing the type (segment origin): `AWS::AppMesh::Proxy`.
* The segment names are the full VirtualNode name: `mesh/howto-ecs-basics/virtualNode/howto-ecs-basics-front-node` instead of `howto-ecs-basics/howto-ecs-basics-front-node`.
* The [segment documents](https://docs.aws.amazon.com/xray/latest/devguide/xray-api-segmentdocuments.html) are missing the `aws` metadata:
```
"aws": {
"app_mesh": {
"mesh_name": "howto-ecs-basics",
"virtual_node_name": "howto-ecs-basics-front-node"
}
}
```

**Are you currently working around this issue?**
Using the older Envoy image: `840364872350.dkr.ecr.<region>.amazonaws.com/aws-appmesh-envoy:v1.12.5.0-prod` does not reproduce this behavior.
index: 1.0 | label: priority | binary_label: 1
Unnamed: 0: 525,224 | id: 15,241,126,929 | type: IssuesEvent | created_at: 2021-02-19 07:56:08
repo: StrangeLoopGames/EcoIssues | repo_url: https://api.github.com/repos/StrangeLoopGames/EcoIssues | action: opened
title: [0.9.3] Craft station UI can be broken
labels: Category: UI Priority: High Type: Bug
body:
Steps to reproduce:
- place a workbench (or any craft station)
- fly away to a distance where the workbench disappears but the chunk which contains this workbench will not unload
- fly back to the workbench and open it. I don't have a cursor (tab mode) when I open the workbench UI, and there are a lot of exceptions in the log file:
```
NullReferenceException: Object reference not set to an instance of an object.
at UnityEngine.Component.GetComponent[T] () [0x00000] in <00000000000000000000000000000000>:0
at UI.CraftingUI.OnShow () [0x00000] in <00000000000000000000000000000000>:0
at UI.WorldObjectUI+PanelData.OnShow () [0x00000] in <00000000000000000000000000000000>:0
at UI.WorldObjectUI.Open (Eco.Shared.Serialization.BSONObject bsonObj) [0x00000] in <00000000000000000000000000000000>:0
at UI.UIManager.Open (System.String guiName, Eco.Shared.Serialization.BSONObject bson, UI.UILayer layer, System.Boolean singleInstance) [0x00000] in <00000000000000000000000000000000>:0
at GenericClient`1[T].OpenUI (System.String uiName, Eco.Shared.Serialization.BSONObject bsonObj) [0x00000] in <00000000000000000000000000000000>:0
at System.Reflection.MonoMethod.Invoke (System.Object obj, System.Reflection.BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <00000000000000000000000000000000>:0
at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in <00000000000000000000000000000000>:0
at System.Comparison`1[T].Invoke (T x, T y) [0x00000] in <00000000000000000000000000000000>:0
at Eco.Shared.Networking.RPCManager.TryInvoke (Eco.Shared.Networking.INetClient client, System.Object target, System.String methodname, Eco.Shared.Serialization.BSONArray bsonArgs, System.Object& result) [0x00000] in <00000000000000000000000000000000>:0
at Eco.Shared.Networking.RPCManager.InvokeOn (Eco.Shared.Networking.INetClient client, Eco.Shared.Serialization.BSONObject bson, System.Object target, System.String methodname) [0x00000] in <00000000000000000000000000000000>:0
at Eco.Shared.Networking.RPCManager.HandleReceiveRPC (Eco.Shared.Networking.INetClient client, Eco.Shared.Serialization.BSONObject bson) [0x00000] in <00000000000000000000000000000000>:0
at NetworkManager.Eco.Shared.Networking.INetworkEventHandler.ReceiveEvent (Eco.Shared.Networking.INetClient client, Eco.Shared.Networking.NetworkEvent netEvent, Eco.Shared.Serialization.BSONObject bson) [0x00000] in <00000000000000000000000000000000>:0
at Eco.Shared.Networking.NetObject.ReceiveEvent (Eco.Shared.Networking.INetClient client, Eco.Shared.Networking.NetworkEvent netEvent, Eco.Shared.Serialization.BSONObject bsonObj) [0x00000] in <00000000000000000000000000000000>:0
at ClientPacketQueueHandler.TryFetchNextClientUpdate (Eco.Shared.Time.TimeLimit timeLimit) [0x00000] in <00000000000000000000000000000000>:0
at ClientPacketHandler.HandleNetworkEvents (Eco.Shared.Time.TimeLimit timeLimit) [0x00000] in <00000000000000000000000000000000>:0
at System.Action`1[T].Invoke (T obj) [0x00000] in <00000000000000000000000000000000>:0
at FramePlanner.PlannerGroup.OnUpdate () [0x00000] in <00000000000000000000000000000000>:0
at FramePlanner.FramePlannerSystem.OnUpdate () [0x00000] in <00000000000000000000000000000000>:0
at Unity.Entities.ComponentSystem.Update () [0x00000] in <00000000000000000000000000000000>:0
at Unity.Entities.ComponentSystemGroup.UpdateAllSystems () [0x00000] in <00000000000000000000000000000000>:0
at Unity.Entities.ComponentSystem.Update () [0x00000] in <00000000000000000000000000000000>:0
at System.Action.Invoke () [0x00000] in <00000000000000000000000000000000>:0
Rethrow as TargetInvocationException: Exception has been thrown by the target of an invocation.
at System.Reflection.MonoMethod.Invoke (System.Object obj, System.Reflection.BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <00000000000000000000000000000000>:0
at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in <00000000000000000000000000000000>:0
at System.Comparison`1[T].Invoke (T x, T y) [0x00000] in <00000000000000000000000000000000>:0
at Eco.Shared.Networking.RPCManager.TryInvoke (Eco.Shared.Networking.INetClient client, System.Object target, System.String methodname, Eco.Shared.Serialization.BSONArray bsonArgs, System.Object& result) [0x00000] in <00000000000000000000000000000000>:0
at Eco.Shared.Networking.RPCManager.InvokeOn (Eco.Shared.Networking.INetClient client, Eco.Shared.Serialization.BSONObject bson, System.Object target, System.String methodname) [0x00000] in <00000000000000000000000000000000>:0
at Eco.Shared.Networking.RPCManager.HandleReceiveRPC (Eco.Shared.Networking.INetClient client, Eco.Shared.Serialization.BSONObject bson) [0x00000] in <00000000000000000000000000000000>:0
at NetworkManager.Eco.Shared.Networking.INetworkEventHandler.ReceiveEvent (Eco.Shared.Networking.INetClient client, Eco.Shared.Networking.NetworkEvent netEvent, Eco.Shared.Serialization.BSONObject bson) [0x00000] in <00000000000000000000000000000000>:0
at Eco.Shared.Networking.NetObject.ReceiveEvent (Eco.Shared.Networking.INetClient client, Eco.Shared.Networking.NetworkEvent netEvent, Eco.Shared.Serialization.BSONObject bsonObj) [0x00000] in <00000000000000000000000000000000>:0
at ClientPacketQueueHandler.TryFetchNextClientUpdate (Eco.Shared.Time.TimeLimit timeLimit) [0x00000] in <00000000000000000000000000000000>:0
at ClientPacketHandler.HandleNetworkEvents (Eco.Shared.Time.TimeLimit timeLimit) [0x00000] in <00000000000000000000000000000000>:0
at System.Action`1[T].Invoke (T obj) [0x00000] in <00000000000000000000000000000000>:0
at FramePlanner.PlannerGroup.OnUpdate () [0x00000] in <00000000000000000000000000000000>:0
at FramePlanner.FramePlannerSystem.OnUpdate () [0x00000] in <00000000000000000000000000000000>:0
at Unity.Entities.ComponentSystem.Update () [0x00000] in <00000000000000000000000000000000>:0
at Unity.Entities.ComponentSystemGroup.UpdateAllSystems () [0x00000] in <00000000000000000000000000000000>:0
at Unity.Entities.ComponentSystem.Update () [0x00000] in <00000000000000000000000000000000>:0
at System.Action.Invoke () [0x00000] in <00000000000000000000000000000000>:0
UnityEngine.Logger:LogException(Exception, Object)
UnityEngine.Debug:LogException(Exception)
FramePlanner.PlannerGroup:OnUpdate()
FramePlanner.FramePlannerSystem:OnUpdate()
Unity.Entities.ComponentSystem:Update()
Unity.Entities.ComponentSystemGroup:UpdateAllSystems()
Unity.Entities.ComponentSystem:Update()
System.Action:Invoke()
```
[Player.log](https://github.com/StrangeLoopGames/EcoIssues/files/6008441/Player.log)
If it doesn't happen the first time, just try to repeat it.
Video: https://drive.google.com/file/d/1ZglupNjl5F9cYCu745Y_P0G9Jpjqxzrk/view?usp=sharing
index: 1.0 | label: priority | binary_label: 1
387,447
| 11,461,542,778
|
IssuesEvent
|
2020-02-07 12:11:48
|
robotframework/robotframework
|
https://api.github.com/repos/robotframework/robotframework
|
opened
|
Remove Python 2 and Python 3.5 support
|
backwards incompatible enhancement priority: high
|
Python 2 [will be officially retired in April, 2020](https://www.python.org/psf/press-release/pr20191220/). Robot Framework continuing its support much longer does not make sense, and the [Robot Framework Foundation](https://robotframework.org/foundation/) has decided that it will not sponsor Robot Framework development targeting Python 2 anymore in 2021. That means the following:
- Robot Framework 4.0 will not support Python 2 anymore. Its development will most likely start sometime in H2/2020 and the final release is expected for H1/2021.
- Robot Framework 3.2 (currently in beta) and all its minor releases will support Python 2.7. If there would be Robot Framework 3.3, also it would support Python 2.7.
- When Python 2 support is removed, also support for Python 3.5 and older will be removed. This eases development by making it possible to take newer Python features into use, most notably f-strings. Python 3.5 will also [reach its end-of-life in H2/2020](https://devguide.python.org/#status-of-python-branches), well before the expected Robot Framework 4.0 final release.
|
1.0
|
Remove Python 2 and Python 3.5 support - Python 2 [will be officially retired in April, 2020](https://www.python.org/psf/press-release/pr20191220/). Robot Framework continuing its support much longer does not make sense, and the [Robot Framework Foundation](https://robotframework.org/foundation/) has decided that it will not sponsor Robot Framework development targeting Python 2 anymore in 2021. That means the following:
- Robot Framework 4.0 will not support Python 2 anymore. Its development will most likely start sometime in H2/2020 and the final release is expected for H1/2021.
- Robot Framework 3.2 (currently in beta) and all its minor releases will support Python 2.7. If there would be Robot Framework 3.3, also it would support Python 2.7.
- When Python 2 support is removed, also support for Python 3.5 and older will be removed. This eases development by making it possible to take newer Python features into use, most notably f-strings. Python 3.5 will also [reach its end-of-life in H2/2020](https://devguide.python.org/#status-of-python-branches), well before the expected Robot Framework 4.0 final release.
|
priority
|
remove python and python support python robot framework continuing its support much long does not make sense and the has decided that it will not sponsor robot framework development targeting python anymore in that means the following robot framework will not support python anymore its development will most likely start sometime in and the final release is expected for robot framework currently in beta and all its minor releases will support python if there would be robot framework also it would support python when python support is removed also support for python and older will be removed this eases development by making it possible to take into use newer python features most notably f strings python will also a lot before the expected robot framework final release
| 1
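The f-strings mentioned in the last bullet are a Python 3.6+ feature, which is why dropping Python 2 and 3.5 unlocks them; a minimal illustration (the strings here are made up for the example, not taken from the Robot Framework codebase):

```python
name = "Robot Framework"
version = "4.0"

# f-strings (Python 3.6+) interpolate expressions directly inside the literal:
banner = f"{name} {version} drops Python 2 support"

# the equivalent pre-3.6 spelling the codebase is limited to while it still
# supports Python 2.7 / 3.5:
legacy = "{} {} drops Python 2 support".format(name, version)

assert banner == legacy
print(banner)
```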
|
755,512
| 26,431,045,311
|
IssuesEvent
|
2023-01-14 20:28:44
|
bats-core/bats-core
|
https://api.github.com/repos/bats-core/bats-core
|
closed
|
Include all support libraries in the Docker image
|
Type: Enhancement Priority: High Component: Docker Component: Packaging Size: Large
|
**Is your feature request related to a problem? Please describe.**
I am trying to set up a Docker-based validation process using `bats`. Our tests make use of `bats-assert` and `bats-support`, and those are not included in the `bats/bats` Docker image.
**Describe the solution you'd like**
Include all optional libraries in the Docker image. Alternatively, provide an additional image with those included, if adding more libraries to the base one is not desirable.
**Describe alternatives you've considered**
We considered building our own image, but having all libraries in the official Docker image is cleaner and has less potential of becoming stale.
**Additional context**
Thanks for an awesome tool!
|
1.0
|
Include all support libraries in the Docker image - **Is your feature request related to a problem? Please describe.**
I am trying to set up a Docker-based validation process using `bats`. Our tests make use of `bats-assert` and `bats-support`, and those are not included in the `bats/bats` Docker image.
**Describe the solution you'd like**
Include all optional libraries in the Docker image. Alternatively, provide an additional image with those included, if adding more libraries to the base one is not desirable.
**Describe alternatives you've considered**
We considered building our own image, but having all libraries in the official Docker image is cleaner and has less potential of becoming stale.
**Additional context**
Thanks for an awesome tool!
|
priority
|
include all support libraries in the docker image is your feature request related to a problem please describe i am trying to setup a docker based validation process using bats our tests make use of bats assert and bast support and those are not included in the bats bats docker image describe the solution you d like include all optional libraries in the docker image alternatively provide an additional image with those included if adding more libraries to the base one is not desirable describe alternatives you ve considered we considered building our own image but having all libraries in the official docker image is cleaner and has less potential of becoming stale additional context thanks for an awesome tool
| 1
|
610,025
| 18,892,534,736
|
IssuesEvent
|
2021-11-15 14:41:18
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[studio] [studio-ui] Studio search breaks when updating the search term while in a > 1 page
|
bug priority: high CI
|
### Bug Report
#### Crafter CMS Version
3.1.15 and latest 3.1.18 build
#### Date of Build
11/08/2021
#### Describe the bug
Studio search breaks when updating the search term while on a results page beyond the first: no results are shown and in some instances the page becomes unresponsive.
#### To Reproduce
Steps to reproduce the behavior:
1. Create a site based on Editorial
2. Click on the magnifying glass to go to Studio search
3. Pick the last page in the results
4. Enter diet in the search terms
You'll notice that the UI says that there are 3 search results but none is displayed. In some client repos the page becomes unresponsive too.
#### Logs
N/A
#### Screenshots

|
1.0
|
[studio] [studio-ui] Studio search breaks when updating the search term while in a > 1 page - ### Bug Report
#### Crafter CMS Version
3.1.15 and latest 3.1.18 build
#### Date of Build
11/08/2021
#### Describe the bug
Studio search breaks when updating the search term while on a results page beyond the first: no results are shown and in some instances the page becomes unresponsive.
#### To Reproduce
Steps to reproduce the behavior:
1. Create a site based on Editorial
2. Click on the magnifying glass to go to Studio search
3. Pick the last page in the results
4. Enter diet in the search terms
You'll notice that the UI says that there are 3 search results but none is displayed. In some client repos the page becomes unresponsive too.
#### Logs
N/A
#### Screenshots

|
priority
|
studio search breaks when updating the search term while in a page bug report crafter cms version and latest build date of build describe the bug studio search breaks when updating the search term while in a page no results are shown and in some instances the page becomes unresponsive to reproduce steps to reproduce the behavior create a site based on editorial click on the magnifying glass to go to studio search pick the last page in the results enter diet in the search terms you ll notice that the ui says that there are search results but none is displayed in some client repos the page becomes unresponsive too logs n a screenshots
| 1
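The symptom in the report above (searching from the last page of the previous result set, then getting "3 results" with nothing rendered) is the classic signature of a stale page offset surviving a query change; a language-agnostic sketch of the usual fix, written in Python here with hypothetical names, not Studio's actual code:

```python
def update_query(state: dict, new_query: str) -> dict:
    """Reset the result offset whenever the search term changes.

    Without the reset, entering "diet" while positioned on the last page of
    the old results keeps an offset past the 3 matching hits, so the UI
    reports 3 results but renders none.
    """
    if new_query != state["query"]:
        return {"query": new_query, "offset": 0, "limit": state["limit"]}
    return state


# user was on the third page (offset 40) of the previous search, then types a new term
state = {"query": "", "offset": 40, "limit": 20}
state = update_query(state, "diet")
print(state["offset"])  # offset is back at the first page
```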
|
321,464
| 9,798,887,873
|
IssuesEvent
|
2019-06-11 13:22:26
|
EricssonResearch/scott-eu
|
https://api.github.com/repos/EricssonResearch/scott-eu
|
closed
|
Figure out the extensibility mechanism for the Gateway Backend
|
Comp: Gateway Priority: High Status: Review Needed Type: Feature Xtra: Fix Verified
|
I initially thought of OSGi.
Leo has suggested loading a JAR at runtime: https://stackoverflow.com/questions/60764/how-should-i-load-jars-dynamically-at-runtime But this approach would require scanning the whole JAR, loading each of its classes, etc.: https://stackoverflow.com/questions/45166757/loading-classes-and-resources-in-java-9 and https://stackoverflow.com/questions/41932635/scanning-classpath-modulepath-in-runtime-in-java-9
Then I recalled the https://docs.oracle.com/javase/9/docs/api/java/util/ServiceLoader.html class and found the guide https://docs.oracle.com/javase/tutorial/ext/basics/spi.html#introduction. The guide only requires putting a JAR on the classpath at application startup. Why not? We can put the JAR into a Docker volume that gets copied to `/lib/ext`, and Jetty will put it on the classpath at startup: https://www.eclipse.org/jetty/documentation/9.4.x/startup-classpath.html
|
1.0
|
Figure out the extensibility mechanism for the Gateway Backend - I initially thought of OSGi.
Leo has suggested loading a JAR at runtime: https://stackoverflow.com/questions/60764/how-should-i-load-jars-dynamically-at-runtime But this approach would require scanning the whole JAR, loading each of its classes, etc.: https://stackoverflow.com/questions/45166757/loading-classes-and-resources-in-java-9 and https://stackoverflow.com/questions/41932635/scanning-classpath-modulepath-in-runtime-in-java-9
Then I recalled the https://docs.oracle.com/javase/9/docs/api/java/util/ServiceLoader.html class and found the guide https://docs.oracle.com/javase/tutorial/ext/basics/spi.html#introduction. The guide only requires putting a JAR on the classpath at application startup. Why not? We can put the JAR into a Docker volume that gets copied to `/lib/ext`, and Jetty will put it on the classpath at startup: https://www.eclipse.org/jetty/documentation/9.4.x/startup-classpath.html
|
priority
|
figure out the extensibility mechanism for the gateway backend i initially thought of osgi leo has suggested loading a jar at runtime but this approach would require to scan the whole jar load its each of its classes etc and then i have recalled of class and found the guide the guide only requires to put a jar on the classpath at the application startup why not we can put the jar into a docker volume that gets copied to the lib ext and jetty will put it on the classpath at the startup
| 1
|
104,088
| 4,195,052,517
|
IssuesEvent
|
2016-06-25 13:38:53
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
reopened
|
sct_testing fails on neurodebian
|
bug priority: high
|
~~~
Spinal Cord Toolbox (version dev-484e5905c7fa148da396e8ccc58414ca1802df0e)
Running /home/brain/sct/scripts/sct_testing.py -d 1
Downloading testing data...
sct_download_data -d sct_testing_data
Path to testing data: /home/brain/sct_testing_data/data/
Checking test_sct_apply_transfo.....................[OK]
Checking test_sct_check_atlas_integrity.............[OK]
Checking test_sct_compute_mtr.......................[OK]
Checking test_sct_concat_transfo....................[OK]
Checking test_sct_convert...........................[OK]
Checking test_sct_create_mask.......................[OK]
Checking test_sct_crop_image........................[OK]
Checking test_sct_dmri_compute_dti..................[OK]
Checking test_sct_dmri_get_bvalue...................[OK]
Checking test_sct_dmri_transpose_bvecs..............[OK]
Checking test_sct_dmri_moco.........................[OK]
Checking test_sct_dmri_separate_b0_and_dwi..........[OK]
Checking test_sct_extract_metric....................[OK]
Checking test_sct_fmri_compute_tsnr.................[OK]
Checking test_sct_fmri_moco.........................[OK]
Checking test_sct_image.............................[OK]
Checking test_sct_label_utils.......................[OK]
Checking test_sct_label_vertebrae...................Running /home/brain/sct/scripts/sct_label_vertebrae.py -laplacian 0 -o t2_seg_labeled.nii.gz -i /home/brain/sct_testing_data/data/t2/t2.nii.gz -v 1 -s /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz -r 1 -denoise 0 -ofolder sct_label_vertebrae_data_160625093012_560151/ -initz 34,3
Check folder existence...
Create temporary folder...
Create temporary folder...
mkdir tmp.160625093012_893250/
Copying input data to tmp folder...
sct_convert -i /home/brain/sct_testing_data/data/t2/t2.nii.gz -o tmp.160625093012_893250/data.nii
sct_convert -i /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz -o tmp.160625093012_893250/segmentation.nii.gz
Create label to identify disc...
Intel MKL FATAL ERROR: Cannot load libmkl_avx.so or libmkl_def.so.
/home/brain/sct/scripts/sct_utils.py, line 104
[FAIL]
====================================================================================================
sct_label_vertebrae -laplacian 0 -o t2_seg_labeled.nii.gz -i /home/brain/sct_testing_data/data/t2/t2.nii.gz -v 1 -s /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz -r 1 -denoise 0 -ofolder sct_label_vertebrae_data_160625093012_560151/ -initz 34,3
====================================================================================================
ERROR: Function crashed!
Checking test_sct_maths.............................[OK]
Checking test_sct_process_segmentation..............[OK]
Checking test_sct_propseg...........................[OK]
Checking test_sct_register_graymatter...............[OK]
Checking test_sct_register_multimodal...............[OK]
Checking test_sct_register_to_template..............Running /home/brain/sct/scripts/sct_register_to_template.py -c t2 -l /home/brain/sct_testing_data/data/t2/labels.nii.gz -i /home/brain/sct_testing_data/data/t2/t2.nii.gz -t /home/brain/sct_testing_data/data/template/ -v 1 -s /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz -r 0 -param step=1,type=seg,algo=slicereg,metric=MeanSquares,iter=5:step=2,type=seg,algo=bsplinesyn,iter=3,metric=MI:step=3,iter=0 -ofolder sct_register_to_template_data_160625093041_76113/
Check folder existence...
Check folder existence...
Check template files...
OK: /home/brain/sct_testing_data/data/template/template/MNI-Poly-AMU_T2.nii.gz
OK: /home/brain/sct_testing_data/data/template/template/landmarks_center.nii.gz
OK: /home/brain/sct_testing_data/data/template/template/MNI-Poly-AMU_cord.nii.gz
Check parameters:
.. Data: /home/brain/sct_testing_data/data/t2/t2.nii.gz
.. Landmarks: /home/brain/sct_testing_data/data/t2/labels.nii.gz
.. Segmentation: /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz
.. Path template: /home/brain/sct_testing_data/data/template/template
.. Path output: sct_register_to_template_data_160625093041_76113/
.. Output type: 1
.. Remove temp files: 0
Parameters for registration:
Step #1
.. Type #seg
.. Algorithm................ slicereg
.. Metric................... MeanSquares
.. Number of iterations..... 5
.. Shrink factor............ 1
.. Smoothing factor......... 5
.. Gradient step............ 0.5
.. Degree of polynomial..... 3
Step #2
.. Type #seg
.. Algorithm................ bsplinesyn
.. Metric................... MI
.. Number of iterations..... 3
.. Shrink factor............ 1
.. Smoothing factor......... 1
.. Gradient step............ 0.5
.. Degree of polynomial..... 3
Step #3
.. Type #im
.. Algorithm................ syn
.. Metric................... CC
.. Number of iterations..... 0
.. Shrink factor............ 1
.. Smoothing factor......... 0
.. Gradient step............ 0.5
.. Degree of polynomial..... 3
Check if data, segmentation and landmarks are in the same space...
Check input labels...
Create temporary folder...
mkdir tmp.160625093041_218218/
Copying input data to tmp folder and convert to nii...
sct_convert -i /home/brain/sct_testing_data/data/t2/t2.nii.gz -o tmp.160625093041_218218/data.nii
sct_convert -i /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz -o tmp.160625093041_218218/seg.nii.gz
sct_convert -i /home/brain/sct_testing_data/data/t2/labels.nii.gz -o tmp.160625093041_218218/label.nii.gz
sct_convert -i /home/brain/sct_testing_data/data/template/template/MNI-Poly-AMU_T2.nii.gz -o tmp.160625093041_218218/template.nii
sct_convert -i /home/brain/sct_testing_data/data/template/template/MNI-Poly-AMU_cord.nii.gz -o tmp.160625093041_218218/template_seg.nii.gz
sct_convert -i /home/brain/sct_testing_data/data/template/template/landmarks_center.nii.gz -o tmp.160625093041_218218/template_label.nii.gz
Smooth segmentation...
sct_maths -i seg.nii.gz -smooth 0 -o seg_smooth.nii.gz
Resample data to 1mm isotropic...
sct_resample -i data.nii -mm 1.0x1.0x1.0 -x linear -o data_1mm.nii
sct_resample -i seg_smooth.nii.gz -mm 1.0x1.0x1.0 -x linear -o seg_smooth_1mm.nii.gz
Position=(31,44,26) -- Value= 3
Position=(32,9,26) -- Value= 5
Useful notation:
31,44,26,3:32,9,26,5
sct_label_utils -i data_1mm.nii -create 31,44,26,3:32,9,26,5 -v 1 -o label_1mm.nii.gz
Change orientation of input images to RPI...
sct_image -i data_1mm.nii -setorient RPI -o data_1mm_rpi.nii
sct_image -i seg_smooth_1mm.nii.gz -setorient RPI -o seg_smooth_1mm_rpi.nii.gz
sct_image -i label_1mm.nii.gz -setorient RPI -o label_1mm_rpi.nii.gz
sct_crop_image -i seg_smooth_1mm_rpi.nii.gz -o seg_smooth_1mm_rpi_crop.nii.gz -dim 2 -bzmax
Straighten the spinal cord using centerline/segmentation...
sct_straighten_spinalcord -i seg_smooth_1mm_rpi_crop.nii.gz -s seg_smooth_1mm_rpi_crop.nii.gz -o seg_smooth_1mm_rpi_crop_straight.nii.gz -qc 0 -r 0 -v 1
sct_concat_transfo -w warp_straight2curve.nii.gz -d data_1mm_rpi.nii -o warp_straight2curve.nii.gz
Remove unused label on template. Keep only label present in the input label image...
sct_label_utils -i template_label.nii.gz -o template_label.nii.gz -remove label_1mm_rpi.nii.gz
Dilating input labels using 3vox ball radius
sct_maths -i label_1mm_rpi.nii.gz -o label_1mm_rpi_dilate.nii.gz -dilate 3
Running /home/brain/sct/scripts/sct_maths.py -i label_1mm_rpi.nii.gz -o label_1mm_rpi_dilate.nii.gz -dilate 3
Intel MKL FATAL ERROR: Cannot load libmkl_avx.so or libmkl_def.so.
/home/brain/sct/scripts/sct_utils.py, line 104
/home/brain/sct/scripts/sct_utils.py, line 104
[FAIL]
====================================================================================================
sct_register_to_template -c t2 -l /home/brain/sct_testing_data/data/t2/labels.nii.gz -i /home/brain/sct_testing_data/data/t2/t2.nii.gz -t /home/brain/sct_testing_data/data/template/ -v 1 -s /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz -r 0 -param step=1,type=seg,algo=slicereg,metric=MeanSquares,iter=5:step=2,type=seg,algo=bsplinesyn,iter=3,metric=MI:step=3,iter=0 -ofolder sct_register_to_template_data_160625093041_76113/
====================================================================================================
ERROR: Function crashed!
Checking test_sct_resample..........................[OK]
Checking test_sct_segment_graymatter................[OK]
Checking test_sct_smooth_spinalcord.................[OK]
Checking test_sct_straighten_spinalcord.............[OK]
Checking test_sct_warp_template.....................[OK]
Checking test_sct_documentation.....................[OK]
Checking test_sct_dmri_create_noisemask.............[OK]
status: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
Finished! Elapsed time: 141s
~~~
|
1.0
|
sct_testing fails on neurodebian - ~~~
Spinal Cord Toolbox (version dev-484e5905c7fa148da396e8ccc58414ca1802df0e)
Running /home/brain/sct/scripts/sct_testing.py -d 1
Downloading testing data...
sct_download_data -d sct_testing_data
Path to testing data: /home/brain/sct_testing_data/data/
Checking test_sct_apply_transfo.....................[OK]
Checking test_sct_check_atlas_integrity.............[OK]
Checking test_sct_compute_mtr.......................[OK]
Checking test_sct_concat_transfo....................[OK]
Checking test_sct_convert...........................[OK]
Checking test_sct_create_mask.......................[OK]
Checking test_sct_crop_image........................[OK]
Checking test_sct_dmri_compute_dti..................[OK]
Checking test_sct_dmri_get_bvalue...................[OK]
Checking test_sct_dmri_transpose_bvecs..............[OK]
Checking test_sct_dmri_moco.........................[OK]
Checking test_sct_dmri_separate_b0_and_dwi..........[OK]
Checking test_sct_extract_metric....................[OK]
Checking test_sct_fmri_compute_tsnr.................[OK]
Checking test_sct_fmri_moco.........................[OK]
Checking test_sct_image.............................[OK]
Checking test_sct_label_utils.......................[OK]
Checking test_sct_label_vertebrae...................Running /home/brain/sct/scripts/sct_label_vertebrae.py -laplacian 0 -o t2_seg_labeled.nii.gz -i /home/brain/sct_testing_data/data/t2/t2.nii.gz -v 1 -s /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz -r 1 -denoise 0 -ofolder sct_label_vertebrae_data_160625093012_560151/ -initz 34,3
Check folder existence...
Create temporary folder...
Create temporary folder...
mkdir tmp.160625093012_893250/
Copying input data to tmp folder...
sct_convert -i /home/brain/sct_testing_data/data/t2/t2.nii.gz -o tmp.160625093012_893250/data.nii
sct_convert -i /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz -o tmp.160625093012_893250/segmentation.nii.gz
Create label to identify disc...
Intel MKL FATAL ERROR: Cannot load libmkl_avx.so or libmkl_def.so.
/home/brain/sct/scripts/sct_utils.py, line 104
[FAIL]
====================================================================================================
sct_label_vertebrae -laplacian 0 -o t2_seg_labeled.nii.gz -i /home/brain/sct_testing_data/data/t2/t2.nii.gz -v 1 -s /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz -r 1 -denoise 0 -ofolder sct_label_vertebrae_data_160625093012_560151/ -initz 34,3
====================================================================================================
ERROR: Function crashed!
Checking test_sct_maths.............................[OK]
Checking test_sct_process_segmentation..............[OK]
Checking test_sct_propseg...........................[OK]
Checking test_sct_register_graymatter...............[OK]
Checking test_sct_register_multimodal...............[OK]
Checking test_sct_register_to_template..............Running /home/brain/sct/scripts/sct_register_to_template.py -c t2 -l /home/brain/sct_testing_data/data/t2/labels.nii.gz -i /home/brain/sct_testing_data/data/t2/t2.nii.gz -t /home/brain/sct_testing_data/data/template/ -v 1 -s /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz -r 0 -param step=1,type=seg,algo=slicereg,metric=MeanSquares,iter=5:step=2,type=seg,algo=bsplinesyn,iter=3,metric=MI:step=3,iter=0 -ofolder sct_register_to_template_data_160625093041_76113/
Check folder existence...
Check folder existence...
Check template files...
OK: /home/brain/sct_testing_data/data/template/template/MNI-Poly-AMU_T2.nii.gz
OK: /home/brain/sct_testing_data/data/template/template/landmarks_center.nii.gz
OK: /home/brain/sct_testing_data/data/template/template/MNI-Poly-AMU_cord.nii.gz
Check parameters:
.. Data: /home/brain/sct_testing_data/data/t2/t2.nii.gz
.. Landmarks: /home/brain/sct_testing_data/data/t2/labels.nii.gz
.. Segmentation: /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz
.. Path template: /home/brain/sct_testing_data/data/template/template
.. Path output: sct_register_to_template_data_160625093041_76113/
.. Output type: 1
.. Remove temp files: 0
Parameters for registration:
Step #1
.. Type #seg
.. Algorithm................ slicereg
.. Metric................... MeanSquares
.. Number of iterations..... 5
.. Shrink factor............ 1
.. Smoothing factor......... 5
.. Gradient step............ 0.5
.. Degree of polynomial..... 3
Step #2
.. Type #seg
.. Algorithm................ bsplinesyn
.. Metric................... MI
.. Number of iterations..... 3
.. Shrink factor............ 1
.. Smoothing factor......... 1
.. Gradient step............ 0.5
.. Degree of polynomial..... 3
Step #3
.. Type #im
.. Algorithm................ syn
.. Metric................... CC
.. Number of iterations..... 0
.. Shrink factor............ 1
.. Smoothing factor......... 0
.. Gradient step............ 0.5
.. Degree of polynomial..... 3
Check if data, segmentation and landmarks are in the same space...
Check input labels...
Create temporary folder...
mkdir tmp.160625093041_218218/
Copying input data to tmp folder and convert to nii...
sct_convert -i /home/brain/sct_testing_data/data/t2/t2.nii.gz -o tmp.160625093041_218218/data.nii
sct_convert -i /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz -o tmp.160625093041_218218/seg.nii.gz
sct_convert -i /home/brain/sct_testing_data/data/t2/labels.nii.gz -o tmp.160625093041_218218/label.nii.gz
sct_convert -i /home/brain/sct_testing_data/data/template/template/MNI-Poly-AMU_T2.nii.gz -o tmp.160625093041_218218/template.nii
sct_convert -i /home/brain/sct_testing_data/data/template/template/MNI-Poly-AMU_cord.nii.gz -o tmp.160625093041_218218/template_seg.nii.gz
sct_convert -i /home/brain/sct_testing_data/data/template/template/landmarks_center.nii.gz -o tmp.160625093041_218218/template_label.nii.gz
Smooth segmentation...
sct_maths -i seg.nii.gz -smooth 0 -o seg_smooth.nii.gz
Resample data to 1mm isotropic...
sct_resample -i data.nii -mm 1.0x1.0x1.0 -x linear -o data_1mm.nii
sct_resample -i seg_smooth.nii.gz -mm 1.0x1.0x1.0 -x linear -o seg_smooth_1mm.nii.gz
Position=(31,44,26) -- Value= 3
Position=(32,9,26) -- Value= 5
Useful notation:
31,44,26,3:32,9,26,5
sct_label_utils -i data_1mm.nii -create 31,44,26,3:32,9,26,5 -v 1 -o label_1mm.nii.gz
Change orientation of input images to RPI...
sct_image -i data_1mm.nii -setorient RPI -o data_1mm_rpi.nii
sct_image -i seg_smooth_1mm.nii.gz -setorient RPI -o seg_smooth_1mm_rpi.nii.gz
sct_image -i label_1mm.nii.gz -setorient RPI -o label_1mm_rpi.nii.gz
sct_crop_image -i seg_smooth_1mm_rpi.nii.gz -o seg_smooth_1mm_rpi_crop.nii.gz -dim 2 -bzmax
Straighten the spinal cord using centerline/segmentation...
sct_straighten_spinalcord -i seg_smooth_1mm_rpi_crop.nii.gz -s seg_smooth_1mm_rpi_crop.nii.gz -o seg_smooth_1mm_rpi_crop_straight.nii.gz -qc 0 -r 0 -v 1
sct_concat_transfo -w warp_straight2curve.nii.gz -d data_1mm_rpi.nii -o warp_straight2curve.nii.gz
Remove unused label on template. Keep only label present in the input label image...
sct_label_utils -i template_label.nii.gz -o template_label.nii.gz -remove label_1mm_rpi.nii.gz
Dilating input labels using 3vox ball radius
sct_maths -i label_1mm_rpi.nii.gz -o label_1mm_rpi_dilate.nii.gz -dilate 3
Running /home/brain/sct/scripts/sct_maths.py -i label_1mm_rpi.nii.gz -o label_1mm_rpi_dilate.nii.gz -dilate 3
Intel MKL FATAL ERROR: Cannot load libmkl_avx.so or libmkl_def.so.
/home/brain/sct/scripts/sct_utils.py, line 104
/home/brain/sct/scripts/sct_utils.py, line 104
[FAIL]
====================================================================================================
sct_register_to_template -c t2 -l /home/brain/sct_testing_data/data/t2/labels.nii.gz -i /home/brain/sct_testing_data/data/t2/t2.nii.gz -t /home/brain/sct_testing_data/data/template/ -v 1 -s /home/brain/sct_testing_data/data/t2/t2_seg.nii.gz -r 0 -param step=1,type=seg,algo=slicereg,metric=MeanSquares,iter=5:step=2,type=seg,algo=bsplinesyn,iter=3,metric=MI:step=3,iter=0 -ofolder sct_register_to_template_data_160625093041_76113/
====================================================================================================
ERROR: Function crashed!
Checking test_sct_resample..........................[OK]
Checking test_sct_segment_graymatter................[OK]
Checking test_sct_smooth_spinalcord.................[OK]
Checking test_sct_straighten_spinalcord.............[OK]
Checking test_sct_warp_template.....................[OK]
Checking test_sct_documentation.....................[OK]
Checking test_sct_dmri_create_noisemask.............[OK]
status: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
Finished! Elapsed time: 141s
~~~
|
priority
|
sct testing fails on neurodebian spinal cord toolbox version dev running home brain sct scripts sct testing py d downloading testing data sct download data d sct testing data path to testing data home brain sct testing data data checking test sct apply transfo checking test sct check atlas integrity checking test sct compute mtr checking test sct concat transfo checking test sct convert checking test sct create mask checking test sct crop image checking test sct dmri compute dti checking test sct dmri get bvalue checking test sct dmri transpose bvecs checking test sct dmri moco checking test sct dmri separate and dwi checking test sct extract metric checking test sct fmri compute tsnr checking test sct fmri moco checking test sct image checking test sct label utils checking test sct label vertebrae running home brain sct scripts sct label vertebrae py laplacian o seg labeled nii gz i home brain sct testing data data nii gz v s home brain sct testing data data seg nii gz r denoise ofolder sct label vertebrae data initz check folder existence create temporary folder create temporary folder mkdir tmp copying input data to tmp folder sct convert i home brain sct testing data data nii gz o tmp data nii sct convert i home brain sct testing data data seg nii gz o tmp segmentation nii gz create label to identify disc intel mkl fatal error cannot load libmkl avx so or libmkl def so home brain sct scripts sct utils py line sct label vertebrae laplacian o seg labeled nii gz i home brain sct testing data data nii gz v s home brain sct testing data data seg nii gz r denoise ofolder sct label vertebrae data initz error function crashed checking test sct maths checking test sct process segmentation checking test sct propseg checking test sct register graymatter checking test sct register multimodal checking test sct register to template running home brain sct scripts sct register to template py c l home brain sct testing data data labels nii gz i home brain sct testing data data 
nii gz t home brain sct testing data data template v s home brain sct testing data data seg nii gz r param step type seg algo slicereg metric meansquares iter step type seg algo bsplinesyn iter metric mi step iter ofolder sct register to template data check folder existence check folder existence check template files ok home brain sct testing data data template template mni poly amu nii gz ok home brain sct testing data data template template landmarks center nii gz ok home brain sct testing data data template template mni poly amu cord nii gz check parameters data home brain sct testing data data nii gz landmarks home brain sct testing data data labels nii gz segmentation home brain sct testing data data seg nii gz path template home brain sct testing data data template template path output sct register to template data output type remove temp files parameters for registration step type seg algorithm slicereg metric meansquares number of iterations shrink factor smoothing factor gradient step degree of polynomial step type seg algorithm bsplinesyn metric mi number of iterations shrink factor smoothing factor gradient step degree of polynomial step type im algorithm syn metric cc number of iterations shrink factor smoothing factor gradient step degree of polynomial check if data segmentation and landmarks are in the same space check input labels create temporary folder mkdir tmp copying input data to tmp folder and convert to nii sct convert i home brain sct testing data data nii gz o tmp data nii sct convert i home brain sct testing data data seg nii gz o tmp seg nii gz sct convert i home brain sct testing data data labels nii gz o tmp label nii gz sct convert i home brain sct testing data data template template mni poly amu nii gz o tmp template nii sct convert i home brain sct testing data data template template mni poly amu cord nii gz o tmp template seg nii gz sct convert i home brain sct testing data data template template landmarks center nii gz o tmp 
template label nii gz smooth segmentation sct maths i seg nii gz smooth o seg smooth nii gz resample data to isotropic sct resample i data nii mm x linear o data nii sct resample i seg smooth nii gz mm x linear o seg smooth nii gz position value position value useful notation sct label utils i data nii create v o label nii gz change orientation of input images to rpi sct image i data nii setorient rpi o data rpi nii sct image i seg smooth nii gz setorient rpi o seg smooth rpi nii gz sct image i label nii gz setorient rpi o label rpi nii gz sct crop image i seg smooth rpi nii gz o seg smooth rpi crop nii gz dim bzmax straighten the spinal cord using centerline segmentation sct straighten spinalcord i seg smooth rpi crop nii gz s seg smooth rpi crop nii gz o seg smooth rpi crop straight nii gz qc r v sct concat transfo w warp nii gz d data rpi nii o warp nii gz remove unused label on template keep only label present in the input label image sct label utils i template label nii gz o template label nii gz remove label rpi nii gz dilating input labels using ball radius sct maths i label rpi nii gz o label rpi dilate nii gz dilate running home brain sct scripts sct maths py i label rpi nii gz o label rpi dilate nii gz dilate intel mkl fatal error cannot load libmkl avx so or libmkl def so home brain sct scripts sct utils py line home brain sct scripts sct utils py line sct register to template c l home brain sct testing data data labels nii gz i home brain sct testing data data nii gz t home brain sct testing data data template v s home brain sct testing data data seg nii gz r param step type seg algo slicereg metric meansquares iter step type seg algo bsplinesyn iter metric mi step iter ofolder sct register to template data error function crashed checking test sct resample checking test sct segment graymatter checking test sct smooth spinalcord checking test sct straighten spinalcord checking test sct warp template checking test sct documentation checking test sct dmri 
create noisemask status finished elapsed time
| 1
|
517,655
| 15,017,718,910
|
IssuesEvent
|
2021-02-01 11:13:16
|
Conjurinc-workato-dev/ldap-sync
|
https://api.github.com/repos/Conjurinc-workato-dev/ldap-sync
|
reopened
|
jira bug 11
|
Bugtype/Functionality ONYX-6559 Severity/High kind/bug priority/Default team/Jason
|
##description
Steps to reproduce:
Current Results: 222
Expected Results:
Error Messages:
Logs: fff
Other Symptoms:
Tenant ID / Pod Number:
##Found in version
11.5
##Workaround Complexity
There's a complex workaround
##Workaround Description
ssss
##Link to JIRA bug
https://ca-il-jira-test.il.cyber-ark.com/browse/ONYX-6559
|
1.0
|
jira bug 11 - ##description
Steps to reproduce:
Current Results: 222
Expected Results:
Error Messages:
Logs: fff
Other Symptoms:
Tenant ID / Pod Number:
##Found in version
11.5
##Workaround Complexity
There's a complex workaround
##Workaround Description
ssss
##Link to JIRA bug
https://ca-il-jira-test.il.cyber-ark.com/browse/ONYX-6559
|
priority
|
jira bug description steps to reproduce current results expected results error messages logs fff other symptoms tenant id pod number found in version workaround complexity there s a complex workaround workaround description ssss link to jira bug
| 1
|
649,243
| 21,260,373,650
|
IssuesEvent
|
2022-04-13 03:04:58
|
RiceShelley/EtherNIC
|
https://api.github.com/repos/RiceShelley/EtherNIC
|
closed
|
Need cocotb RMII Test
|
High Priority sim
|
cocotb doesn't have a testbench for RMII already made.
We'll likely need to create one ourselves
|
1.0
|
Need cocotb RMII Test - cocotb doesn't have a testbench for RMII already made.
We'll likely need to create one ourselves
|
priority
|
need cocotb rmii test cocotb doesn t have a testbench for rmii already made we ll likely need to create one ourselves
| 1
|
604,042
| 18,676,000,013
|
IssuesEvent
|
2021-10-31 15:17:28
|
CMPUT301F21T26/Habit-Tracker
|
https://api.github.com/repos/CMPUT301F21T26/Habit-Tracker
|
closed
|
02.01.01 - Habit Event Core
|
Priority: High Base
|
**Focus**
Habit Events
**Partial US**
As a doer, I want to denote a habit event when I have done a habit as planned.
**Reason**
These instances help hold users accountable, and also act as a means of potentially making memories
**Story Points**
3
**Risk Level**
Low
|
1.0
|
02.01.01 - Habit Event Core - **Focus**
Habit Events
**Partial US**
As a doer, I want to denote a habit event when I have done a habit as planned.
**Reason**
These instances help hold users accountable, and also act as a means of potentially making memories
**Story Points**
3
**Risk Level**
Low
|
priority
|
habit event core focus habit events partial us as a doer i want to denote a habit event when i have done a habit as planned reason these instances help hold users accountable and also act as a means of potentially making memories story points risk level low
| 1
|
586,904
| 17,599,593,185
|
IssuesEvent
|
2021-08-17 10:06:37
|
margaritahumanitarian/helpafamily
|
https://api.github.com/repos/margaritahumanitarian/helpafamily
|
closed
|
DeepSource checks are failing incorrectly
|
bug good first issue help wanted high priority
|
DeepSource is a really amazing tool. Working on this issue is an opportunity to learn how to work with it. I'm happy to grant any additional access permissions needed to contributor(s) interested in working on this.
---
When contributors submit a PR, there is a DeepSource check that appears to fail incorrectly.
The PRs where this occurred are #25 #26 #28 #30 #31 #42.
Some ideas of how to get started:
- [ ] Study PR #25 which is where the check first failed
- [ ] See if fixing any of the other issues identified by DeepSource causes the DeepSource checks to pass
- [ ] If stuck, ask for help in https://discuss.deepsource.io/
|
1.0
|
DeepSource checks are failing incorrectly - DeepSource is a really amazing tool. Working on this issue is an opportunity to learn how to work with it. I'm happy to grant any additional access permissions needed to contributor(s) interested in working on this.
---
When contributors submit a PR, there is a DeepSource check that appears to fail incorrectly.
The PRs where this occurred are #25 #26 #28 #30 #31 #42.
Some ideas of how to get started:
- [ ] Study PR #25 which is where the check first failed
- [ ] See if fixing any of the other issues identified by DeepSource causes the DeepSource checks to pass
- [ ] If stuck, ask for help in https://discuss.deepsource.io/
|
priority
|
deepsource checks are failing incorrectly deepsource is a really amazing tool working on this issue is an opportunity to learn how to work with it i m happy to grant any additional access permissions needed to contributor s interested in working on this when contributors submit a pr there is a deepsource check that appears to fail incorrectly the prs where this occurred are some ideas of how to get started study pr which is where the check first failed see if fixing any of the other issues identified by deepsource causes the deepsource checks to pass if stuck ask for help in
| 1
|
565,756
| 16,768,989,469
|
IssuesEvent
|
2021-06-14 12:41:37
|
codee-team/codee-app
|
https://api.github.com/repos/codee-team/codee-app
|
closed
|
Fix API for plugins
|
enhancement priority:high
|
Now everything works due to the perpetual registration of plugins, themes, and so on. At a minimum, plugins should not be registered, so you need to implement custom receivers for scripts.
|
1.0
|
Fix API for plugins - Now everything works due to the perpetual registration of plugins, themes, and so on. At a minimum, plugins should not be registered, so you need to implement custom receivers for scripts.
|
priority
|
fix api for plugins now everything works due to the perpetual registration of plugins themes and so on at a minimum plugins should not be registered so you need to implement custom receivers for scripts
| 1
|
807,608
| 30,011,606,317
|
IssuesEvent
|
2023-06-26 15:36:30
|
C2DH/ranketwo
|
https://api.github.com/repos/C2DH/ranketwo
|
closed
|
Update yaml metadata fields in _units to reflect new scheme
|
enhancement high priority
|
This issue serves to document the pull request #246 for merging the branch: `feature/card-with-metadata`.
The branch was created by Daniele to adapt the code and fix the display of the new metadata scheme at the units level (as mentioned in #244). The first commits (DG) fixed these problems, and I will now change accordingly the metadata of all the existing lessons in order to align the metadata schema and have displayed: the authors names, the date and the source type each lesson deals with (from a controlled list).
|
1.0
|
Update yaml metadata fields in _units to reflect new scheme - This issue serves to document the pull request #246 for merging the branch: `feature/card-with-metadata`.
The branch was created by Daniele to adapt the code and fix the display of the new metadata scheme at the units level (as mentioned in #244). The first commits (DG) fixed these problems, and I will now change accordingly the metadata of all the existing lessons in order to align the metadata schema and have displayed: the authors names, the date and the source type each lesson deals with (from a controlled list).
|
priority
|
update yaml metadata fields in units to reflect new scheme this issue serves to document the pull request for merging the branch feature card with metadata the branch was created by daniele to adapt the code and fix the display of the new metadata scheme at the units level as mentioned in the first commits dg fixed these problems and i will now change accordingly the metadata of all the existing lessons in order to align the metadata schema and have displayed the authors names the date and the source type each lesson deals with from a controlled list
| 1
|
700,393
| 24,059,333,645
|
IssuesEvent
|
2022-09-16 20:20:58
|
QSI-BAQS/Jabalizer
|
https://api.github.com/repos/QSI-BAQS/Jabalizer
|
closed
|
Move "cirq_cirquits" to icm
|
high priority
|
We would like to get rid of the `cirq_circuits` directory inside `src`, as it's Python code and it doesn't really belong to the package.
These circuits, however, serve a purpose of being good sample circuits – for testing and example purposes.
A good place to keep them would be [icm package](https://github.com/QSI-BAQS/icm), as it's a dependency for Jabalizer anyway.
Some open questions:
- Which circuits exactly should we keep? WDYT @madhavkrishnan?
|
1.0
|
Move "cirq_cirquits" to icm - We would like to get rid of the `cirq_circuits` directory inside `src`, as it's Python code and it doesn't really belong to the package.
These circuits, however, serve a purpose of being good sample circuits – for testing and example purposes.
A good place to keep them would be [icm package](https://github.com/QSI-BAQS/icm), as it's a dependency for Jabalizer anyway.
Some open questions:
- Which circuits exactly should we keep? WDYT @madhavkrishnan?
|
priority
|
move cirq cirquits to icm we would like to get rid of the cirq circuits directory inside src as it s python code and it doesn t really belong to the package these circuits however serve a purpose of being good sample circuits – for testing and example purposes a good place to keep them would be as it s a dependency for jabalizer anyway some open questions which circuits exactly should we keep wdyt madhavkrishnan
| 1
|
621,343
| 19,583,293,846
|
IssuesEvent
|
2022-01-05 01:22:11
|
loveology/design
|
https://api.github.com/repos/loveology/design
|
reopened
|
Expert Profiles - Update the profile photos, resources, and links
|
enhancement High Priority
|
If we haven't received content from an expert, we'll reach out to them.
Similar
- [x] Need a better picture for Ken Coleman and John Townsend. From Les "I think he’ll spaz if he sees this photo."
- [x] "Meg Meeker has no books."
|
1.0
|
Expert Profiles - Update the profile photos, resources, and links - If we haven't received content from an expert, we'll reach out to them.
Similar
- [x] Need a better picture for Ken Coleman and John Townsend. From Les "I think he’ll spaz if he sees this photo."
- [x] "Meg Meeker has no books."
|
priority
|
expert profiles update the profile photos resources and links if we haven t received content from an expert we ll reach out to them similar need a better picture for ken coleman and john townsend from les i think he’ll spaz if he sees this photo meg meeker has no books
| 1
|
55,623
| 3,074,011,230
|
IssuesEvent
|
2015-08-20 02:45:50
|
canadainc/ilmtest
|
https://api.github.com/repos/canadainc/ilmtest
|
opened
|
Implement random Surah verses question
|
enhancement logic Priority-High task
|
Question: Which of the following ayats are found in Al-Fatiha?
[A] All praise is due to Allah...
[B] Guide us to the straight...
[C] The trees prostrate to...
[D] I seek refuge in the Lord of the daylight...
|
1.0
|
Implement random Surah verses question - Question: Which of the following ayats are found in Al-Fatiha?
[A] All praise is due to Allah...
[B] Guide us to the straight...
[C] The trees prostrate to...
[D] I seek refuge in the Lord of the daylight...
|
priority
|
implement random surah verses question question which of the following ayats are found in al fatiha all praise is due to allah guide us to the straight the trees prostrate to i seek refuge in the lord of the daylight
| 1
|
336,331
| 10,185,991,311
|
IssuesEvent
|
2019-08-10 09:06:53
|
azerothcore/azerothcore-wotlk
|
https://api.github.com/repos/azerothcore/azerothcore-wotlk
|
closed
|
Hunter's Tranquilizing Shot cannot dispell Enrage(Spell ID 19451)
|
Class - Hunter Priority - High
|
<!-- IF YOU DO NOT FILL THIS TEMPLATE OUT, WE WILL CLOSE YOUR ISSUE! -->
<!-- This template is for problem reports, for feature suggestion etc... feel free to edit it.
If this is a crash report, upload the crashlog on https://gist.github.com/
For issues containing a fix, please create a Pull Request following this tutorial: http://www.azerothcore.org/wiki/Contribute#how-to-create-a-pull-request -->
<!-- WRITE A RELEVANT TITLE -->
##### SMALL DESCRIPTION:
Players on my server found this issue when they trying to down Magmadar in Molten Core.
##### EXPECTED BLIZZLIKE BEHAVIOUR:
Tranquilizing Shot should be able to dispell Enrage from Magmadar.
##### CURRENT BEHAVIOUR:
I tried to use GM account with another guy to test both Tranquilizing Shot and Enrage from Magmadar. Tranquilizing Shot can dispell buffs like Berserker Rage from Warrior, but it cannot dispell Enrage from Magmadar. (GM account learned 19451 and test it with a hunter.)
##### STEPS TO REPRODUCE THE PROBLEM:
<!-- Describe precisely how to reproduce the bug so we can fix it or confirm its existence:
- Which commands to use? Which NPC to teleport to?
- Do we need to have debug flags on Cmake?
- Do we need to look at the console while the bug happens?
- Other steps
-->
1. .go c id 11982
2. aggro the boss
3. try Tranquilizing Shot when boss enraged.
4. .learn 19451
5. test it with a hunter using Tranquilizing Shot.
##### EXTRA NOTES:
<!--
Any information that can help the developers to identify and fix the issue should be put here.
Examples:
- was this bug always present in AzerothCore? if it was introduced after a change, please mention it
- the code line(s) that cause the issue
- does this feature work in other server appplications (e.g. CMaNGOS, TrinityCore, etc...) ?
-->
It seems nothing wrong with Spell.dbc, since there is no such issue with TC using the same dbc files, maybe a bug in scripts?
##### BRANCH(ES):
<!-- Specify the branch(es) affected by this issue: master, 0.x, 1.x, or another branch. -->
master
##### AC HASH/COMMIT:
<!-- IF YOU DO NOT FILL THIS OUT, WE WILL CLOSE YOUR ISSUE! NEVER WRITE "LATEST", ALWAYS PUT THE ACTUAL VALUE INSTEAD.
Find the commit hash (unique identifier) by running "git log" on your own clone of AzerothCore or by looking at here https://github.com/azerothcore/azerothcore-wotlk/commits/master -->
AzerothCore rev. 15bd8f544097 2019-07-20 14:36:14 +0200 (master branch) (Unix, Release) (worldserver-daemon)
##### OPERATING SYSTEM:
<!-- Windows 7/10, Debian 8/9/10, Ubuntu 16/18 etc... -->
CentOS 7
##### MODULES:
<!-- Are you using modules? If yes, list them (note them down in a .txt for opening future issues) -->
no.
##### OTHER CUSTOMIZATIONS:
<!-- Are you using any extra script?
- Did you apply any core patch/diff?
- Did you modify your database?
- Or do you have other customizations? If yes please specify them here.
-->
no.
<!-- ------------------------- THE END ------------------------------
Thank you for your contribution.
If you use AzerothCore regularly, we really NEED your help to:
- TEST our fixes ( http://www.azerothcore.org/wiki/Contribute#how-to-test-a-pull-request )
- Report issues
- Improve the documentation/wiki
With your help the project can evolve much quicker!
-->
<!-- NOTE: If you intend to contribute more than once, you should really join us on our discord channel! We set cosmetic ranks for our contributors and may give access to special resources/knowledge to them! The link is on our site http://azerothcore.org/
-->
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/77531728-hunter-s-tranquilizing-shot-cannot-dispell-enrage-spell-id-19451?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
1.0
|
Hunter's Tranquilizing Shot cannot dispell Enrage(Spell ID 19451) - <!-- IF YOU DO NOT FILL THIS TEMPLATE OUT, WE WILL CLOSE YOUR ISSUE! -->
<!-- This template is for problem reports, for feature suggestion etc... feel free to edit it.
If this is a crash report, upload the crashlog on https://gist.github.com/
For issues containing a fix, please create a Pull Request following this tutorial: http://www.azerothcore.org/wiki/Contribute#how-to-create-a-pull-request -->
<!-- WRITE A RELEVANT TITLE -->
##### SMALL DESCRIPTION:
Players on my server found this issue when they trying to down Magmadar in Molten Core.
##### EXPECTED BLIZZLIKE BEHAVIOUR:
Tranquilizing Shot should be able to dispell Enrage from Magmadar.
##### CURRENT BEHAVIOUR:
I tried to use GM account with another guy to test both Tranquilizing Shot and Enrage from Magmadar. Tranquilizing Shot can dispell buffs like Berserker Rage from Warrior, but it cannot dispell Enrage from Magmadar. (GM account learned 19451 and test it with a hunter.)
##### STEPS TO REPRODUCE THE PROBLEM:
<!-- Describe precisely how to reproduce the bug so we can fix it or confirm its existence:
- Which commands to use? Which NPC to teleport to?
- Do we need to have debug flags on Cmake?
- Do we need to look at the console while the bug happens?
- Other steps
-->
1. .go c id 11982
2. aggro the boss
3. try Tranquilizing Shot when boss enraged.
4. .learn 19451
5. test it with a hunter using Tranquilizing Shot.
##### EXTRA NOTES:
<!--
Any information that can help the developers to identify and fix the issue should be put here.
Examples:
- was this bug always present in AzerothCore? if it was introduced after a change, please mention it
- the code line(s) that cause the issue
- does this feature work in other server appplications (e.g. CMaNGOS, TrinityCore, etc...) ?
-->
It seems nothing wrong with Spell.dbc, since there is no such issue with TC using the same dbc files, maybe a bug in scripts?
##### BRANCH(ES):
<!-- Specify the branch(es) affected by this issue: master, 0.x, 1.x, or another branch. -->
master
##### AC HASH/COMMIT:
<!-- IF YOU DO NOT FILL THIS OUT, WE WILL CLOSE YOUR ISSUE! NEVER WRITE "LATEST", ALWAYS PUT THE ACTUAL VALUE INSTEAD.
Find the commit hash (unique identifier) by running "git log" on your own clone of AzerothCore or by looking at here https://github.com/azerothcore/azerothcore-wotlk/commits/master -->
AzerothCore rev. 15bd8f544097 2019-07-20 14:36:14 +0200 (master branch) (Unix, Release) (worldserver-daemon)
##### OPERATING SYSTEM:
<!-- Windows 7/10, Debian 8/9/10, Ubuntu 16/18 etc... -->
CentOS 7
##### MODULES:
<!-- Are you using modules? If yes, list them (note them down in a .txt for opening future issues) -->
no.
##### OTHER CUSTOMIZATIONS:
<!-- Are you using any extra script?
- Did you apply any core patch/diff?
- Did you modify your database?
- Or do you have other customizations? If yes please specify them here.
-->
no.
<!-- ------------------------- THE END ------------------------------
Thank you for your contribution.
If you use AzerothCore regularly, we really NEED your help to:
- TEST our fixes ( http://www.azerothcore.org/wiki/Contribute#how-to-test-a-pull-request )
- Report issues
- Improve the documentation/wiki
With your help the project can evolve much quicker!
-->
<!-- NOTE: If you intend to contribute more than once, you should really join us on our discord channel! We set cosmetic ranks for our contributors and may give access to special resources/knowledge to them! The link is on our site http://azerothcore.org/
-->
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/77531728-hunter-s-tranquilizing-shot-cannot-dispell-enrage-spell-id-19451?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
priority
|
hunter s tranquilizing shot cannot dispell enrage spell id this template is for problem reports for feature suggestion etc feel free to edit it if this is a crash report upload the crashlog on for issues containing a fix please create a pull request following this tutorial small description players on my server found this issue when they trying to down magmadar in molten core expected blizzlike behaviour tranquilizing shot should be able to dispell enrage from magmadar current behaviour i tried to use gm account with another guy to test both tranquilizing shot and enrage from magmadar tranquilizing shot can dispell buffs like berserker rage from warrior but it cannot dispell enrage from magmadar gm account learned and test it with a hunter steps to reproduce the problem describe precisely how to reproduce the bug so we can fix it or confirm its existence which commands to use which npc to teleport to do we need to have debug flags on cmake do we need to look at the console while the bug happens other steps go c id aggro the boss try tranquilizing shot when boss enraged learn test it with a hunter using tranquilizing shot extra notes any information that can help the developers to identify and fix the issue should be put here examples was this bug always present in azerothcore if it was introduced after a change please mention it the code line s that cause the issue does this feature work in other server appplications e g cmangos trinitycore etc it seems nothing wrong with spell dbc since there is no such issue with tc using the same dbc files maybe a bug in scripts branch es master ac hash commit if you do not fill this out we will close your issue never write latest always put the actual value instead find the commit hash unique identifier by running git log on your own clone of azerothcore or by looking at here azerothcore rev master branch unix release worldserver daemon operating system centos modules no other customizations are you using any extra script did 
you apply any core patch diff did you modify your database or do you have other customizations if yes please specify them here no the end thank you for your contribution if you use azerothcore regularly we really need your help to test our fixes report issues improve the documentation wiki with your help the project can evolve much quicker note if you intend to contribute more than once you should really join us on our discord channel we set cosmetic ranks for our contributors and may give access to special resources knowledge to them the link is on our site want to back this issue we accept bounties via
| 1
|
636,502
| 20,601,900,684
|
IssuesEvent
|
2022-03-06 11:52:09
|
bounswe/bounswe2022group5
|
https://api.github.com/repos/bounswe/bounswe2022group5
|
closed
|
Creating a Discord Channel for weekly group meetings
|
High Priority Category - Communication
|
## Communication Platform: Discord
To have better communication during our weekly meetings, we will use a Discord channel. Discord will be useful in the context of in-meeting documentation especially. Some benefits of using Discord:
- We will be able to access chat for previous meetings.
- We might develop different Discord bots when an according need arises.
|
1.0
|
Creating a Discord Channel for weekly group meetings - ## Communication Platform: Discord
To have better communication during our weekly meetings, we will use a Discord channel. Discord will be useful in the context of in-meeting documentation especially. Some benefits of using Discord:
- We will be able to access chat for previous meetings.
- We might develop different Discord bots when an according need arises.
|
priority
|
creating a discord channel for weekly group meetings communication platform discord to have better communication during our weekly meetings we will use a discord channel discord will be useful in the context of in meeting documentation especially some benefits of using discord we will be able to access chat for previous meetings we might develop different discord bots when an according need arises
| 1
|
316,836
| 9,657,965,467
|
IssuesEvent
|
2019-05-20 09:49:26
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
The "Create new style" popup does not appear in Safari
|
1 Point Priority: High StyleEditor bug geonode_integration
|
### Description
A few sentences describing the overall goals of the issue.
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
(use this site: https://www.whatsmybrowser.org/ for non expert users)
- [ ] Internet Explorer
- [ ] Chrome
- [ ] Firefox
- [x] Safari
*Browser Version Affected*
- Indicate the browser version in which the issue has been found
*Steps to reproduce*
- user needs to login as admin and MapStore configured with GeoServer to edit styles
- open a map with a layer
- select the layer from TOC and open settings
- select the style tab
- click on create new style
- select a template
- click on plus button
*Expected Result*
- The 'Create new style' modal is visible
*Current Result*
- The 'Create new style' modal is not visible
### Other useful information (optional):
While clicking on the plus button "Add selected template to list of styles" the popup does not appear in Safari. It works in Chrome for MacOS.
<img width="1686" alt="Screenshot 2019-04-10 at 12 10 12" src="https://user-images.githubusercontent.com/3024454/55870871-cb3e0f80-5b89-11e9-8d5a-9e8f2498c0c3.png">
|
1.0
|
The "Create new style" popup does not appear in Safari - ### Description
A few sentences describing the overall goals of the issue.
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
(use this site: https://www.whatsmybrowser.org/ for non expert users)
- [ ] Internet Explorer
- [ ] Chrome
- [ ] Firefox
- [x] Safari
*Browser Version Affected*
- Indicate the browser version in which the issue has been found
*Steps to reproduce*
- user needs to login as admin and MapStore configured with GeoServer to edit styles
- open a map with a layer
- select the layer from TOC and open settings
- select the style tab
- click on create new style
- select a template
- click on plus button
*Expected Result*
- The 'Create new style' modal is visible
*Current Result*
- The 'Create new style' modal is not visible
### Other useful information (optional):
While clicking on the plus button "Add selected template to list of styles" the popup does not appear in Safari. It works in Chrome for MacOS.
<img width="1686" alt="Screenshot 2019-04-10 at 12 10 12" src="https://user-images.githubusercontent.com/3024454/55870871-cb3e0f80-5b89-11e9-8d5a-9e8f2498c0c3.png">
|
priority
|
the create new style popup does not appear in safari description a few sentences describing the overall goals of the issue in case of bug otherwise remove this paragraph browser affected use this site for non expert users internet explorer chrome firefox safari browser version affected indicate the browser version in which the issue has been found steps to reproduce user needs to login as admin and mapstore configured with geoserver to edit styles open a map with a layer select the layer from toc and open settings select the style tab click on create new style select a template click on plus button expected result the create new style modal is visible current result the create new style modal is not visible other useful information optional while clicking on the plus button add selected template to list of styles the popup does not appear in safari it works in chrome for macos img width alt screenshot at src
| 1
|
29,005
| 2,712,810,355
|
IssuesEvent
|
2015-04-09 15:45:03
|
mavoine/tarsius
|
https://api.github.com/repos/mavoine/tarsius
|
closed
|
import: add a few options
|
auto-migrated Priority-High Type-Enhancement
|
```
Add some options for the import dialog.
File ops:
* Copy photos to the gallery
* Move photos to the gallery
Organization:
* Organize my photos for me (folder hierarchy like yyyy/mm/dd)
* Let me choose where I want them (browse...)
If a photo is already under the gallery's photo directory, ask the user
what he wants to do.
```
Original issue reported on code.google.com by `avoin...@gmail.com` on 19 Jan 2010 at 6:23
|
1.0
|
import: add a few options - ```
Add some options for the import dialog.
File ops:
* Copy photos to the gallery
* Move photos to the gallery
Organization:
* Organize my photos for me (folder hierarchy like yyyy/mm/dd)
* Let me choose where I want them (browse...)
If a photo is already under the gallery's photo directory, ask the user
what he wants to do.
```
Original issue reported on code.google.com by `avoin...@gmail.com` on 19 Jan 2010 at 6:23
|
priority
|
import add a few options add some options for the import dialog file ops copy photos to the gallery move photos to the gallery organization organize my photos for me folder hierarchy like yyyy mm dd let me choose where i want them browse if a photo is already under the gallery s photo directory ask the user what he wants to do original issue reported on code google com by avoin gmail com on jan at
| 1
|
370,197
| 10,926,633,860
|
IssuesEvent
|
2019-11-22 15:06:15
|
woocommerce/woocommerce-admin
|
https://api.github.com/repos/woocommerce/woocommerce-admin
|
closed
|
Analytics Terms: Rename Total Sales, Recalculate Gross Sales
|
Analytics [Priority] High
|
Make changes outlined in p90Yrv-1ef-p2 to modify Analytics terms. Previously attempted in https://github.com/woocommerce/woocommerce-admin/pull/3104, the changes will require a database update.
* Rename `gross_total` to `total_sales`
* Create a new column `gross_sales` defined as
```
gross_sales = total_sales + refunds + coupons - tax - shipping
```
* Reintroduce the `coupon_total` column, or figure out how to use the JOIN to calculate the value.
|
1.0
|
Analytics Terms: Rename Total Sales, Recalculate Gross Sales - Make changes outlined in p90Yrv-1ef-p2 to modify Analytics terms. Previously attempted in https://github.com/woocommerce/woocommerce-admin/pull/3104, the changes will require a database update.
* Rename `gross_total` to `total_sales`
* Create a new column `gross_sales` defined as
```
gross_sales = total_sales + refunds + coupons - tax - shipping
```
* Reintroduce the `coupon_total` column, or figure out how to use the JOIN to calculate the value.
|
priority
|
analytics terms rename total sales recalculate gross sales make changes outlined in to modify analytics terms previously attempted in the changes will require a database update rename gross total to total sales create a new column gross sales defined as gross sales total sales refunds coupons tax shipping reintroduce the coupon total column or figure out how to use the join to calculate the value
| 1
|
170,158
| 6,425,336,455
|
IssuesEvent
|
2017-08-09 15:13:22
|
fossasia/open-event-orga-server
|
https://api.github.com/repos/fossasia/open-event-orga-server
|
closed
|
[Sentry]: (psycopg2.ProgrammingError) relation "export_jobs" already exists
|
bug has-PR Priority: High
|
```
ProgrammingError: (psycopg2.ProgrammingError) relation "export_jobs" already exists
[SQL: '\nCREATE TABLE export_jobs (\n\tid SERIAL NOT NULL, \n\ttask VARCHAR NOT NULL, \n\tstarts_at TIMESTAMP WITH TIME ZONE, \n\tuser_email VARCHAR, \n\tevent_id INTEGER, \n\tPRIMARY KEY (id), \n\tFOREIGN KEY(event_id) REFERENCES events (id) ON DELETE CASCADE\n)\n\n']
(25 additional frame(s) were not displayed)
...
File "sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "sqlalchemy/engine/base.py", line 1402, in _handle_dbapi_exception
exc_info
File "sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
```
|
1.0
|
[Sentry]: (psycopg2.ProgrammingError) relation "export_jobs" already exists - ```
ProgrammingError: (psycopg2.ProgrammingError) relation "export_jobs" already exists
[SQL: '\nCREATE TABLE export_jobs (\n\tid SERIAL NOT NULL, \n\ttask VARCHAR NOT NULL, \n\tstarts_at TIMESTAMP WITH TIME ZONE, \n\tuser_email VARCHAR, \n\tevent_id INTEGER, \n\tPRIMARY KEY (id), \n\tFOREIGN KEY(event_id) REFERENCES events (id) ON DELETE CASCADE\n)\n\n']
(25 additional frame(s) were not displayed)
...
File "sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "sqlalchemy/engine/base.py", line 1402, in _handle_dbapi_exception
exc_info
File "sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
```
|
priority
|
programmingerror relation export jobs already exists programmingerror programmingerror relation export jobs already exists additional frame s were not displayed file sqlalchemy engine base py line in execute context context file sqlalchemy engine base py line in handle dbapi exception exc info file sqlalchemy util compat py line in raise from cause reraise type exception exception tb exc tb cause cause file sqlalchemy engine base py line in execute context context file sqlalchemy engine default py line in do execute cursor execute statement parameters
| 1
|
818,399
| 30,687,339,665
|
IssuesEvent
|
2023-07-26 13:11:03
|
ladybirdweb/agora-invoicing-community
|
https://api.github.com/repos/ladybirdweb/agora-invoicing-community
|
closed
|
Login page Text field height
|
Bug High Priority UI/UX size/XS
|
<img width="495" alt="Screenshot 2023-07-11 at 11 28 20 PM" src="https://github.com/ladybirdweb/agora-invoicing-community/assets/240898/c112df4f-6647-4684-83b1-9147d53f65ba">
- Height of text field is looking smaller in comparison to other places
May be it's and illusion, please check and confirm once
|
1.0
|
Login page Text field height - <img width="495" alt="Screenshot 2023-07-11 at 11 28 20 PM" src="https://github.com/ladybirdweb/agora-invoicing-community/assets/240898/c112df4f-6647-4684-83b1-9147d53f65ba">
- Height of text field is looking smaller in comparison to other places
May be it's and illusion, please check and confirm once
|
priority
|
login page text field height img width alt screenshot at pm src height of text field is looking smaller in comparison to other places may be it s and illusion please check and confirm once
| 1
|
641,163
| 20,819,362,848
|
IssuesEvent
|
2022-03-18 13:55:46
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
closed
|
[UI] Missing space between devportal pending application message
|
Type/Bug Priority/High React-UI APIM - 4.1.0
|
### Description:
See below image.

### Steps to reproduce:
1. Enable workflow approval for application creation
2. Create an application
Check the application list
|
1.0
|
[UI] Missing space between devportal pending application message - ### Description:
See below image.

### Steps to reproduce:
1. Enable workflow approval for application creation
2. Create an application
Check the application list
|
priority
|
missing space between devportal pending application message description see below image steps to reproduce enable workflow approval for application creation create an application check the application list
| 1
|
684,117
| 23,407,628,276
|
IssuesEvent
|
2022-08-12 14:17:57
|
Jexactyl/Jexactyl
|
https://api.github.com/repos/Jexactyl/Jexactyl
|
closed
|
Admin pages takes way too long to load
|
bug High priority
|
### Current Behavior
If more than 20 or 100 servers are on the panel the /admin might take 6 secs to over 20 secs long to load which is insane!
### Expected Behavior
I think its pretty obvious here that it should take under 1 sec to load
### Steps to Reproduce
Get 20 or over 100 servers to run on it and most of them should be online
### Panel Version
3.3.1
### Wings Version
1.7.0
### Games and/or Eggs Affected
_No response_
### Docker Image
_No response_
### Error Logs
_No response_
### Is there an existing issue for this?
- [X] I have searched the existing issues before opening this issue.
- [X] I have provided all relevant details, including the specific game and Docker images I am using if this issue is related to running a server.
- [X] I have checked in the Discord server and believe this is a bug with the software, and not a configuration issue with my specific system.
|
1.0
|
Admin pages takes way too long to load - ### Current Behavior
If more than 20 or 100 servers are on the panel the /admin might take 6 secs to over 20 secs long to load which is insane!
### Expected Behavior
I think its pretty obvious here that it should take under 1 sec to load
### Steps to Reproduce
Get 20 or over 100 servers to run on it and most of them should be online
### Panel Version
3.3.1
### Wings Version
1.7.0
### Games and/or Eggs Affected
_No response_
### Docker Image
_No response_
### Error Logs
_No response_
### Is there an existing issue for this?
- [X] I have searched the existing issues before opening this issue.
- [X] I have provided all relevant details, including the specific game and Docker images I am using if this issue is related to running a server.
- [X] I have checked in the Discord server and believe this is a bug with the software, and not a configuration issue with my specific system.
|
priority
|
admin pages takes way too long to load current behavior if more than or servers are on the panel the admin might take secs to over secs long to load which is insane expected behavior i think its pretty obvious here that it should take under sec to load steps to reproduce get or over servers to run on it and most of them should be online panel version wings version games and or eggs affected no response docker image no response error logs no response is there an existing issue for this i have searched the existing issues before opening this issue i have provided all relevant details including the specific game and docker images i am using if this issue is related to running a server i have checked in the discord server and believe this is a bug with the software and not a configuration issue with my specific system
| 1
|
227,585
| 7,539,622,314
|
IssuesEvent
|
2018-04-17 01:36:45
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
Connection Failed: "Invalid login info. Invalid parameter".
|
High Priority
|
I successfully created a game on LAN to co-op with my buddy, but my name has always been "user" in the game. I cannot figure out how to change it, nor am I able to log in or change my password to the eco servers in game, and I cannot join any multiplayer games on other servers, or else I get the error shown in title. I created an account on the Eco website, bruisefest. Game is run through Steam.
|
1.0
|
Connection Failed: "Invalid login info. Invalid parameter". - I successfully created a game on LAN to co-op with my buddy, but my name has always been "user" in the game. I cannot figure out how to change it, nor am I able to log in or change my password to the eco servers in game, and I cannot join any multiplayer games on other servers, or else I get the error shown in title. I created an account on the Eco website, bruisefest. Game is run through Steam.
|
priority
|
connection failed invalid login info invalid parameter i successfully created a game on lan to co op with my buddy but my name has always been user in the game i cannot figure out how to change it nor am i able to log in or change my password to the eco servers in game and i cannot join any multiplayer games on other servers or else i get the error shown in title i created an account on the eco website bruisefest game is run through steam
| 1
|
352,043
| 10,526,973,494
|
IssuesEvent
|
2019-09-30 18:16:43
|
boston-microgreens/grow-app-project
|
https://api.github.com/repos/boston-microgreens/grow-app-project
|
opened
|
Customer login & account creation
|
back-end front-end priority-high
|
- Allow customers to register an account with orders app
- Allow customers to securely log in with their credentials
|
1.0
|
Customer login & account creation - - Allow customers to register an account with orders app
- Allow customers to securely log in with their credentials
|
priority
|
customer login account creation allow customers to register an account with orders app allow customers to securely log in with their credentials
| 1
|
581,577
| 17,296,467,033
|
IssuesEvent
|
2021-07-25 20:44:14
|
DistributedCollective/Sovryn-smart-contracts
|
https://api.github.com/repos/DistributedCollective/Sovryn-smart-contracts
|
closed
|
Fix unmatch expectRevert error message
|
high priority maintenance
|
For hardhat test, we don't need to add "revert" keyword at the start of expected error message in expectRevert function
|
1.0
|
Fix unmatch expectRevert error message - For hardhat test, we don't need to add "revert" keyword at the start of expected error message in expectRevert function
|
priority
|
fix unmatch expectrevert error message for hardhat test we don t need to add revert keyword at the start of expected error message in expectrevert function
| 1
|
165,146
| 6,264,286,855
|
IssuesEvent
|
2017-07-16 06:13:20
|
botpress/botpress
|
https://api.github.com/repos/botpress/botpress
|
opened
|
Notifications and logs should be moved to database
|
bug priority/high
|
They are currently stored on the file system
|
1.0
|
Notifications and logs should be moved to database - They are currently stored on the file system
|
priority
|
notifications and logs should be moved to database they are currently stored on the file system
| 1
|
260,567
| 8,211,761,491
|
IssuesEvent
|
2018-09-04 14:36:35
|
hpcugent/vsc_user_docs
|
https://api.github.com/repos/hpcugent/vsc_user_docs
|
closed
|
section of "module swap cluster" should stand out more & be updated/extended
|
Jasper (HPC-UGent student intern) priority:high
|
* currently still mentions `cluster/raichu`, which no longer exists
* should also mention `module avail cluster/`
* to clarify: output of `qsub`, `qstat`, etc. depends on which `cluster` module is loaded
|
1.0
|
section of "module swap cluster" should stand out more & be updated/extended - * currently still mentions `cluster/raichu`, which no longer exists
* should also mention `module avail cluster/`
* to clarify: output of `qsub`, `qstat`, etc. depends on which `cluster` module is loaded
|
priority
|
section of module swap cluster should stand out more be updated extended currently still mentions cluster raichu which no longer exists should also mention module avail cluster to clarify output of qsub qstat etc depends on which cluster module is loaded
| 1
|
491,995
| 14,174,945,437
|
IssuesEvent
|
2020-11-12 20:44:58
|
Sage-Bionetworks/sageseqr
|
https://api.github.com/repos/Sage-Bionetworks/sageseqr
|
closed
|
Implement workaround to make() wrapped in a function
|
high priority wontfix
|
When function wraps `make()`, the environment must be passed explicitly.
|
1.0
|
Implement workaround to make() wrapped in a function - When function wraps `make()`, the environment must be passed explicitly.
|
priority
|
implement workaround to make wrapped in a function when function wraps make the environment must be passed explicitly
| 1
|
1,790
| 2,519,831,875
|
IssuesEvent
|
2015-01-18 12:20:27
|
SiCKRAGETV/sickrage-issues
|
https://api.github.com/repos/SiCKRAGETV/sickrage-issues
|
closed
|
Crash While Viewing General Settings
|
1: Bug / issue 2: High Priority 5: Duplicate branch: develop
|
Branch: Develop
OS: FreeBSD 9.3-RELEASE-p5
commit: 8b84e4f4bc6cdebdc605438e9e75d15f1133d65e
Python Version: 2.7.9 (default, Jan 9 2015, 14:27:53) [GCC 4.2.1 20070831 patched [FreeBSD]]
What I do: I try to navigate to /config/general/ via anyway possible (directly by typing or via the buttons)
What I expect: it brings me to the general settings page
What it does: crashes
Repeatable: yes
Sickbeard log:
2015-01-11 22:48:11 Thread-112 :: Failed doing webui callback: Traceback (most recent call last):
File "/usr/local/sickrage/sickbeard/webserve.py", line 230, in async_call
result = function(**kwargs)
File "/usr/local/sickrage/sickbeard/webserve.py", line 3390, in index
return t.respond()
File "/usr/local/sickrage/cache/cheetah/_usr_local_sickrage_gui_slick_interfaces_default_config_general_tmpl.py", line 952, in respond
for cur_branch in VFN(VFFSL(SL,"sickbeard.versionCheckScheduler.action",True),"list_remote_branches",False)(): # generated from line 521, col 10
TypeError: 'NoneType' object is not iterable
Traceback (most recent call last):
File "/usr/local/sickrage/tornado/web.py", line 1292, in _stack_context_handle_exception raise_exc_info((type, value, traceback))
File "/usr/local/sickrage/tornado/stack_context.py", line 314, in wrapped ret = fn(*args, **kwargs)
File "/usr/local/sickrage/tornado/concurrent.py", line 226, in lambda future: callback(future.result()))
File "/usr/local/sickrage/lib/concurrent/futures/_base.py", line 400, in result return self.__get_result()
File "/usr/local/sickrage/lib/concurrent/futures/_base.py", line 359, in __get_result reraise(self._exception, self._traceback)
File "/usr/local/sickrage/lib/concurrent/futures/_compat.py", line 107, in reraise exec('raise exc_type, exc_value, traceback', {}, locals_)
File "/usr/local/sickrage/lib/concurrent/futures/thread.py", line 61, in run result = self.fn(*self.args, **self.kwargs)
File "/usr/local/sickrage/sickbeard/webserve.py", line 230, in async_call result = function(**kwargs)
File "/usr/local/sickrage/sickbeard/webserve.py", line 3390, in index return t.respond()
File "/usr/local/sickrage/cache/cheetah/_usr_local_sickrage_gui_slick_interfaces_default_config_general_tmpl.py", line 952, in respond for cur_branch in VFN(VFFSL(SL,"sickbeard.versionCheckScheduler.action",True),"list_remote_branches",False)(): # generated from line 521, col 10
TypeError: 'NoneType' object is not iterable
Request Info
body:
files: {}
protocol: http
connection:
body_arguments: {}
uri: /config/general/
query_arguments: {}
_start_time: 1421034491.1
headers: {'Accept-Language': 'en-US,en;q=0.8,de;q=0.6', 'Accept-Encoding': 'gzip, deflate, sdch', 'Connection': 'keep-alive', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36', 'Dnt': '1', 'Host': '192.168.1.100:8081', 'Referer': 'http://192.168.1.100:8081/config/', 'Cookie': 'wanted_view2=details; advanced_toggle_checked=0; plushContainerWidth=100%25; plushMultiOps=1; plushNoTopMenu=0'}
host: 192.168.1.100:8081
version: HTTP/1.1
arguments: {}
_finish_time: None
query:
path: /config/general/
method: GET
remote_ip: 192.168.1.114
|
1.0
|
Crash While Viewing General Settings - Branch: Develop
OS: FreeBSD 9.3-RELEASE-p5
commit: 8b84e4f4bc6cdebdc605438e9e75d15f1133d65e
Python Version: 2.7.9 (default, Jan 9 2015, 14:27:53) [GCC 4.2.1 20070831 patched [FreeBSD]]
What I do: I try to navigate to /config/general/ via anyway possible (directly by typing or via the buttons)
What I expect: it brings me to the general settings page
What it does: crashes
Repeatable: yes
Sickbeard log:
2015-01-11 22:48:11 Thread-112 :: Failed doing webui callback: Traceback (most recent call last):
File "/usr/local/sickrage/sickbeard/webserve.py", line 230, in async_call
result = function(**kwargs)
File "/usr/local/sickrage/sickbeard/webserve.py", line 3390, in index
return t.respond()
File "/usr/local/sickrage/cache/cheetah/_usr_local_sickrage_gui_slick_interfaces_default_config_general_tmpl.py", line 952, in respond
for cur_branch in VFN(VFFSL(SL,"sickbeard.versionCheckScheduler.action",True),"list_remote_branches",False)(): # generated from line 521, col 10
TypeError: 'NoneType' object is not iterable
Traceback (most recent call last):
File "/usr/local/sickrage/tornado/web.py", line 1292, in _stack_context_handle_exception raise_exc_info((type, value, traceback))
File "/usr/local/sickrage/tornado/stack_context.py", line 314, in wrapped ret = fn(*args, **kwargs)
File "/usr/local/sickrage/tornado/concurrent.py", line 226, in lambda future: callback(future.result()))
File "/usr/local/sickrage/lib/concurrent/futures/_base.py", line 400, in result return self.__get_result()
File "/usr/local/sickrage/lib/concurrent/futures/_base.py", line 359, in __get_result reraise(self._exception, self._traceback)
File "/usr/local/sickrage/lib/concurrent/futures/_compat.py", line 107, in reraise exec('raise exc_type, exc_value, traceback', {}, locals_)
File "/usr/local/sickrage/lib/concurrent/futures/thread.py", line 61, in run result = self.fn(*self.args, **self.kwargs)
File "/usr/local/sickrage/sickbeard/webserve.py", line 230, in async_call result = function(**kwargs)
File "/usr/local/sickrage/sickbeard/webserve.py", line 3390, in index return t.respond()
File "/usr/local/sickrage/cache/cheetah/_usr_local_sickrage_gui_slick_interfaces_default_config_general_tmpl.py", line 952, in respond for cur_branch in VFN(VFFSL(SL,"sickbeard.versionCheckScheduler.action",True),"list_remote_branches",False)(): # generated from line 521, col 10
TypeError: 'NoneType' object is not iterable
Request Info
body:
files: {}
protocol: http
connection:
body_arguments: {}
uri: /config/general/
query_arguments: {}
_start_time: 1421034491.1
headers: {'Accept-Language': 'en-US,en;q=0.8,de;q=0.6', 'Accept-Encoding': 'gzip, deflate, sdch', 'Connection': 'keep-alive', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36', 'Dnt': '1', 'Host': '192.168.1.100:8081', 'Referer': 'http://192.168.1.100:8081/config/', 'Cookie': 'wanted_view2=details; advanced_toggle_checked=0; plushContainerWidth=100%25; plushMultiOps=1; plushNoTopMenu=0'}
host: 192.168.1.100:8081
version: HTTP/1.1
arguments: {}
_finish_time: None
query:
path: /config/general/
method: GET
remote_ip: 192.168.1.114
|
priority
|
crash while viewing general settings branch develop os freebsd release commit python version default jan what i do i try to navigate to config general via anyway possible directly by typing or via the buttons what i expect it brings me to the general settings page what it does crashes repeatable yes sickbeard log thread failed doing webui callback traceback most recent call last file usr local sickrage sickbeard webserve py line in async call result function kwargs file usr local sickrage sickbeard webserve py line in index return t respond file usr local sickrage cache cheetah usr local sickrage gui slick interfaces default config general tmpl py line in respond for cur branch in vfn vffsl sl sickbeard versioncheckscheduler action true list remote branches false generated from line col typeerror nonetype object is not iterable traceback most recent call last file usr local sickrage tornado web py line in stack context handle exception raise exc info type value traceback file usr local sickrage tornado stack context py line in wrapped ret fn args kwargs file usr local sickrage tornado concurrent py line in lambda future callback future result file usr local sickrage lib concurrent futures base py line in result return self get result file usr local sickrage lib concurrent futures base py line in get result reraise self exception self traceback file usr local sickrage lib concurrent futures compat py line in reraise exec raise exc type exc value traceback locals file usr local sickrage lib concurrent futures thread py line in run result self fn self args self kwargs file usr local sickrage sickbeard webserve py line in async call result function kwargs file usr local sickrage sickbeard webserve py line in index return t respond file usr local sickrage cache cheetah usr local sickrage gui slick interfaces default config general tmpl py line in respond for cur branch in vfn vffsl sl sickbeard versioncheckscheduler action true list remote branches false generated from line col typeerror nonetype object is not iterable request info body files protocol http connection body arguments uri config general query arguments start time headers accept language en us en q de q accept encoding gzip deflate sdch connection keep alive accept text html application xhtml xml application xml q image webp q user agent mozilla windows nt applewebkit khtml like gecko chrome safari dnt host referer cookie wanted details advanced toggle checked plushcontainerwidth plushmultiops plushnotopmenu host version http arguments finish time none query path config general method get remote ip
| 1
|
815,498
| 30,558,061,525
|
IssuesEvent
|
2023-07-20 13:00:10
|
quotientbot/Quotient-Bot
|
https://api.github.com/repos/quotientbot/Quotient-Bot
|
closed
|
AttributeError in deleting scrims without reg channels
|
bug good first issue priority:high
|
https://github.com/quotientbot/Quotient-Bot/blob/5b5836198d7a0f804f4652232407ed8acb414aa7/src/models/esports/scrims.py#L482C3-L482C3
If the scrim's registration channel has been deleted, scrim.registration_channel will return None, hence throwing an `AttributeError`, in this line we want to directly reference `scrim.registration_channel_id`.
|
1.0
|
AttributeError in deleting scrims without reg channels - https://github.com/quotientbot/Quotient-Bot/blob/5b5836198d7a0f804f4652232407ed8acb414aa7/src/models/esports/scrims.py#L482C3-L482C3
If the scrim's registration channel has been deleted, scrim.registration_channel will return None, hence throwing an `AttributeError`, in this line we want to directly reference `scrim.registration_channel_id`.
|
priority
|
attributeerror in deleting scrims without reg channels if the scrim s registration channel has been deleted scrim registration channel will return none hence throwing an attributeerror in this line we want to directly reference scrim registration channel id
| 1
|
328,498
| 9,995,613,736
|
IssuesEvent
|
2019-07-11 20:44:11
|
DMTF/Redfish-Usecase-Checkers
|
https://api.github.com/repos/DMTF/Redfish-Usecase-Checkers
|
closed
|
Finalize Reset/Power Checker
|
high priority
|
Would like to finalize the checker for using ComputerSystem Reset. It should look something like this:
- Find a ComputerSystem to test
- Invoke a Reset action based on the allowable values for ResetType
Some considerations:
- Should multiple resets be performed? Loops through all possible ResetType options?
- Reset might result in a Task and needs to be managed.
- Even though the action may be successful, is that enough to really go by to show the requested reset was performed?
- Do we need something host side to verify the system went down, came back through BIOS, and booted to an OS?
|
1.0
|
Finalize Reset/Power Checker - Would like to finalize the checker for using ComputerSystem Reset. It should look something like this:
- Find a ComputerSystem to test
- Invoke a Reset action based on the allowable values for ResetType
Some considerations:
- Should multiple resets be performed? Loops through all possible ResetType options?
- Reset might result in a Task and needs to be managed.
- Even though the action may be successful, is that enough to really go by to show the requested reset was performed?
- Do we need something host side to verify the system went down, came back through BIOS, and booted to an OS?
|
priority
|
finalize reset power checker would like to finalize the checker for using computersystem reset it should look something like this find a computersystem to test invoke a reset action based on the allowable values for resettype some considerations should multiple resets be performed loops through all possible resettype options reset might result in a task and needs to be managed even though the action may be successful is that enough to really go by to show the requested reset was performed do we need something host side to verify the system went down came back through bios and booted to an os
| 1
|
220,267
| 7,354,781,480
|
IssuesEvent
|
2018-03-09 08:36:03
|
HackGT/ballot
|
https://api.github.com/repos/HackGT/ballot
|
closed
|
Create endpoints to retrieve score summaries of the expo
|
high priority
|
For MVP at least let's at the very least try to accomplish the same thing that Ramen 2.0 did.
For every criteria, take an average of every Judge's score and display the top n projects for each criteria within each category. Also show the overall top n for each category.
|
1.0
|
Create endpoints to retrieve score summaries of the expo - For MVP at least let's at the very least try to accomplish the same thing that Ramen 2.0 did.
For every criteria, take an average of every Judge's score and display the top n projects for each criteria within each category. Also show the overall top n for each category.
|
priority
|
create endpoints to retrieve score summaries of the expo for mvp at least let s at the very least try to accomplish the same thing that ramen did for every criteria take an average of every judge s score and display the top n projects for each criteria within each category also show the overall top n for each category
| 1
|
213,623
| 7,255,113,769
|
IssuesEvent
|
2018-02-16 13:48:23
|
canonical-websites/snapcraft.io
|
https://api.github.com/repos/canonical-websites/snapcraft.io
|
closed
|
The 'past year' option on the 'measure' should not be shown.
|
Priority: High Type: Enhancement
|
On the 'measure' page of the new publisher dashboard. there 'past year' drop-down option probably should not be shown, as we don't have close to one year's worth of data.
We have data from 2017-10-31.
I can think of a few strategies we could employ here:
1) We could change this to say "For all time", which would show data from 2017-10-31 until we have one year's worth of data, at which point we could change it back to say "Past year".
2) We could remove this option altogether, until we have 6 months of data, at which point we could add the '6 months' option. When we have one year of data we could add the 'one year' option.
|
1.0
|
The 'past year' option on the 'measure' should not be shown. - On the 'measure' page of the new publisher dashboard. there 'past year' drop-down option probably should not be shown, as we don't have close to one year's worth of data.
We have data from 2017-10-31.
I can think of a few strategies we could employ here:
1) We could change this to say "For all time", which would show data from 2017-10-31 until we have one year's worth of data, at which point we could change it back to say "Past year".
2) We could remove this option altogether, until we have 6 months of data, at which point we could add the '6 months' option. When we have one year of data we could add the 'one year' option.
|
priority
|
the past year option on the measure should not be shown on the measure page of the new publisher dashboard there past year drop down option probably should not be shown as we don t have close to one year s worth of data we have data from i can think of a few strategies we could employ here we could change this to say for all time which would show data from until we have one year s worth of data at which point we could change it back to say past year we could remove this option altogether until we have months of data at which point we could add the months option when we have one year of data we could add the one year option
| 1
|
500,048
| 14,485,242,465
|
IssuesEvent
|
2020-12-10 17:20:33
|
eclipse/lyo
|
https://api.github.com/repos/eclipse/lyo
|
closed
|
Deploy to Maven Central
|
Component: N/A (project-wide) Priority: High
|
### TODOs
- [x] Update metadata according to https://central.sonatype.org/pages/requirements.html
- [x] Set up GPG signing usign https://wiki.eclipse.org/Jenkins#How_can_artifacts_be_deployed_to_OSSRH_.2F_Maven_Central.3F
- [x] Make sure profile still allows GH Actions to work
- [x] Make sure snapshots are signed
- [x] Make sure snapshots are pushed to OSSRH
- [x] Write back after a success to https://bugs.eclipse.org/bugs/show_bug.cgi?id=569263 so that OSSRH flips the switch
### Links
https://bugs.eclipse.org/bugs/show_bug.cgi?id=569263 for tracking
https://wiki.eclipse.org/IT_Infrastructure_Doc#Publish_to_Maven_Central checklist
|
1.0
|
Deploy to Maven Central - ### TODOs
- [x] Update metadata according to https://central.sonatype.org/pages/requirements.html
- [x] Set up GPG signing usign https://wiki.eclipse.org/Jenkins#How_can_artifacts_be_deployed_to_OSSRH_.2F_Maven_Central.3F
- [x] Make sure profile still allows GH Actions to work
- [x] Make sure snapshots are signed
- [x] Make sure snapshots are pushed to OSSRH
- [x] Write back after a success to https://bugs.eclipse.org/bugs/show_bug.cgi?id=569263 so that OSSRH flips the switch
### Links
https://bugs.eclipse.org/bugs/show_bug.cgi?id=569263 for tracking
https://wiki.eclipse.org/IT_Infrastructure_Doc#Publish_to_Maven_Central checklist
|
priority
|
deploy to maven central todos update metadata according to set up gpg signing usign make sure profile still allows gh actions to work make sure snapshots are signed make sure snapshots are pushed to ossrh write back after a success to so that ossrh flips the switch links for tracking checklist
| 1
|
672,597
| 22,832,742,375
|
IssuesEvent
|
2022-07-12 14:12:35
|
Tech2You/Techtu-Website-Maintenance-Mode
|
https://api.github.com/repos/Tech2You/Techtu-Website-Maintenance-Mode
|
closed
|
Add Google ReCaptcha to the newsletter form
|
Priority: High ❗ Status: Confirmed ✔️ Type: Bug 🐞
|
We are going to add Google's ReCaptcha to our maintenance website, so we can combat bots from spamming our database with fake emails signing up for our newsletter.
|
1.0
|
Add Google ReCaptcha to the newsletter form - We are going to add Google's ReCaptcha to our maintenance website, so we can combat bots from spamming our database with fake emails signing up for our newsletter.
|
priority
|
add google recaptcha to the newsletter form we are going to add google s recaptcha to our maintenance website so we can combat bots from spamming our database with fake emails signing up for our newsletter
| 1
|
9,105
| 2,607,926,247
|
IssuesEvent
|
2015-02-26 00:24:45
|
chrsmithdemos/minify
|
https://api.github.com/repos/chrsmithdemos/minify
|
closed
|
Improve JSMin performance
|
auto-migrated Milestone-Release-1.1.0 Priority-High Type-Enhancement
|
```
The JSMin library is hideously slow. It probably needs quite a bit of
optimization before Minify will be usable for JavaScript minification on a
high-traffic website.
```
-----
Original issue reported on code.google.com by `rgr...@gmail.com` on 3 May 2007 at 5:54
|
1.0
|
Improve JSMin performance - ```
The JSMin library is hideously slow. It probably needs quite a bit of
optimization before Minify will be usable for JavaScript minification on a
high-traffic website.
```
-----
Original issue reported on code.google.com by `rgr...@gmail.com` on 3 May 2007 at 5:54
|
priority
|
improve jsmin performance the jsmin library is hideously slow it probably needs quite a bit of optimization before minify will be usable for javascript minification on a high traffic website original issue reported on code google com by rgr gmail com on may at
| 1
|
225,456
| 7,481,817,570
|
IssuesEvent
|
2018-04-04 22:01:40
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
USER ISSUE: Farming Weirdness
|
High Priority
|
**Version:** 0.7.1.2 beta
**Steps to Reproduce:**
Check the soil of any area that is next to eachother.
**Expected behavior:**
Similar areas right next to each other should have similar moisture and temperature. Moisture should be higher as you check nearer to water.
**Actual behavior:**
The values for temperature and moisture are very random with no sense. Farming is thus harder than it should be.
|
1.0
|
USER ISSUE: Farming Weirdness - **Version:** 0.7.1.2 beta
**Steps to Reproduce:**
Check the soil of any area that is next to eachother.
**Expected behavior:**
Similar areas right next to each other should have similar moisture and temperature. Moisture should be higher as you check nearer to water.
**Actual behavior:**
The values for temperature and moisture are very random with no sense. Farming is thus harder than it should be.
|
priority
|
user issue farming weirdness version beta steps to reproduce check the soil of any area that is next to eachother expected behavior similar areas right next to each other should have similar moisture and temperature moisture should be higher as you check nearer to water actual behavior the values for temperature and moisture are very random with no sense farming is thus harder than it should be
| 1
|
709,849
| 24,394,584,024
|
IssuesEvent
|
2022-10-04 18:03:43
|
Amulet-Team/Amulet-Map-Editor
|
https://api.github.com/repos/Amulet-Team/Amulet-Map-Editor
|
reopened
|
[Bug Report] Some of the Sealanterns and Concrete Powder Are NOT Converted
|
type: bug translator priority: high
|
## Bug Report
Some of the Sealanterns and Concrete Powder Are NOT Being Converted.
### Current Behaviour:
When converting worlds from bedrock edition to java edition, some of the sealanterns and concrete powder are not being converted.
### Expected behavior:
All the sealanterns and concrete powder should be converted.
### Steps To Reproduce:
1. Choose a bedrock world which contains many sealanterns / concrete powder
2. Convert it into a java world
3. Open the java world and some of the sealanterns / concrete powder disappear.
4. Check the console log you can see warning info for sealanterns / concrete powder.
### Environment:
- OS: Windows 11
- Minecraft Platform: Bedrock to Java
- Minecraft Version: BE1.19.22 to JE1.19
- Amulet Version: 0.10.1, 0.10.0, 0.9.*
### Screenshots


### Worlds
[Java World](https://github.com/Amulet-Team/Amulet-Map-Editor/files/9549976/City2.zip)
[Bedrock World](https://github.com/Amulet-Team/Amulet-Map-Editor/files/9609866/City2BE.zip)
|
1.0
|
[Bug Report] Some of the Sealanterns and Concrete Powder Are NOT Converted - ## Bug Report
Some of the Sealanterns and Concrete Powder Are NOT Being Converted.
### Current Behaviour:
When converting worlds from bedrock edition to java edition, some of the sealanterns and concrete powder are not being converted.
### Expected behavior:
All the sealanterns and concrete powder should be converted.
### Steps To Reproduce:
1. Choose a bedrock world which contains many sealanterns / concrete powder
2. Convert it into a java world
3. Open the java world and some of the sealanterns / concrete powder disappear.
4. Check the console log you can see warning info for sealanterns / concrete powder.
### Environment:
- OS: Windows 11
- Minecraft Platform: Bedrock to Java
- Minecraft Version: BE1.19.22 to JE1.19
- Amulet Version: 0.10.1, 0.10.0, 0.9.*
### Screenshots


### Worlds
[Java World](https://github.com/Amulet-Team/Amulet-Map-Editor/files/9549976/City2.zip)
[Bedrock World](https://github.com/Amulet-Team/Amulet-Map-Editor/files/9609866/City2BE.zip)
|
priority
|
some of the sealanterns and concrete powder are not converted bug report some of the sealanterns and concrete powder are not being converted current behaviour when converting worlds from bedrock edition to java edition some of the sealanterns and concrete powder are not being converted expected behavior all the sealanterns and concrete powder should be converted steps to reproduce choose a bedrock world which contains many sealanterns concrete powder convert it into a java world open the java world and some of the sealanterns concrete powder disappear check the console log you can see warning info for sealanterns concrete powder environment os windows minecraft platform bedrock to java minecraft version to amulet version screenshots worlds
| 1
|
484,793
| 13,957,601,185
|
IssuesEvent
|
2020-10-24 07:41:50
|
code4moldova/voluntar-web
|
https://api.github.com/repos/code4moldova/voluntar-web
|
closed
|
[Admin] Create expandable map component
|
Complexity: High 🧐 Complexity: Medium 🤔 Priority: High Scope: Components Type: Enhancement 🚀 hacktoberfest
|
We need a map component where we'll show current requests
Requests don't need to be showed right now, implement just map
See component bellow page filters
https://www.figma.com/file/oCN3NSECKQnS4PWhZnWGCT/Dashboard_Voluntar.md_V2.0?node-id=565%3A188
It should expand to component with a map
https://www.figma.com/file/oCN3NSECKQnS4PWhZnWGCT/Dashboard_Voluntar.md_V2.0?node-id=613%3A1023
There already exists a similar map component
```
src/app/shared/esri-map
```
Useful links
https://github.com/Esri/esri-loader
https://github.com/Esri/arcgis-js-api
|
1.0
|
[Admin] Create expandable map component - We need a map component where we'll show current requests
Requests don't need to be showed right now, implement just map
See component bellow page filters
https://www.figma.com/file/oCN3NSECKQnS4PWhZnWGCT/Dashboard_Voluntar.md_V2.0?node-id=565%3A188
It should expand to component with a map
https://www.figma.com/file/oCN3NSECKQnS4PWhZnWGCT/Dashboard_Voluntar.md_V2.0?node-id=613%3A1023
There already exists a similar map component
```
src/app/shared/esri-map
```
Useful links
https://github.com/Esri/esri-loader
https://github.com/Esri/arcgis-js-api
|
priority
|
create expandable map component we need a map component where we ll show current requests requests don t need to be showed right now implement just map see component bellow page filters it should expand to component with a map there already exists a similar map component src app shared esri map useful links
| 1
|
437,007
| 12,558,168,405
|
IssuesEvent
|
2020-06-07 15:07:40
|
bastienrobert/la-ferme
|
https://api.github.com/repos/bastienrobert/la-ferme
|
closed
|
En tant qu'utilisateur, je veux avoir accès à un contenu visuellement qualitatif et fonctionnel
|
enhancement package: components priority:high
|
**Description**
Ajout de components dans l'UI Kit :
- [x] Button
- [x] Typo
- [x] Icons
Ajout de components dans l'app :
- [ ] Walkthrough
- [ ] Round
- [ ] Call ( or Pickup? )
- [ ] Slider
- [ ] Timeline + Avatar
**Figma**
00_Elements_UI
**Screenshot**

|
1.0
|
En tant qu'utilisateur, je veux avoir accès à un contenu visuellement qualitatif et fonctionnel - **Description**
Ajout de components dans l'UI Kit :
- [x] Button
- [x] Typo
- [x] Icons
Ajout de components dans l'app :
- [ ] Walkthrough
- [ ] Round
- [ ] Call ( or Pickup? )
- [ ] Slider
- [ ] Timeline + Avatar
**Figma**
00_Elements_UI
**Screenshot**

|
priority
|
en tant qu utilisateur je veux avoir accès à un contenu visuellement qualitatif et fonctionnel description ajout de components dans l ui kit button typo icons ajout de components dans l app walkthrough round call or pickup slider timeline avatar figma elements ui screenshot
| 1
|
367,457
| 10,853,969,704
|
IssuesEvent
|
2019-11-13 15:38:02
|
Nyerca/PPS-18-cardbattle
|
https://api.github.com/repos/Nyerca/PPS-18-cardbattle
|
closed
|
Project Setup
|
Priority: High
|
- [x] Struttura progetto
- [x] Configurazione build sbt
- [x] Preparazione configurazione travis CI
- [x] Gitignore
|
1.0
|
Project Setup - - [x] Struttura progetto
- [x] Configurazione build sbt
- [x] Preparazione configurazione travis CI
- [x] Gitignore
|
priority
|
project setup struttura progetto configurazione build sbt preparazione configurazione travis ci gitignore
| 1
|
138,013
| 5,326,288,500
|
IssuesEvent
|
2017-02-15 03:27:51
|
google/error-prone
|
https://api.github.com/repos/google/error-prone
|
closed
|
Calling Map/Collection methods with arguments that are not compatible with type parameters
|
migrated Priority-High Status-Accepted Type-NewCheck
|
_[Original issue](https://code.google.com/p/error-prone/issues/detail?id=106) created by **fixpoint@google.com** on 2013-03-05 at 03:23 PM_
---
There are some methods on Map<K,V> or Collection<V> that accept Object because of compatibility reasons, while in fact they should accept <V>. We can check that arguments passed to those methods are compatible with V.
Example:
Map<MyProto, Integer> map = ...;
MyProto.Builder proto = MyProto.newBuilder()...;
if (map.get(proto)) { ... };
Following checks can be introduced for all "Map<K,V> map" and "T arg" variables:
1. map.containsKey(arg) --> check that either "T extends K" or "T super K"
2. map.containsValue(arg) --> check that either "T extends V" or "T super V"
3. map.get(arg) --> check that either "T extends K" or "T super K"
4. map.remove(arg) --> check that either "T extends K" or "T super K"
Following checks can be introduced for all "Collection<V> coll" and "T arg" variables:
1. coll.contains(arg) --> check that either "T extends V" or "T super V"
2. coll.remove(arg) --> check that either "T extends V" or "T super V"
Same for List.indexOf, List.lastIndexOf.
|
1.0
|
Calling Map/Collection methods with arguments that are not compatible with type parameters - _[Original issue](https://code.google.com/p/error-prone/issues/detail?id=106) created by **fixpoint@google.com** on 2013-03-05 at 03:23 PM_
---
There are some methods on Map<K,V> or Collection<V> that accept Object because of compatibility reasons, while in fact they should accept <V>. We can check that arguments passed to those methods are compatible with V.
Example:
Map<MyProto, Integer> map = ...;
MyProto.Builder proto = MyProto.newBuilder()...;
if (map.get(proto)) { ... };
Following checks can be introduced for all "Map<K,V> map" and "T arg" variables:
1. map.containsKey(arg) --> check that either "T extends K" or "T super K"
2. map.containsValue(arg) --> check that either "T extends V" or "T super V"
3. map.get(arg) --> check that either "T extends K" or "T super K"
4. map.remove(arg) --> check that either "T extends K" or "T super K"
Following checks can be introduced for all "Collection<V> coll" and "T arg" variables:
1. coll.contains(arg) --> check that either "T extends V" or "T super V"
2. coll.remove(arg) --> check that either "T extends V" or "T super V"
Same for List.indexOf, List.lastIndexOf.
|
priority
|
calling map collection methods with arguments that are not compatible with type parameters created by fixpoint google com on at pm there are some methods on map lt k v gt or collection lt v gt that accept object because of compatibility reasons while in fact they should accept lt v gt we can check that arguments passed to those methods are compatible with v example map lt myproto integer gt map myproto builder proto myproto newbuilder if map get proto following checks can be introduced for all map lt k v gt map and t arg variables map containskey arg gt check that either t extends k or t super k map containsvalue arg gt check that either t extends v or t super v map get arg gt check that either t extends k or t super k map remove arg gt check that either t extends k or t super k following checks can be introduced for all collection lt v gt coll and t arg variables coll contains arg gt check that either t extends v or t super v coll remove arg gt check that either t extends v or t super v same for list indexof list lastindexof
| 1
|
77,945
| 3,507,901,252
|
IssuesEvent
|
2016-01-08 15:33:12
|
INN/Largo
|
https://api.github.com/repos/INN/Largo
|
opened
|
iOS Sticky Nav doesn't reappear on scroll up
|
priority: high type: bug
|
Discovered issue with sticky nav appear/disappear on my phone (iPhone 6S, iOS 9.2).
* Nav appears on page load and appropriately disappears on scroll
* On scroll up, nav reappears as expected
* However, after scrolling down, if you return to the top of the site the nav disappears and flickers.
Example on RNS

|
1.0
|
iOS Sticky Nav doesn't reappear on scroll up - Discovered issue with sticky nav appear/disappear on my phone (iPhone 6S, iOS 9.2).
* Nav appears on page load and appropriately disappears on scroll
* On scroll up, nav reappears as expected
* However, after scrolling down, if you return to the top of the site the nav disappears and flickers.
Example on RNS

|
priority
|
ios sticky nav doesn t reappear on scroll up discovered issue with sticky nav appear disappear on my phone iphone ios nav appears on page load and appropriately disappears on scroll on scroll up nav reappears as expected however after scrolling down if you return to the top of the site the nav disappears and flickers example on rns
| 1
|
638,991
| 20,744,116,152
|
IssuesEvent
|
2022-03-14 20:46:26
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[studio] A copy of a page is not being indexed because the generated dates are invalid
|
bug priority: high CI
|
### Bug Report
#### Crafter CMS Version
3.1.18
#### Date of Build
N/A
#### Describe the bug
Generated dates, like createdDate and lastModifiedDate, are in the wrong format for a copy of a page and are causing indexing to fail.
#### To Reproduce
Steps to reproduce the behavior:
1. Create a site based on Editorial
2. Copy and paste and article from one folder to another (or in the same folder, the result is the same)
You should see indexing errors in the Deployer logs like the ones attached. If you inspect the XML of the page the `createdDate` and `lastModifiedDate` seem to be using the current timezone instead of UTC.
#### Logs
https://gist.github.com/avasquez614/36c7b5f57f954fde9265633d2967ea15
#### Screenshots
N/A
|
1.0
|
[studio] A copy of a page is not being indexed because the generated dates are invalid - ### Bug Report
#### Crafter CMS Version
3.1.18
#### Date of Build
N/A
#### Describe the bug
Generated dates, like createdDate and lastModifiedDate, are in the wrong format for a copy of a page and are causing indexing to fail.
#### To Reproduce
Steps to reproduce the behavior:
1. Create a site based on Editorial
2. Copy and paste and article from one folder to another (or in the same folder, the result is the same)
You should see indexing errors in the Deployer logs like the ones attached. If you inspect the XML of the page the `createdDate` and `lastModifiedDate` seem to be using the current timezone instead of UTC.
#### Logs
https://gist.github.com/avasquez614/36c7b5f57f954fde9265633d2967ea15
#### Screenshots
N/A
|
priority
|
a copy of a page is not being indexed because the generated dates are invalid bug report crafter cms version date of build n a describe the bug generated dates like createddate and lastmodifieddate are in the wrong format for a copy of a page and are causing indexing to fail to reproduce steps to reproduce the behavior create a site based on editorial copy and paste and article from one folder to another or in the same folder the result is the same you should see indexing errors in the deployer logs like the ones attached if you inspect the xml of the page the createddate and lastmodifieddate seem to be using the current timezone instead of utc logs screenshots n a
| 1
|
136,306
| 5,279,861,448
|
IssuesEvent
|
2017-02-07 12:37:24
|
BinPar/eBooks
|
https://api.github.com/repos/BinPar/eBooks
|
closed
|
1352016 Capítulo muestra ebooks en web
|
México Priority: High S3
|
Hola Silvia y Mara Alpuche de México nos envían esto:
"Te contacto por que he probado en varios equipos la opción de “muestra gratuita” en los ebooks y el botón me lleva al acceso de Bibliotecas. (instituciones).
Consulte el tema con José Galán y me recomendó probara en mi móvil con una red diferente y sólo así me permitió visualizar el registro al capítulo muestra.
¿será tema del tipo de red?"
Aquí tenéis una imagen y un vídeo.
Gracias
|
1.0
|
1352016 Capítulo muestra ebooks en web - Hola Silvia y Mara Alpuche de México nos envían esto:
"Te contacto por que he probado en varios equipos la opción de “muestra gratuita” en los ebooks y el botón me lleva al acceso de Bibliotecas. (instituciones).
Consulte el tema con José Galán y me recomendó probara en mi móvil con una red diferente y sólo así me permitió visualizar el registro al capítulo muestra.
¿será tema del tipo de red?"
Aquí tenéis una imagen y un vídeo.
Gracias
|
priority
|
capítulo muestra ebooks en web hola silvia y mara alpuche de méxico nos envían esto te contacto por que he probado en varios equipos la opción de “muestra gratuita” en los ebooks y el botón me lleva al acceso de bibliotecas instituciones consulte el tema con josé galán y me recomendó probara en mi móvil con una red diferente y sólo así me permitió visualizar el registro al capítulo muestra ¿será tema del tipo de red aquí tenéis una imagen y un vídeo gracias
| 1
|
526,366
| 15,287,172,790
|
IssuesEvent
|
2021-02-23 15:29:22
|
eventespresso/barista
|
https://api.github.com/repos/eventespresso/barista
|
closed
|
ES elements not shown
|
C: UI/UX 🚽 D: EDTR ✏️ D: Event Smart P2: HIGH priority 😮 T: bug 🐞
|
In #719, we added a replacement option for Add New date button, but the replacement is not shown when REM is inactive.
Reason: By the time Event Editor registers the add new date button, ES domain is not even initialized. Event editor registers the dynamic elements in sync, which happens before EDTR is rehydrated. ES depends upon rehydration to check the user capabilities (via `useConfig`). This creates a weird cycle.
|
1.0
|
ES elements not shown - In #719, we added a replacement option for Add New date button, but the replacement is not shown when REM is inactive.
Reason: By the time Event Editor registers the add new date button, ES domain is not even initialized. Event editor registers the dynamic elements in sync, which happens before EDTR is rehydrated. ES depends upon rehydration to check the user capabilities (via `useConfig`). This creates a weird cycle.
|
priority
|
es elements not shown in we added a replacement option for add new date button but the replacement is not shown when rem is inactive reason by the time event editor registers the add new date button es domain is not even initialized event editor registers the dynamic elements in sync which happens before edtr is rehydrated es depends upon rehydration to check the user capabilities via useconfig this creates a weird cycle
| 1
|
382,694
| 11,310,703,384
|
IssuesEvent
|
2020-01-19 21:18:50
|
AffiliateWP/affiliatewp-order-details-for-affiliates
|
https://api.github.com/repos/AffiliateWP/affiliatewp-order-details-for-affiliates
|
opened
|
Referrals with no context breaking output of page
|
priority-high type-bug
|
A customer's order details tab stopped showing the rest of the page after a specific referral. Most likely a fatal error of some kind.
Conversation: https://secure.helpscout.net/conversation/1026374428/139459/
I discovered there was a referral not showing on the page (the next referral _after_ the one that _was_ showing) which did not have a `context` set. This was likely manually added by the customer.
Adding a context to this referral (and another one further down the list) fixed the page and it then loaded correctly.
I haven't replicated the issue locally but I'm almost certain it's because the add-on relies heavily on a context being set for each referral. Since the referral did not have one, it caused a fatal error and broke the page.
|
1.0
|
Referrals with no context breaking output of page - A customer's order details tab stopped showing the rest of the page after a specific referral. Most likely a fatal error of some kind.
Conversation: https://secure.helpscout.net/conversation/1026374428/139459/
I discovered there was a referral not showing on the page (the next referral _after_ the one that _was_ showing) which did not have a `context` set. This was likely manually added by the customer.
Adding a context to this referral (and another one further down the list) fixed the page and it then loaded correctly.
I haven't replicated the issue locally but I'm almost certain it's because the add-on relies heavily on a context being set for each referral. Since the referral did not have one, it caused a fatal error and broke the page.
|
priority
|
referrals with no context breaking output of page a customer s order details tab stopped showing the rest of the page after a specific referral most likely a fatal error of some kind conversation i discovered there was a referral not showing on the page the next referral after the one that was showing which did not have a context set this was likely manually added by the customer adding a context to this referral and another one further down the list fixed the page and it then loaded correctly i haven t replicated the issue locally but i m almost certain it s because the add on relies heavily on a context being set for each referral since the referral did not have one it caused a fatal error and broke the page
| 1
|
401,351
| 11,789,112,961
|
IssuesEvent
|
2020-03-17 16:37:12
|
python-discord/bot
|
https://api.github.com/repos/python-discord/bot
|
closed
|
Strip spoiler tags for watchlist triggers
|
area: filters priority: 1 - high status: WIP type: bug
|
With the [recent Spoiler Tag addition](https://support.discordapp.com/hc/en-us/articles/360022320632-Spoiler-Tags-) to Discord, it's now possible to take memes to a whole new level. While the message filters are still operable, it's difficult to read the trigger messages in the mod log, particularly on mobile.
Spoilers are wrapped with `||` (e.g. `||text||`). Let's add a helper method to the modlog cog to strip these from the message content before they are sent.
|
1.0
|
Strip spoiler tags for watchlist triggers - With the [recent Spoiler Tag addition](https://support.discordapp.com/hc/en-us/articles/360022320632-Spoiler-Tags-) to Discord, it's now possible to take memes to a whole new level. While the message filters are still operable, it's difficult to read the trigger messages in the mod log, particularly on mobile.
Spoilers are wrapped with `||` (e.g. `||text||`). Let's add a helper method to the modlog cog to strip these from the message content before they are sent.
|
priority
|
strip spoiler tags for watchlist triggers with the to discord it s now possible to take memes to a whole new level while the message filters are still operable it s difficult to read the trigger messages in the mod log particularly on mobile spoilers are wrapped with e g text let s add a helper method to the modlog cog to strip these from the message content before they are sent
| 1
|
824,987
| 31,238,328,429
|
IssuesEvent
|
2023-08-20 14:51:01
|
softwareantics/FinalEngine
|
https://api.github.com/repos/softwareantics/FinalEngine
|
closed
|
✨ Implement Component Property Management Logic in Entity Inspector
|
✨ Feature 🔴 High Priority area-ecs area-editor
|
### Checklist
- [X] I have not removed the ✨ emoji from the title.
- [X] I have searched to ensure that no existing issue covers this feature request.
- [X] For maintainers: I have updated the projects and milestones if needed.
### Description
I propose we design a way to show a components within an `Entity` in the `EntityInspectorView`.
### Justification
This is required to ensure that can later setup adding, editing and removing components from an entity.
### Implementation Approach
1. **Create `PropertyStringViewModel`:** Develop a view model that accepts a `ref` parameter of type `string` and introduces a `Name` property. This specialized view model empowers users to modify a `string` property associated with a component. The `Name` property corresponds to the name of the referenced `string` property passed into the view model.
2. **Create `EntityComponentViewModel`:** Construct a view model designed to iterate through the properties of a component. Inside this view model, generate a collection of sub-view models, with each sub-view model representing a specific property of the component. Implement specialized property view models like `PropertyStringViewModel`, `PropertyFloatViewModel`, `PropertyIntViewModel`, etc., to effectively handle various property types. Additionally, introduce a `Name` property in the `EntityComponentViewModel` to provide insight into the data type of the underlying component (e.g., `TagComponent`).
3. **Update `EntityInspectorViewModel`:** Elevate the capabilities of this view model by integrating a collection of `EntityComponentViewModel` instances. During the view model's creation, perform an iteration through all the components within the designated `Entity`. This iteration process generates an `EntityComponentViewModel` for each component present. This approach ensures that each property of a component is aptly represented through its corresponding sub-view model within the encompassing `EntityComponentViewModel`.
4. **Extend Property View Models (Optional):** In a similar fashion to the `PropertyStringViewModel`, consider expanding the repertoire of property-specific view models. For various property types, such as `float`, `int`, `Vector2`, and others, craft additional specialized view models to handle their distinct characteristics. This optional step allows for a comprehensive coverage of property types within the overall implementation. Also, check whether or not proper use of generics will minimize how many view models we'll need to create because the logic will likely all remain the same and just depend on the view.
### Requirements
_No response_
### Potential Challenges
1. **Error Handling:** Managing potential errors during property modification, such as invalid input or failed validation, requires careful consideration to provide meaningful feedback to the user without disrupting the overall application flow.
2. **Nested Properties:** If the components themselves contain nested properties or sub-components, designing a scalable approach to handle these nested properties and maintain a user-friendly interface can be challenging.
3. **Complexity of Property Types:** Handling various property types, such as strings, floats, ints, and more, within the EntityComponentViewModel and their respective specialized property view models may introduce complexity in terms of data conversion, validation, and user interaction.
### Additional Context
_No response_
|
1.0
|
✨ Implement Component Property Management Logic in Entity Inspector - ### Checklist
- [X] I have not removed the ✨ emoji from the title.
- [X] I have searched to ensure that no existing issue covers this feature request.
- [X] For maintainers: I have updated the projects and milestones if needed.
### Description
I propose we design a way to show a components within an `Entity` in the `EntityInspectorView`.
### Justification
This is required to ensure that can later setup adding, editing and removing components from an entity.
### Implementation Approach
1. **Create `PropertyStringViewModel`:** Develop a view model that accepts a `ref` parameter of type `string` and introduces a `Name` property. This specialized view model empowers users to modify a `string` property associated with a component. The `Name` property corresponds to the name of the referenced `string` property passed into the view model.
2. **Create `EntityComponentViewModel`:** Construct a view model designed to iterate through the properties of a component. Inside this view model, generate a collection of sub-view models, with each sub-view model representing a specific property of the component. Implement specialized property view models like `PropertyStringViewModel`, `PropertyFloatViewModel`, `PropertyIntViewModel`, etc., to effectively handle various property types. Additionally, introduce a `Name` property in the `EntityComponentViewModel` to provide insight into the data type of the underlying component (e.g., `TagComponent`).
3. **Update `EntityInspectorViewModel`:** Elevate the capabilities of this view model by integrating a collection of `EntityComponentViewModel` instances. During the view model's creation, perform an iteration through all the components within the designated `Entity`. This iteration process generates an `EntityComponentViewModel` for each component present. This approach ensures that each property of a component is aptly represented through its corresponding sub-view model within the encompassing `EntityComponentViewModel`.
4. **Extend Property View Models (Optional):** In a similar fashion to the `PropertyStringViewModel`, consider expanding the repertoire of property-specific view models. For various property types, such as `float`, `int`, `Vector2`, and others, craft additional specialized view models to handle their distinct characteristics. This optional step allows for a comprehensive coverage of property types within the overall implementation. Also, check whether or not proper use of generics will minimize how many view models we'll need to create because the logic will likely all remain the same and just depend on the view.
### Requirements
_No response_
### Potential Challenges
1. **Error Handling:** Managing potential errors during property modification, such as invalid input or failed validation, requires careful consideration to provide meaningful feedback to the user without disrupting the overall application flow.
2. **Nested Properties:** If the components themselves contain nested properties or sub-components, designing a scalable approach to handle these nested properties and maintain a user-friendly interface can be challenging.
3. **Complexity of Property Types:** Handling various property types, such as strings, floats, ints, and more, within the EntityComponentViewModel and their respective specialized property view models may introduce complexity in terms of data conversion, validation, and user interaction.
### Additional Context
_No response_
|
priority
|
✨ implement component property management logic in entity inspector checklist i have not removed the ✨ emoji from the title i have searched to ensure that no existing issue covers this feature request for maintainers i have updated the projects and milestones if needed description i propose we design a way to show a components within an entity in the entityinspectorview justification this is required to ensure that can later setup adding editing and removing components from an entity implementation approach create propertystringviewmodel develop a view model that accepts a ref parameter of type string and introduces a name property this specialized view model empowers users to modify a string property associated with a component the name property corresponds to the name of the referenced string property passed into the view model create entitycomponentviewmodel construct a view model designed to iterate through the properties of a component inside this view model generate a collection of sub view models with each sub view model representing a specific property of the component implement specialized property view models like propertystringviewmodel propertyfloatviewmodel propertyintviewmodel etc to effectively handle various property types additionally introduce a name property in the entitycomponentviewmodel to provide insight into the data type of the underlying component e g tagcomponent update entityinspectorviewmodel elevate the capabilities of this view model by integrating a collection of entitycomponentviewmodel instances during the view model s creation perform an iteration through all the components within the designated entity this iteration process generates an entitycomponentviewmodel for each component present this approach ensures that each property of a component is aptly represented through its corresponding sub view model within the encompassing entitycomponentviewmodel extend property view models optional in a similar fashion to the propertystringviewmodel consider expanding the repertoire of property specific view models for various property types such as float int and others craft additional specialized view models to handle their distinct characteristics this optional step allows for a comprehensive coverage of property types within the overall implementation also check whether or not proper use of generics will minimize how many view models we ll need to create because the logic will likely all remain the same and just depend on the view requirements no response potential challenges error handling managing potential errors during property modification such as invalid input or failed validation requires careful consideration to provide meaningful feedback to the user without disrupting the overall application flow nested properties if the components themselves contain nested properties or sub components designing a scalable approach to handle these nested properties and maintain a user friendly interface can be challenging complexity of property types handling various property types such as strings floats ints and more within the entitycomponentviewmodel and their respective specialized property view models may introduce complexity in terms of data conversion validation and user interaction additional context no response
| 1
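The record above ends with an open question about whether generics can cut down the number of property view models. A minimal Python sketch of that idea (hypothetical names; the issue targets an MVVM codebase in another language, so this only illustrates the shape of the solution):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class PropertyViewModel(Generic[T]):
    """One generic view model standing in for PropertyStringViewModel,
    PropertyFloatViewModel, PropertyIntViewModel, etc.  The shared logic
    (exposing a property name and letting the user edit the value) lives
    here exactly once, parameterized by the value type T."""

    def __init__(self, name: str, value: T) -> None:
        self.name = name    # name of the component property being edited
        self.value = value  # current value of that property

    def update(self, new_value: T) -> None:
        """Apply a user edit to the underlying property value."""
        self.value = new_value
```

If type-specific behavior (parsing, validation, specialized views) is ever needed, subclassing per value type still avoids duplicating the shared editing logic.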
|
745,148
| 25,972,235,422
|
IssuesEvent
|
2022-12-19 12:12:47
|
KinsonDigital/CASL
|
https://api.github.com/repos/KinsonDigital/CASL
|
opened
|
🚧Update build system to CICD
|
workflow high priority preview
|
### Complete The Item Below
- [X] I have updated the title without removing the 🚧 emoji.
### Description
Update the build system to use [CICD](https://github.com/KinsonDigital/CICD).
This will require removing all of the current workflows and using the workflows that come with CICD.
Update CICD to the latest version as of the implementation of this issue.
### Acceptance Criteria
- [ ] _**CICD**_ dotnet tool added to the solution
- [ ] Workflows replaced/updated.
### ToDo Items
- [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below.
- [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below.
- [X] Issue linked to the correct project _(if applicable)_.
- [X] Issue linked to the correct milestone _(if applicable)_.
- [ ] Draft pull request created and linked to this issue _(only required with code changes)_.
### Issue Dependencies
_No response_
### Related Work
_No response_
### Additional Information:
**_<details closed><summary>Change Type Labels</summary>_**
| Change Type | Label |
|---------------------|----------------------|
| Bug Fixes | `🐛bug` |
| Breaking Changes | `🧨breaking changes` |
| New Feature | `✨new feature` |
| Workflow Changes | `workflow` |
| Code Doc Changes | `🗒️documentation/code` |
| Product Doc Changes | `📝documentation/product` |
</details>
**_<details closed><summary>Priority Type Labels</summary>_**
| Priority Type | Label |
|---------------------|-------------------|
| Low Priority | `low priority` |
| Medium Priority | `medium priority` |
| High Priority | `high priority` |
</details>
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct.
|
1.0
|
🚧Update build system to CICD - ### Complete The Item Below
- [X] I have updated the title without removing the 🚧 emoji.
### Description
Update the build system to use [CICD](https://github.com/KinsonDigital/CICD).
This will require removing all of the current workflows and using the workflows that come with CICD.
Update CICD to the latest version as of the implementation of this issue.
### Acceptance Criteria
- [ ] _**CICD**_ dotnet tool added to the solution
- [ ] Workflows replaced/updated.
### ToDo Items
- [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below.
- [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below.
- [X] Issue linked to the correct project _(if applicable)_.
- [X] Issue linked to the correct milestone _(if applicable)_.
- [ ] Draft pull request created and linked to this issue _(only required with code changes)_.
### Issue Dependencies
_No response_
### Related Work
_No response_
### Additional Information:
**_<details closed><summary>Change Type Labels</summary>_**
| Change Type | Label |
|---------------------|----------------------|
| Bug Fixes | `🐛bug` |
| Breaking Changes | `🧨breaking changes` |
| New Feature | `✨new feature` |
| Workflow Changes | `workflow` |
| Code Doc Changes | `🗒️documentation/code` |
| Product Doc Changes | `📝documentation/product` |
</details>
**_<details closed><summary>Priority Type Labels</summary>_**
| Priority Type | Label |
|---------------------|-------------------|
| Low Priority | `low priority` |
| Medium Priority | `medium priority` |
| High Priority | `high priority` |
</details>
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct.
|
priority
|
🚧update build system to cicd complete the item below i have updated the title without removing the 🚧 emoji description update the build system to use this will require removing all of the current workflows and using the workflows that come with cicd update cicd to the latest version as of the implementation of this issue acceptance criteria cicd dotnet tool added to the solution workflows replaced updated todo items change type labels added to this issue refer to the change type labels section below priority label added to this issue refer to the priority type labels section below issue linked to the correct project if applicable issue linked to the correct milestone if applicable draft pull request created and linked to this issue only required with code changes issue dependencies no response related work no response additional information change type labels change type label bug fixes 🐛bug breaking changes 🧨breaking changes new feature ✨new feature workflow changes workflow code doc changes 🗒️documentation code product doc changes 📝documentation product priority type labels priority type label low priority low priority medium priority medium priority high priority high priority code of conduct i agree to follow this project s code of conduct
| 1
|
159,603
| 6,049,430,391
|
IssuesEvent
|
2017-06-12 18:44:35
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[craftercms] Delivery specific resources are being modified incorrectly by Gradle
|
bug Priority: High
|
Added resources/tomcat-config-delivery for specific Delivery config files. When running `./gradlew build deploy` Gradle is modifying the files incorrectly, specifically removing comments and in the case of XMLs removing some of the <> tags, which leaves the configuration incorrect.
|
1.0
|
[craftercms] Delivery specific resources are being modified incorrectly by Gradle - Added resources/tomcat-config-delivery for specific Delivery config files. When running `./gradlew build deploy` Gradle is modifying the files incorrectly, specifically removing comments and in the case of XMLs removing some of the <> tags, which leaves the configuration incorrect.
|
priority
|
delivery specific resources are being modified incorrectly by gradle added resources tomcat config delivery for specific delivery config files when running gradlew build deploy gradle is modifying the files incorrectly specifically removing comments and in the case of xmls removing some of the tags which leaves the configuration incorrect
| 1
|
622,294
| 19,620,239,308
|
IssuesEvent
|
2022-01-07 04:57:58
|
merico-dev/lake
|
https://api.github.com/repos/merico-dev/lake
|
closed
|
`feat` Config UI - Pipelines : All Pipeline Runs (Manage Pipelines)
|
proposal priority: high Frontend
|
## Config UI / Pipelines / All Pipeline Runs (Manage Pipelines)
> Manage Job Activity for all your pipeline runs.
|
1.0
|
`feat` Config UI - Pipelines : All Pipeline Runs (Manage Pipelines) - ## Config UI / Pipelines / All Pipeline Runs (Manage Pipelines)
> Manage Job Activity for all your pipeline runs.
|
priority
|
feat config ui pipelines all pipeline runs manage pipelines config ui pipelines all pipeline runs manage pipelines manage job activity for all your pipeline runs
| 1
|
732,324
| 25,255,005,005
|
IssuesEvent
|
2022-11-15 17:19:15
|
datatlas-erasme/front
|
https://api.github.com/repos/datatlas-erasme/front
|
opened
|
Add search bar and data filter
|
enhancement styling priority high industry responsive
|
- On click open about panel and open poi
- Keep in mind the possibility that later the burger icon and the "search and data filter" section will be two separate buttons


|
1.0
|
Add search bar and data filter - - On click open about panel and open poi
- Keep in mind the possibility that later the burger icon and the "search and data filter" section will be two separate buttons


|
priority
|
add search bar and data filter on click open about panel and open poi keep in mind the possibility that later the burger icon and the search and data filter section will be two separate buttons
| 1
|
247,704
| 7,922,577,368
|
IssuesEvent
|
2018-07-05 11:19:28
|
Icinga/icingaweb2
|
https://api.github.com/repos/Icinga/icingaweb2
|
closed
|
Request::getPost() parses JSON without respecting Content-Type
|
bug framework high-priority
|
With a444b8adf5491d45f7ee7aff9259c4618308f140 `Request::getPost()` parses the POST body as JSON if header `Accept:application/json` is set.
However, `Accept` does not mean what is being sent so any other content type in the POST body is rejected with a syntax error.
Putting such a fundamental presumption in a base class is bad and we should only parse JSON unconditionally in concrete implementations. If this should be kept in the base class, it should only run if the POST body is actually JSON.
|
1.0
|
Request::getPost() parses JSON without respecting Content-Type - With a444b8adf5491d45f7ee7aff9259c4618308f140 `Request::getPost()` parses the POST body as JSON if header `Accept:application/json` is set.
However, `Accept` does not mean what is being sent so any other content type in the POST body is rejected with a syntax error.
Putting such a fundamental presumption in a base class is bad and we should only parse JSON unconditionally in concrete implementations. If this should be kept in the base class, it should only run if the POST body is actually JSON.
|
priority
|
request getpost parses json without respecting content type with request getpost parses the post body as json if header accept application json is set however accept does not mean what is being sent so any other content type in the post body is rejected with a syntax error putting such a fundamental presumption in a base class is bad and we should only parse json unconditionally in concrete implementations if this should be kept in the base class it should only run if the post body is actually json
| 1
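The fix this report asks for — parse the body as JSON only when the request actually declares a JSON body — can be sketched in Python (a hypothetical `parse_post_body` helper, not the Icinga Web 2 code, which is PHP):

```python
import json

def parse_post_body(headers: dict, body: str):
    """Parse the POST body as JSON only when the Content-Type header says
    the body *is* JSON.  The Accept header describes what the client wants
    back, not what it is sending, so it must not drive body parsing."""
    content_type = headers.get("Content-Type", "")
    # Strip parameters such as "; charset=utf-8" before comparing.
    if content_type.split(";")[0].strip().lower() == "application/json":
        return json.loads(body)
    return body  # leave any other content type untouched
```

With this rule, a form-encoded body sent alongside `Accept: application/json` is no longer rejected with a JSON syntax error.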
|
185,807
| 6,730,688,302
|
IssuesEvent
|
2017-10-18 02:39:43
|
Jguer/yay
|
https://api.github.com/repos/Jguer/yay
|
closed
|
Yay does not respect 'ignore' list set in pacman.conf
|
High Priority
|
Yay should parse and respect 'ignore' lists in pacman.conf - users put those package names in there for a reason, so yay should not try to update something that the user clearly does not want to have updated.
|
1.0
|
Yay does not respect 'ignore' list set in pacman.conf - Yay should parse and respect 'ignore' lists in pacman.conf - users put those package names in there for a reason, so yay should not try to update something that the user clearly does not want to have updated.
|
priority
|
yay does not respect ignore list set in pacman conf yay should parse and respect ignore lists in pacman conf users put those package names in there for a reason so yay should not try to update something that the user clearly does not want to have updated
| 1
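The requested behavior — honoring the pacman.conf ignore list — amounts to reading `IgnorePkg` entries before deciding what to update. A minimal Python sketch (yay itself is written in Go, and a real pacman.conf also supports `IgnoreGroup` and `Include` directives, which this deliberately skips):

```python
def parse_ignored_packages(conf_text: str) -> set:
    """Collect package names from IgnorePkg lines of pacman.conf-style text."""
    ignored = set()
    for line in conf_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if line.startswith("IgnorePkg"):
            _, _, value = line.partition("=")
            ignored.update(value.split())
    return ignored

def updatable(candidates, conf_text):
    """Filter update candidates against the user's ignore list."""
    ignored = parse_ignored_packages(conf_text)
    return [pkg for pkg in candidates if pkg not in ignored]
```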
|
145,222
| 5,560,819,065
|
IssuesEvent
|
2017-03-24 20:34:55
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
opened
|
[studio] Create Site API is not conformant with spec
|
bug Priority: High
|
Please bring up to spec: http://docs.craftercms.org/en/latest/developers/projects/studio/api/site/create-site.html
To trigger 500: kill Solr and create a site.
|
1.0
|
[studio] Create Site API is not conformant with spec - Please bring up to spec: http://docs.craftercms.org/en/latest/developers/projects/studio/api/site/create-site.html
To trigger 500: kill Solr and create a site.
|
priority
|
create site api is not conformant with spec please bring up to spec to trigger kill solr and create a site
| 1
|
402,788
| 11,824,875,204
|
IssuesEvent
|
2020-03-21 09:25:48
|
coronasafe/care
|
https://api.github.com/repos/coronasafe/care
|
closed
|
UI Changes
|
High Priority
|
# Home Page
- [x] Rename hospital `SIGN UP` to `Hospital Administrator Signup`
# Staff Sign Up form
- [x] Rename `Phone Number` to `10 digit mobile number`
- [x] Replace submit button text with `Hospital Administrator Signup`
# Facility Register form (facility/create/)
- [x] Label - name should be changed to `Name of Hospital`
- [x] Label - address should be changed to `Enter Hospital Address`
- [x] Submit button text - `Click to Submit Hospital Details`
# Capacity page
- [x] Add title `Enter Your Hospital Capacity`
- [x] Add the following description
```
The Chief Minister's office requests information about the total capacity and current utilisation of Normal Beds, ICU Beds and Ventilators in your Hospital
1. Normal Beds
2. ICU Beds
3. Ventilators
```
- [x] Rename room type to `Bed Type`
- [x] Rename capacity to `Total Capacity`
- [x] Rename current_capacity to `Current Capacity Utilisation`
- [x] Replace Button text ` Save add more Bed Types`
# Doctors Page (/doctorcount/add/)
- [x] Title - `Add the number of doctors in different specialization`
- [x] Seed default values for doctor specialization, use the following
```
General Medicine, Pulmonology, Critical Care, Paediatrics
```
# Facility show
- [x] Rename `update` button text to `Edit`
# Facility Index
- [x] Add title - The Chief Minister's office requests *Live Data*
- [x] List the facility as a big card as it's going to be mostly one facility per user.
|
1.0
|
UI Changes - # Home Page
- [x] Rename hospital `SIGN UP` to `Hospital Administrator Signup`
# Staff Sign Up form
- [x] Rename `Phone Number` to `10 digit mobile number`
- [x] Replace submit button text with `Hospital Administrator Signup`
# Facility Register form (facility/create/)
- [x] Label - name should be changed to `Name of Hospital`
- [x] Label - address should be changed to `Enter Hospital Address`
- [x] Submit button text - `Click to Submit Hospital Details`
# Capacity page
- [x] Add title `Enter Your Hospital Capacity`
- [x] Add the following description
```
The Chief Minister's office requests information about the total capacity and current utilisation of Normal Beds, ICU Beds and Ventilators in your Hospital
1. Normal Beds
2. ICU Beds
3. Ventilators
```
- [x] Rename room type to `Bed Type`
- [x] Rename capacity to `Total Capacity`
- [x] Rename current_capacity to `Current Capacity Utilisation`
- [x] Replace Button text ` Save add more Bed Types`
# Doctors Page (/doctorcount/add/)
- [x] Title - `Add the number of doctors in different specialization`
- [x] Seed default values for doctor specialization, use the following
```
General Medicine, Pulmonology, Critical Care, Paediatrics
```
# Facility show
- [x] Rename `update` button text to `Edit`
# Facility Index
- [x] Add title - The Chief Minister's office requests *Live Data*
- [x] List the facility as a big card as it's going to be mostly one facility per user.
|
priority
|
ui changes home page rename hosptial sign up to hospital administrator signup staff sign up form rename phone number to digit mobile number replace submit button text with hosital administrator signup facility register form facility create label name should be changed to name of hospital label address should be changed to enter hospital address submit button text click to submit hospital details capacity page add title enter your hospital capacity add the following description the chief ministers s office request to know imformation about total capacity and current utlisation of normal beds icu beds and ventilators in your hospital normal beds icu beds ventilators rename room type to bed type rename capacity to total capacity rename current capacity tp current capacity utilisation replace button text save add more bed types doctors page doctorcount add title add the number of doctors in different specialization seed default values for doctor specialization use the folllowing general medicine pulmonology critical care paediatrics facility show rename update button text to edit facility index add title the chief ministers s office request to know live data list the facility as a big card as its going to be mostly one facilty per user
| 1
|
770,584
| 27,046,467,964
|
IssuesEvent
|
2023-02-13 10:07:44
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
[Bug]: Incorrect completion items provided in statement context
|
Type/Bug Priority/High Team/LanguageServer Points/3 Area/Completion
|
### Description
Consider the following test config: [completion/statement_context/config/assignment_stmt_ctx_config9.json].
As suggested in the comment https://github.com/ballerina-platform/ballerina-lang/pull/38596#discussion_r1086425859, the correct completion item should be provided.
### Steps to Reproduce
_No response_
### Affected Version(s)
_No response_
### OS, DB, other environment details and versions
_No response_
### Related area
-> Compilation
### Related issue(s) (optional)
_No response_
### Suggested label(s) (optional)
_No response_
### Suggested assignee(s) (optional)
_No response_
|
1.0
|
[Bug]: Incorrect completion items provided in statement context - ### Description
Consider the following test config: [completion/statement_context/config/assignment_stmt_ctx_config9.json].
As suggested in the comment https://github.com/ballerina-platform/ballerina-lang/pull/38596#discussion_r1086425859, the correct completion item should be provided.
### Steps to Reproduce
_No response_
### Affected Version(s)
_No response_
### OS, DB, other environment details and versions
_No response_
### Related area
-> Compilation
### Related issue(s) (optional)
_No response_
### Suggested label(s) (optional)
_No response_
### Suggested assignee(s) (optional)
_No response_
|
priority
|
incorrect completion items provided in statement context description consider following as suggested by the comment correct completion item should be provided steps to reproduce no response affected version s no response os db other environment details and versions no response related area compilation related issue s optional no response suggested label s optional no response suggested assignee s optional no response
| 1
|
808,537
| 30,086,464,208
|
IssuesEvent
|
2023-06-29 09:00:26
|
openwashdata/book
|
https://api.github.com/repos/openwashdata/book
|
closed
|
Prepare openwashdata R package with functions that support a specific R data package workflow
|
priority: high
|
- [x] repo
- [x] readme
- [x] R folder
- [x] scripts folder
|
1.0
|
Prepare openwashdata R package with functions that support a specific R data package workflow - - [x] repo
- [x] readme
- [x] R folder
- [x] scripts folder
|
priority
|
prepare openwashdata r package with functions that support a specific r data package workflow repo readme r folder scripts folder
| 1
|
424,750
| 12,322,847,292
|
IssuesEvent
|
2020-05-13 11:05:06
|
wso2/docs-is
|
https://api.github.com/repos/wso2/docs-is
|
opened
|
Switching between associated user accounts
|
Priority/High enhancement
|
**Description:**
[User account association](https://is.docs.wso2.com/en/next/learn/associating-user-accounts/) can be managed using the [Association REST APIs](https://is.docs.wso2.com/en/next/develop/association-rest-api/) in the Identity Server. The server also allows switching between associated accounts using a token obtained via OIDC flow, in a grant type called `account_switch`.
The latter is done with an API call as mentioned in [this pull request description](https://github.com/wso2-extensions/identity-user-account-association/pull/30). Once the token is obtained for the associated user, the relying party can now act on behalf of the associated user.
Association APIs and the `account_switch` grant type can be utilized in a way that an application can have the capability of switching between associated users. This can be done as explained in the following example.
There is an application called `pickup-dispatcher` which uses WSO2 Identity Server as its authorization server. A user named `John` logs in to this application. Besides, `John` has another account in the Identity Server named as `Smith`, and he has associated both `John` and `Smith` user accounts via the Identity Server's `user-portal` beforehand.
Now he wants to switch to his associated user account `Smith` in the `pickup-dispatcher`, but without logging in again.
1. `pickup-dispatcher` then invokes the account association APIs on behalf of `John` to get his associated user accounts, and presents them to the user `John`.
2. `John` selects the account `Smith`.
3. `pickup-dispatcher` calls the Identity Server to obtain an access token for the account `Smith` via the `account_switch` grant type, with the already available active access token for the account `John`.
4. The server validates the request and returns an access token which has the user `Smith` as its authorized user.
**We need to add the above content with the mentioned scenario as a sample, to the location: [https://is.docs.wso2.com/en/next/learn/associating-user-accounts/](https://is.docs.wso2.com/en/next/learn/associating-user-accounts/).**
|
1.0
|
Switching between associated user accounts - **Description:**
[User account association](https://is.docs.wso2.com/en/next/learn/associating-user-accounts/) can be managed using the [Association REST APIs](https://is.docs.wso2.com/en/next/develop/association-rest-api/) in the Identity Server. The server also allows switching between associated accounts using a token obtained via OIDC flow, in a grant type called `account_switch`.
The latter is done with an API call as mentioned in [this pull request description](https://github.com/wso2-extensions/identity-user-account-association/pull/30). Once the token is obtained for the associated user, the relying party can now act on behalf of the associated user.
Association APIs and the `account_switch` grant type can be utilized in a way that an application can have the capability of switching between associated users. This can be done as explained in the following example.
There is an application called `pickup-dispatcher` which uses WSO2 Identity Server as its authorization server. A user named `John` logs in to this application. Besides, `John` has another account in the Identity Server named as `Smith`, and he has associated both `John` and `Smith` user accounts via the Identity Server's `user-portal` beforehand.
Now he wants to switch to his associated user account `Smith` in the `pickup-dispatcher`, but without logging in again.
1. `pickup-dispatcher` then invokes the account association APIs on behalf of `John` to get his associated user accounts, and presents them to the user `John`.
2. `John` selects the account `Smith`.
3. `pickup-dispatcher` calls the Identity Server to obtain an access token for the account `Smith` via the `account_switch` grant type, with the already available active access token for the account `John`.
4. The server validates the request and returns an access token which has the user `Smith` as its authorized user.
**We need to add the above content with the mentioned scenario as a sample, to the location: [https://is.docs.wso2.com/en/next/learn/associating-user-accounts/](https://is.docs.wso2.com/en/next/learn/associating-user-accounts/).**
|
priority
|
switching between associated user accounts description can be managed using the in the identity server the server also allows switching between associated accounts using a token obtained via oidc flow in a grant type called account switch the latter is done with an api call as mentioned in once the token is obtained for the associated user the relying party can now act on behalf of the associated user association apis and the account switch grant type can be utilized in a way that an application can have the capability of switching between associated users this can be done as explained in the following example there is an application called pickup dispatcher which uses identity server as its authorization server a user named john logs in to this application besides john has another account in the identity server named as smith and he has associated both john and smith user accounts via the identity server s user portal beforehand now he wants to switch to his associated user account smith in the pickup dispatcher but without logging in again pickup dispatcher then invoke account association apis on behalf of the john to get his associated user accounts and provide that to the user john john selects the account smith pickup dispatcher calls the identity server to obtain an access token for the account smith via the account switch grant type with the already available active access token for the account john the server validates and returns an access token which has the user smith as it s authorized user we need to add the above content with the mentioned scenario as a sample to the location
| 1
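The account-switch flow described above can be modeled with plain data structures. This is a toy simulation of the authorization rule only (the dict-backed token and association stores are invented for illustration; WSO2 Identity Server implements this server-side with real token grants):

```python
def switch_account(tokens: dict, associations: dict,
                   access_token: str, target_user: str) -> str:
    """Issue a token for target_user only if the presented token is active
    and its owner has previously associated the target account."""
    owner = tokens.get(access_token)
    if owner is None:
        raise PermissionError("invalid or inactive access token")
    if target_user not in associations.get(owner, set()):
        raise PermissionError("accounts are not associated")
    new_token = "token-for-" + target_user  # stand-in for a real opaque token
    tokens[new_token] = target_user
    return new_token
```

In the example scenario, presenting John's active token with Smith as the target succeeds because the association was created beforehand; any other target is refused.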
|
175,494
| 6,551,489,022
|
IssuesEvent
|
2017-09-05 14:55:08
|
htmlacademy/yomoyo
|
https://api.github.com/repos/htmlacademy/yomoyo
|
closed
|
Additional criterion "Optimized code"
|
category: intensive level: high-priority type: enhancement
|
While reviewing a project I noticed that the student very often copy-pastes code inside conditionals; here is what he had:
```javascript
if (this.level >= 10) {
this.timer.stopTimer();
this.timer.stopTimeout();
this.setResult();
removeTimer();
} else if (this.lives <= 0) {
this.timer.stopTimer();
this.timer.stopTimeout();
app.showResultFail();
removeTimer();
}
```
In my comments on criterion Д8 I wrote to him about this, because there was a lot of such code in the project, and suggested an optimized variant:
```javascript
this.timer.stopTimer();
this.timer.stopTimeout();
removeTimer();
if (this.level >= 10) {
this.setResult();
} else if (this.lives <= 0) {
app.showResultFail();
}
```
I immediately got a message from the curator saying that this remark has nothing to do with that criterion, and that I must tick the box that the criterion is met.
Fair enough, but I still wanted to write this down somewhere.
I propose adding a new criterion, `Д9. Optimized code`, so that students learn to write more readable code.
**PS** I also propose banning magic numbers here: the student hard-coded the number 10 (the number of games) instead of using the array's `length` field, so if the server returns 9 games his application will never finish.
**PSS** It would be tolerable in a constant used everywhere, but it was smeared across the codebase.
|
1.0
|
Additional criterion "Optimized code" - While reviewing a project I noticed that the student very often copy-pastes code inside conditionals; here is what he had:
```javascript
if (this.level >= 10) {
this.timer.stopTimer();
this.timer.stopTimeout();
this.setResult();
removeTimer();
} else if (this.lives <= 0) {
this.timer.stopTimer();
this.timer.stopTimeout();
app.showResultFail();
removeTimer();
}
```
In my comments on criterion Д8 I wrote to him about this, because there was a lot of such code in the project, and suggested an optimized variant:
```javascript
this.timer.stopTimer();
this.timer.stopTimeout();
removeTimer();
if (this.level >= 10) {
this.setResult();
} else if (this.lives <= 0) {
app.showResultFail();
}
```
I immediately got a message from the curator saying that this remark has nothing to do with that criterion, and that I must tick the box that the criterion is met.
Fair enough, but I still wanted to write this down somewhere.
I propose adding a new criterion, `Д9. Optimized code`, so that students learn to write more readable code.
**PS** I also propose banning magic numbers here: the student hard-coded the number 10 (the number of games) instead of using the array's `length` field, so if the server returns 9 games his application will never finish.
**PSS** It would be tolerable in a constant used everywhere, but it was smeared across the codebase.
|
priority
|
additional criterion optimized code while reviewing a project i noticed that the student very often copy pastes code inside conditionals here is what he had javascript if this level this timer stoptimer this timer stoptimeout this setresult removetimer else if this lives this timer stoptimer this timer stoptimeout app showresultfail removetimer in my comments on criterion i wrote to him about this because there was a lot of such code in the project and suggested an optimized variant javascript this timer stoptimer this timer stoptimeout removetimer if this level this setresult else if this lives app showresultfail i immediately got a message from the curator saying that this remark has nothing to do with that criterion and that i must tick the box that the criterion is met fair enough but i still wanted to write this down somewhere i propose adding a new criterion optimized code so that students learn to write more readable code ps i also propose banning magic numbers here the student hard coded the number the number of games instead of using the length field of the array so if the server returns games his application will never finish pss it would be tolerable in a constant used everywhere but it was smeared across the codebase
| 1
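The magic-number complaint in the record above generalizes beyond JavaScript. A Python rendering of the same fix — derive the end-of-game bound from the data instead of hard-coding 10 (function and parameter names are illustrative):

```python
def game_finished(level: int, games: list) -> bool:
    """True once every fetched game has been played.  Using len(games)
    instead of a hard-coded 10 means a server that returns only 9 games
    still lets the application terminate."""
    return level >= len(games)
```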
|
1,878
| 2,521,008,955
|
IssuesEvent
|
2015-01-19 10:48:08
|
unikent/of-course
|
https://api.github.com/repos/unikent/of-course
|
closed
|
Urgent: Modules not outputting
|
Priority:high Story - In Progress
|
@msf4-unikent @eg270-unikent @lrhm-unikent
Hi All, there seems to be a problem with the v-pos modules not outputting on the UG course pages. See for example:
http://www.kent.ac.uk/courses/undergraduate/75/biological-anthropology
Please could this be looked at?
Thanks
Angela
|
1.0
|
Urgent: Modules not outputting - @msf4-unikent @eg270-unikent @lrhm-unikent
Hi All, there seems to be a problem with the v-pos modules not outputting on the UG course pages. See for example:
http://www.kent.ac.uk/courses/undergraduate/75/biological-anthropology
Please could this be looked at?
Thanks
Angela
|
priority
|
urgent modules not outputting unikent unikent lrhm unikent hi all there seems to be a problem with the v pos modules not outputting on the ug course pages see for example please could this be looked at thanks angela
| 1
|
168,958
| 6,392,642,360
|
IssuesEvent
|
2017-08-04 03:39:21
|
hurtom/toloka
|
https://api.github.com/repos/hurtom/toloka
|
closed
|
Migration of bb_forum_prune
|
db high priority needs review question
|
Relates to #7/#46 - in the old version the `bb_forum_prune` table exists, in the new one it does not
|
1.0
|
Migration of bb_forum_prune - Relates to #7/#46 - in the old version the `bb_forum_prune` table exists, in the new one it does not
|
priority
|
migration of bb forum prune relates to in the old version the bb forum prune table exists in the new one it does not
| 1
|
623,767
| 19,678,457,545
|
IssuesEvent
|
2022-01-11 14:39:56
|
airbytehq/airbyte
|
https://api.github.com/repos/airbytehq/airbyte
|
closed
|
Performance issues in new Mongo Source Connector
|
type/bug area/connectors priority/high
|
## Enviroment
- **Airbyte version**: 0.29.21-alpha
- **OS Version / Instance**: AWS EC2
- **Deployment**: Docker
- **Source Connector and version**: Mongodb-v2 0.1.1
- **Destination Connector and version**: Redshift 0.3.14
- **Severity**: High
- **Step where error happened**: New Connector + Sync
## Current Behavior
When setting up a new source with this connector, schema discovery takes close to 50 minutes, and appears to be scanning entire collections from source Mongo database
When syncing records, an incremental load of 1 stream/collection < 10k records is taking > 1 hour. Compared to the old Mongo connector, I can refresh 20 streams/collections in around 8 minutes. The connector seems to be scanning the entire collection in a much different manner than the old Ruby source
## Expected Behavior
Comparable performance to old connector, no 50 minute delay in retrieving records
## Logs
Attaching logs from initial full sync. Note the 50 minute gap before records are returned
Also including logs from the next incremental sync. Same gap
<details>
<summary>LOG from initial Full Sync</summary>
```
2021-09-24 15:09:24 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 15:09:24 INFO i.a.i.d.j.c.s.S3StreamCopier(<init>):142 - {} - S3 upload part size: 10 MB
2021-09-24 15:09:25 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 15:09:25 INFO c.m.d.l.SLF4JLogger(info):71 - {} - Opened connection [connectionId{localValue:3, serverValue:44254}] to sufferfestproduction-shard-00-03-naqnz.mongodb.net:27017
2021-09-24 15:09:25 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 15:09:25 INFO a.m.s.StreamTransferManager(getMultiPartOutputStreams):329 - {} - Initiated multipart upload to wahoo-rivery/8e240244-550f-4dfb-b8f1-e0fb3cafc251/parse/svl_Activity with full ID nd8i3Mpxr3A16tt81CwVaJyr_9vQSwG55mhvr3PaEYsAU90IrZZrsRYKhurCy5CiHuZSkjyE49BWFC92Y1pNI8k0GxFR_DtpbvLnO4ABkDN3olwES2URygz1KzaSqJqw
2021-09-24 15:09:25 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 15:09:25 INFO i.a.i.d.b.BufferedStreamConsumer(startTracked):143 - {} - class io.airbyte.integrations.destination.buffered_stream_consumer.BufferedStreamConsumer started.
2021-09-24 15:59:39 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):223 - Records read: 1000
```
</details>
<details>
<summary>LOG from incremental sync</summary>
```
2021-09-24 18:02:03 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 18:02:03 INFO i.a.i.d.b.BufferedStreamConsumer(startTracked):143 - {} - class io.airbyte.integrations.destination.buffered_stream_consumer.BufferedStreamConsumer started.
2021-09-24 18:52:01 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 18:52:01 INFO i.a.i.s.r.StateDecoratingIterator(computeNext):80 - {} - State Report: stream name: AirbyteStreamNameNamespacePair{name='Activity', namespace='parse'}, original cursor field: _updated_at, original cursor 2021-09-24T17:14:22Z, cursor field: _updated_at, new cursor: 2021-09-24T18:51:11Z
2021-09-24 18:52:01 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 18:52:01 INFO i.a.i.s.r.AbstractDbSource(lambda$read$2):141 - {} - Closing database connection pool.
2021-09-24 18:52:01 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 18:52:01 INFO i.a.i.s.r.AbstractDbSource(lambda$read$2):143 - {} - Closed database connection pool.
2021-09-24 18:52:01 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 18:52:01 INFO i.a.i.b.IntegrationRunner(run):153 - {} - Completed integration: io.airbyte.integrations.source.mongodb.MongoDbSource
2021-09-24 18:52:01 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 18:52:01 INFO i.a.i.s.m.MongoDbSource(main):84 - {} - completed source: class io.airbyte.integrations.source.mongodb.MongoDbSource
2021-09-24 18:52:02 INFO () DefaultReplicationWorker(run):141 - Source thread complete.
```
</details>
## Steps to Reproduce
1. Setup new MongoDB Connection, Connect Redshift Destination, notice delay in schema discovery. Observe long running full scans on MongoDB instance
2. Execute full or incremental load of any stream from new Mongo connector --> destination
## Are you willing to submit a PR?
Unfortunately cannot at this time
|
1.0
|
Performance issues in new Mongo Source Connector - ## Environment
- **Airbyte version**: 0.29.21-alpha
- **OS Version / Instance**: AWS EC2
- **Deployment**: Docker
- **Source Connector and version**: Mongodb-v2 0.1.1
- **Destination Connector and version**: Redshift 0.3.14
- **Severity**: High
- **Step where error happened**: New Connector + Sync
## Current Behavior
When setting up a new source with this connector, schema discovery takes close to 50 minutes, and appears to be scanning entire collections from source Mongo database
When syncing records, an incremental load of 1 stream/collection < 10k records is taking > 1 hour. Compared to the old Mongo connector, I can refresh 20 streams/collections in around 8 minutes. The connector seems to be scanning the entire collection in a much different manner than the old Ruby source
## Expected Behavior
Comparable performance to old connector, no 50 minute delay in retrieving records
## Logs
Attaching logs from initial full sync. Note the 50 minute gap before records are returned
Also including logs from the next incremental sync. Same gap
<details>
<summary>LOG from initial Full Sync</summary>
```
2021-09-24 15:09:24 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 15:09:24 INFO i.a.i.d.j.c.s.S3StreamCopier(<init>):142 - {} - S3 upload part size: 10 MB
2021-09-24 15:09:25 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 15:09:25 INFO c.m.d.l.SLF4JLogger(info):71 - {} - Opened connection [connectionId{localValue:3, serverValue:44254}] to sufferfestproduction-shard-00-03-naqnz.mongodb.net:27017
2021-09-24 15:09:25 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 15:09:25 INFO a.m.s.StreamTransferManager(getMultiPartOutputStreams):329 - {} - Initiated multipart upload to wahoo-rivery/8e240244-550f-4dfb-b8f1-e0fb3cafc251/parse/svl_Activity with full ID nd8i3Mpxr3A16tt81CwVaJyr_9vQSwG55mhvr3PaEYsAU90IrZZrsRYKhurCy5CiHuZSkjyE49BWFC92Y1pNI8k0GxFR_DtpbvLnO4ABkDN3olwES2URygz1KzaSqJqw
2021-09-24 15:09:25 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 15:09:25 INFO i.a.i.d.b.BufferedStreamConsumer(startTracked):143 - {} - class io.airbyte.integrations.destination.buffered_stream_consumer.BufferedStreamConsumer started.
2021-09-24 15:59:39 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):223 - Records read: 1000
```
</details>
<details>
<summary>LOG from incremental sync</summary>
```
2021-09-24 18:02:03 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 18:02:03 INFO i.a.i.d.b.BufferedStreamConsumer(startTracked):143 - {} - class io.airbyte.integrations.destination.buffered_stream_consumer.BufferedStreamConsumer started.
2021-09-24 18:52:01 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 18:52:01 INFO i.a.i.s.r.StateDecoratingIterator(computeNext):80 - {} - State Report: stream name: AirbyteStreamNameNamespacePair{name='Activity', namespace='parse'}, original cursor field: _updated_at, original cursor 2021-09-24T17:14:22Z, cursor field: _updated_at, new cursor: 2021-09-24T18:51:11Z
2021-09-24 18:52:01 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 18:52:01 INFO i.a.i.s.r.AbstractDbSource(lambda$read$2):141 - {} - Closing database connection pool.
2021-09-24 18:52:01 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 18:52:01 INFO i.a.i.s.r.AbstractDbSource(lambda$read$2):143 - {} - Closed database connection pool.
2021-09-24 18:52:01 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 18:52:01 INFO i.a.i.b.IntegrationRunner(run):153 - {} - Completed integration: io.airbyte.integrations.source.mongodb.MongoDbSource
2021-09-24 18:52:01 INFO () DefaultAirbyteStreamFactory(lambda$create$0):73 - 2021-09-24 18:52:01 INFO i.a.i.s.m.MongoDbSource(main):84 - {} - completed source: class io.airbyte.integrations.source.mongodb.MongoDbSource
2021-09-24 18:52:02 INFO () DefaultReplicationWorker(run):141 - Source thread complete.
```
</details>
## Steps to Reproduce
1. Setup new MongoDB Connection, Connect Redshift Destination, notice delay in schema discovery. Observe long running full scans on MongoDB instance
2. Execute full or incremental load of any stream from new Mongo connector --> destination
## Are you willing to submit a PR?
Unfortunately cannot at this time
|
priority
|
performance issues in new mongo source connector enviroment airbyte version alpha os version instance aws deployment docker source connector and version mongodb destination connector and version redshift severity high step where error happened new connector sync current behavior when setting up a new source with this connector schema discovery takes close to minutes and appears to be scanning entire collections from source mongo database when syncing records an incremental load of stream collection hour comparing to the old mongo connector i can refresh streams collections in around minutes the connector seems to be scanning the entire collection in a much different manner than the old ruby source expected behavior comparable performance to old connector no minute delay in retrieving records logs attaching logs from initial full sync note the minute gap before records are returned also including logs from the next incremental sync same gap log from initial full sync info defaultairbytestreamfactory lambda create info i a i d j c s upload part size mb info defaultairbytestreamfactory lambda create info c m d l info opened connection to sufferfestproduction shard naqnz mongodb net info defaultairbytestreamfactory lambda create info a m s streamtransfermanager getmultipartoutputstreams initiated multipart upload to wahoo rivery parse svl activity with full id info defaultairbytestreamfactory lambda create info i a i d b bufferedstreamconsumer starttracked class io airbyte integrations destination buffered stream consumer bufferedstreamconsumer started info defaultreplicationworker lambda getreplicationrunnable records read log from incremental sync info defaultairbytestreamfactory lambda create info i a i d b bufferedstreamconsumer starttracked class io airbyte integrations destination buffered stream consumer bufferedstreamconsumer started info defaultairbytestreamfactory lambda create info i a i s r statedecoratingiterator computenext state report stream name 
airbytestreamnamenamespacepair name activity namespace parse original cursor field updated at original cursor cursor field updated at new cursor info defaultairbytestreamfactory lambda create info i a i s r abstractdbsource lambda read closing database connection pool info defaultairbytestreamfactory lambda create info i a i s r abstractdbsource lambda read closed database connection pool info defaultairbytestreamfactory lambda create info i a i b integrationrunner run completed integration io airbyte integrations source mongodb mongodbsource info defaultairbytestreamfactory lambda create info i a i s m mongodbsource main completed source class io airbyte integrations source mongodb mongodbsource info defaultreplicationworker run source thread complete steps to reproduce setup new mongodb connection connect redshift destination notice delay in schema discovery observe long running full scans on mongodb instance execute full or incremental load of any stream from new mongo connector destination are you willing to submit a pr unfortunately cannot at this time
| 1
|
342,432
| 10,316,969,080
|
IssuesEvent
|
2019-08-30 11:26:32
|
zeit/next.js
|
https://api.github.com/repos/zeit/next.js
|
closed
|
9.0.5 Dynamic imports not extracted to chunks
|
Type: Needs Investigation priority: high
|
# Bug report
## Describe the bug
After upgrading from `9.0.4` to `9.0.5`, `dynamic(() => import())` are now all merged into commons.js, and no chunks are exported. Before it would create a new chunk for each.
It only happens when building for production with `target: serverless`. Running in development, the chunks are still generated correctly
## Example
I've created a simple [Codesandbox](https://codesandbox.io/embed/next-905-dynamic-ybitp) that just adds a single `import()`. It correctly chunks and loads the chunk on `9.0.4`.
- **9.0.4**: https://csb-ybitp-7585dpimf.now.sh/
- **9.0.5**: https://csb-ybitp-o12p4pxlm.now.sh/
See the Network Requests.
## Expected behavior
When using the dynamic imports, they should be extracted into chunks by Webpack.
|
1.0
|
9.0.5 Dynamic imports not extracted to chunks - # Bug report
## Describe the bug
After upgrading from `9.0.4` to `9.0.5`, `dynamic(() => import())` are now all merged into commons.js, and no chunks are exported. Before it would create a new chunk for each.
It only happens when building for production with `target: serverless`. Running in development, the chunks are still generated correctly
## Example
I've created a simple [Codesandbox](https://codesandbox.io/embed/next-905-dynamic-ybitp) that just adds a single `import()`. It correctly chunks and loads the chunk on `9.0.4`.
- **9.0.4**: https://csb-ybitp-7585dpimf.now.sh/
- **9.0.5**: https://csb-ybitp-o12p4pxlm.now.sh/
See the Network Requests.
## Expected behavior
When using the dynamic imports, they should be extracted into chunks by Webpack.
|
priority
|
dynamic imports not extracted to chunks bug report describe the bug after upgrading from to dynamic import are now all merged into commons js and no chunks are exported before it would create a new chunk for each it only happens when building for prodution with target serverless running in development the chunks are still generated correctly example i ve created a simple that just adds a single import it correctly chunks and loads the chunk on see the network requests expected behavior when using the dynamic imports they should be extracted into chunks by webpack
| 1
|
79,527
| 3,536,130,962
|
IssuesEvent
|
2016-01-17 01:37:59
|
ecnivo/Flow
|
https://api.github.com/repos/ecnivo/Flow
|
closed
|
Owner can reduce his own access level without making anyone else the owner
|
bug highpriority
|
Then the client explodes.
|
1.0
|
Owner can reduce his own access level without making anyone else the owner - Then the client explodes.
|
priority
|
owner can reduce his own access level without making anyone else the owner then the client explodes
| 1
|
96,662
| 3,971,680,320
|
IssuesEvent
|
2016-05-04 12:56:59
|
DarkstarProject/darkstar
|
https://api.github.com/repos/DarkstarProject/darkstar
|
closed
|
Ranged monsters periodically attack players after respawning, even though they have not been aggroed/engaged.
|
High Priority
|
<!-- remove space and mark with 'x' between [] -->
**_I have:_**
- [x] searched existing issues (http://github.com/darkstarproject/darkstar/issues/) to see if the issue I am posting has already been addressed or opened by another contributor
- [x] checked the commit log to see if my issue has been resolved since my server was last updated
<!-- Issues will be closed without being looked into if the following information is missing (unless its not applicable). -->
**_Client Version_** (type `/ver` in game) **:**
30160329_1
**_Server Version_** (type `@revision` in game) **:**
bdfb082
**_Source Branch_** (master/stable) **:**
master
**_Additional Information_** (Steps to reproduce/Expected behavior) **:**
This seems to still be an issue, but only if the mob in question has been killed - then respawns; then the random shooting behavior continues.
|
1.0
|
Ranged monsters periodically attack players after respawning, even though they have not been aggroed/engaged. - <!-- remove space and mark with 'x' between [] -->
**_I have:_**
- [x] searched existing issues (http://github.com/darkstarproject/darkstar/issues/) to see if the issue I am posting has already been addressed or opened by another contributor
- [x] checked the commit log to see if my issue has been resolved since my server was last updated
<!-- Issues will be closed without being looked into if the following information is missing (unless its not applicable). -->
**_Client Version_** (type `/ver` in game) **:**
30160329_1
**_Server Version_** (type `@revision` in game) **:**
bdfb082
**_Source Branch_** (master/stable) **:**
master
**_Additional Information_** (Steps to reproduce/Expected behavior) **:**
This seems to still be an issue, but only if the mob in question has been killed - then respawns; then the random shooting behavior continues.
|
priority
|
ranged monsters periodically attack players after respawning even though they have not been aggroed engaged i have searched existing issues to see if the issue i am posting has already been addressed or opened by another contributor checked the commit log to see if my issue has been resolved since my server was last updated client version type ver in game server version type revision in game source branch master stable master additional information steps to reproduce expected behavior this seems to still be an issue but only if the mob in question has been killed then respawns then the random shooting behavior continues
| 1
|
324,796
| 9,912,607,475
|
IssuesEvent
|
2019-06-28 09:25:57
|
huridocs/uwazi
|
https://api.github.com/repos/huridocs/uwazi
|
closed
|
Articles violated filter tweaks
|
Bug Priority: High Status: Sprint
|
In CEJIL some processes with articles violated are not appearing in filters:
- Ie. https://sidh.cejil.org/en/entity/ev0qq23os7c28k1puelllvunmi?searchTerm=atenco for Articles of cBdoPará
- Can't check the groups, only the items inside
- Remove "missing" option
|
1.0
|
Articles violated filter tweaks - In CEJIL some processes with articles violated are not appearing in filters:
- Ie. https://sidh.cejil.org/en/entity/ev0qq23os7c28k1puelllvunmi?searchTerm=atenco for Articles of cBdoPará
- Can't check the groups, only the items inside
- Remove "missing" option
|
priority
|
articles violated filter tweaks in cejil some processes with articles violated are not appearing in filters ie for articles of cbdopará can t check the groups only the items inside remove missing option
| 1
|
192,113
| 6,846,823,885
|
IssuesEvent
|
2017-11-13 13:32:34
|
dmwm/WMCore
|
https://api.github.com/repos/dmwm/WMCore
|
closed
|
Ignored Phedex subscriptions to tape
|
High Priority
|
We are running with some issues with Phedex subscriptions since T0 2.1.0. On our configuration file we have (for SingleMuon PD):
do_reco = True,
write_reco = False,
raw_to_disk = True,
write_dqm = True,
tape_node = "T1_IT_CNAF_MSS",
disk_node = "T1_IT_CNAF_Disk",
But on Phedex subscriptions for "/SingleMuon/Run2017F-v1/RAW" we only have a subscription to T1_IT_CNAF_Disk[1]. Checking DBSBUFFER_DATASET_SUBSCRIPTION table, we found that there is only T1_IT_CNAF_Disk too.
I'll keep pending if you need more information about this.
[1]
https://cmsweb.cern.ch/phedex/prod/Request::View?request=1130093
|
1.0
|
Ignored Phedex subscriptions to tape - We are running with some issues with Phedex subscriptions since T0 2.1.0. On our configuration file we have (for SingleMuon PD):
do_reco = True,
write_reco = False,
raw_to_disk = True,
write_dqm = True,
tape_node = "T1_IT_CNAF_MSS",
disk_node = "T1_IT_CNAF_Disk",
But on Phedex subscriptions for "/SingleMuon/Run2017F-v1/RAW" we only have a subscription to T1_IT_CNAF_Disk[1]. Checking DBSBUFFER_DATASET_SUBSCRIPTION table, we found that there is only T1_IT_CNAF_Disk too.
I'll keep pending if you need more information about this.
[1]
https://cmsweb.cern.ch/phedex/prod/Request::View?request=1130093
|
priority
|
ignored phedex subscriptions to tape we are running with some issues with phedex subscriptions since on our configuration file we have for singlemuon pd do reco true write reco false raw to disk true write dqm true tape node it cnaf mss disk node it cnaf disk but on phedex subscriptions for singlemuon raw we only have a subscription to it cnaf disk checking dbsbuffer dataset subscription table we found that there is only it cnaf disk too i ll keep pending if you need more information about this
| 1
|
600,892
| 18,361,332,474
|
IssuesEvent
|
2021-10-09 08:56:04
|
AY2122S1-CS2103T-W10-4/tp
|
https://api.github.com/repos/AY2122S1-CS2103T-W10-4/tp
|
opened
|
Delete contacts by tag
|
type.Story priority.High type.Enhancement
|
As a user with changing plans, I can delete contacts by their tag so my plans can be updated.
|
1.0
|
Delete contacts by tag - As a user with changing plans, I can delete contacts by their tag so my plans can be updated.
|
priority
|
delete contacts by tag as a user with changing plans i can delete contacts by their tag so my plans can be updated
| 1
|
326,186
| 9,948,498,095
|
IssuesEvent
|
2019-07-04 09:04:01
|
prisma/specs
|
https://api.github.com/repos/prisma/specs
|
closed
|
Move over existing RFCs
|
priority/high
|
- [x] PhotonGo https://github.com/prisma/rfcs/pull/20
- [x] Prisma Schema https://github.com/prisma/rfcs/pull/19
- [x] Migrations https://github.com/prisma/rfcs/pull/10
- [x] Generators https://github.com/prisma/rfcs/pull/4
- [x] Aggregations https://github.com/prisma/rfcs/pull/6
- [x] Expression https://github.com/prisma/rfcs/pull/3
|
1.0
|
Move over existing RFCs - - [x] PhotonGo https://github.com/prisma/rfcs/pull/20
- [x] Prisma Schema https://github.com/prisma/rfcs/pull/19
- [x] Migrations https://github.com/prisma/rfcs/pull/10
- [x] Generators https://github.com/prisma/rfcs/pull/4
- [x] Aggregations https://github.com/prisma/rfcs/pull/6
- [x] Expression https://github.com/prisma/rfcs/pull/3
|
priority
|
move over existing rfcs photongo prisma schema migrations generators aggregations expression
| 1
|
221,535
| 7,389,562,847
|
IssuesEvent
|
2018-03-16 09:11:28
|
Wozza365/GameDevelopment
|
https://api.github.com/repos/Wozza365/GameDevelopment
|
opened
|
Valve Opening Causes Door in Future to Disappear
|
enhancement high priority
|
When the valve placed in the hidden room is activated, the next door for progression must "rot" aka disappear.
The transition needs to be animated
Sound for rotting of the door needs to be attached.
|
1.0
|
Valve Opening Causes Door in Future to Disappear - When the valve placed in the hidden room is activated, the next door for progression must "rot" aka disappear.
The transition needs to be animated
Sound for rotting of the door needs to be attached.
|
priority
|
valve opening causes door in future to disappear when the valve placed in the hidden room is activated the next door for progression must rot aka disappear the transition needs to be animated sound for rotting of the door needs to be attached
| 1
|
595,215
| 18,061,950,961
|
IssuesEvent
|
2021-09-20 14:49:47
|
medialab/portic-storymaps-2021
|
https://api.github.com/repos/medialab/portic-storymaps-2021
|
closed
|
Static rendering bug : include visualizations highlights in atlas view (links from the home ?)
|
bug priority : high needs verification
|
https://medialab.github.io/portic-storymaps-2021/fr/atlas/intro-ports
|
1.0
|
Static rendering bug : include visualizations highlights in atlas view (links from the home ?) - https://medialab.github.io/portic-storymaps-2021/fr/atlas/intro-ports
|
priority
|
static rendering bug include visualizations highlights in atlas view links from the home
| 1
|
786,387
| 27,644,570,850
|
IssuesEvent
|
2023-03-10 21:26:21
|
Tau-ri-Dev/JSGMod-1.12.2
|
https://api.github.com/repos/Tau-ri-Dev/JSGMod-1.12.2
|
closed
|
Pegasus Gate Dialing Sound
|
Bug/Issue High priority Visual bug Confirmed
|
Describe Issue:
When dialing a Pegasus gate, if it gets an incoming wormhole while dialing, the sound doesn't stop.
Steps To Reproduce:
1. Dial Pegasus gate to random address
2. Use another gate to dial Pegasus gate while it's dialing and activate.
┆Issue is synchronized with this [Trello card](https://trello.com/c/KcInjeJK) by [Unito](https://www.unito.io)
|
1.0
|
Pegasus Gate Dialing Sound - Describe Issue:
When dialing a Pegasus gate, if it gets an incoming wormhole while dialing, the sound doesn't stop.
Steps To Reproduce:
1. Dial Pegasus gate to random address
2. Use another gate to dial Pegasus gate while it's dialing and activate.
┆Issue is synchronized with this [Trello card](https://trello.com/c/KcInjeJK) by [Unito](https://www.unito.io)
|
priority
|
pegasus gate dialing sound describe issue when dialing a pegasus gate if it gets a incoming wormhole while dialing the sound doesn t stop steps to reproduce dial pegasus gate to random address use another gate to dial pegasus gate while it s dialing and activate ┆issue is synchronized with this by
| 1
|
487,436
| 14,046,491,249
|
IssuesEvent
|
2020-11-02 04:55:06
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
User-Store Management REST API, test connection endpoint returns incorrect STATUS codes
|
Affected/5.11.0-Alpha Component/Identity REST APIs Component/User Store Mgt Priority/High Severity/Critical bug
|
**Describe the issue:**
**How to reproduce:**
1. When the test connection endpoint is invoked for non-existing H2 databases, the connection is always reported as true
Sample request
```
curl --location --request POST 'https://localhost:9443/t/carbon.super/api/server/v1/userstores/test-connection' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer bea59347-ac43-331d-8d57-c438f1f25a61' \
--header 'Cookie: opbs=32e5034a-0efc-471c-987c-6f7c1d662095; commonAuthId=58cc2302-614c-4f5e-a574-02844441dd2b' \
--data-raw '{
"driverName": "org.h2.Driver",
"connectionURL": "jdbc:h2:./repository/database/non-existing",
"username": "wso2automation",
"connectionPassword": "wso2automation"
}'
```
Response: **200 OK**
```
{
"connection": true
}
```
**Strangely, observed that when the `"username": "wso2automation"` and ` "connectionPassword": "wso2automation"` combination is used, response is always true.**
2. When the test connection endpoint is invoked for MySQL with incorrect credentials, 500 Server error is returned
Sample Request:
```
curl --location --request POST 'https://localhost:9443/t/carbon.super/api/server/v1/userstores/test-connection' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer bea59347-ac43-331d-8d57-c438f1f25a61' \
--header 'Cookie: opbs=32e5034a-0efc-471c-987c-6f7c1d662095; commonAuthId=58cc2302-614c-4f5e-a574-02844441dd2b' \
--data-raw '{
"driverName": "com.mysql.jdbc.Driver",
"connectionURL": "jdbc:mysql://localhost:3306/test?useSSL=false",
"username": "root",
"connectionPassword": "incorrectPWD"
}'
```
Response: **500 Internal Server Error**
```
{
"code": "SUS-65008",
"message": "Unable to check RDBMS connection Health",
"description": "Server Encountered an error while checking the data source connection.",
"traceId": "e95c4d5c-17f6-41cc-8180-47fa422c9b0a"
}
```
2. When the test connection endpoint is invoked for MySQL with correct credentials but without `useSSL=false`, 500 Server error is returned
Sample Request:
```
curl --location --request POST 'https://localhost:9443/t/carbon.super/api/server/v1/userstores/test-connection' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer bea59347-ac43-331d-8d57-c438f1f25a61' \
--header 'Cookie: opbs=32e5034a-0efc-471c-987c-6f7c1d662095; commonAuthId=58cc2302-614c-4f5e-a574-02844441dd2b' \
--data-raw '{
"driverName": "com.mysql.jdbc.Driver",
"connectionURL": "jdbc:mysql://localhost:3306/test",
"username": "root",
"connectionPassword": "root"
}'
```
Response: **500 Internal Server Error**
```
{
"code": "SUS-65008",
"message": "Unable to check RDBMS connection Health",
"description": "Server Encountered an error while checking the data source connection.",
"traceId": "e95c4d5c-17f6-41cc-8180-47fa422c9b0a"
}
**Expected behavior:**
1. Case # 1, connection status must be false
2. Case # 2, connection status must be false or properly replicate status with correct HTTP response
3. Case # 3, connection status must be false or properly replicate status with correct HTTP response
|
1.0
|
User-Store Management REST API, test connection endpoint returns incorrect STATUS codes - **Describe the issue:**
**How to reproduce:**
1. When the test connection endpoint is invoked for non-existing H2 databases, the connection is always reported as true
Sample request
```
curl --location --request POST 'https://localhost:9443/t/carbon.super/api/server/v1/userstores/test-connection' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer bea59347-ac43-331d-8d57-c438f1f25a61' \
--header 'Cookie: opbs=32e5034a-0efc-471c-987c-6f7c1d662095; commonAuthId=58cc2302-614c-4f5e-a574-02844441dd2b' \
--data-raw '{
"driverName": "org.h2.Driver",
"connectionURL": "jdbc:h2:./repository/database/non-existing",
"username": "wso2automation",
"connectionPassword": "wso2automation"
}'
```
Response: **200 OK**
```
{
"connection": true
}
```
**Strangely, observed that when the `"username": "wso2automation"` and ` "connectionPassword": "wso2automation"` combination is used, response is always true.**
2. When the test connection endpoint is invoked for MySQL with incorrect credentials, 500 Server error is returned
Sample Request:
```
curl --location --request POST 'https://localhost:9443/t/carbon.super/api/server/v1/userstores/test-connection' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer bea59347-ac43-331d-8d57-c438f1f25a61' \
--header 'Cookie: opbs=32e5034a-0efc-471c-987c-6f7c1d662095; commonAuthId=58cc2302-614c-4f5e-a574-02844441dd2b' \
--data-raw '{
"driverName": "com.mysql.jdbc.Driver",
"connectionURL": "jdbc:mysql://localhost:3306/test?useSSL=false",
"username": "root",
"connectionPassword": "incorrectPWD"
}'
```
Response: **500 Internal Server Error**
```
{
"code": "SUS-65008",
"message": "Unable to check RDBMS connection Health",
"description": "Server Encountered an error while checking the data source connection.",
"traceId": "e95c4d5c-17f6-41cc-8180-47fa422c9b0a"
}
```
2. When the test connection endpoint is invoked for MySQL with correct credentials but without `useSSL=false`, 500 Server error is returned
Sample Request:
```
curl --location --request POST 'https://localhost:9443/t/carbon.super/api/server/v1/userstores/test-connection' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer bea59347-ac43-331d-8d57-c438f1f25a61' \
--header 'Cookie: opbs=32e5034a-0efc-471c-987c-6f7c1d662095; commonAuthId=58cc2302-614c-4f5e-a574-02844441dd2b' \
--data-raw '{
"driverName": "com.mysql.jdbc.Driver",
"connectionURL": "jdbc:mysql://localhost:3306/test",
"username": "root",
"connectionPassword": "root"
}'
```
Response: **500 Internal Server Error**
```
{
"code": "SUS-65008",
"message": "Unable to check RDBMS connection Health",
"description": "Server Encountered an error while checking the data source connection.",
"traceId": "e95c4d5c-17f6-41cc-8180-47fa422c9b0a"
}
**Expected behavior:**
1. Case # 1, connection status must be false
2. Case # 2, connection status must be false or properly replicate status with correct HTTP response
3. Case # 3, connection status must be false or properly replicate status with correct HTTP response
|
priority
|
user store management rest api test connection endpoint returns incorrect status codes describe the issue how to reproduce when the test connection endpoint is invoked for non existing databases always connection is set to true sample request curl location request post header content type application json header authorization bearer header cookie opbs commonauthid data raw drivername org driver connectionurl jdbc repository database non existing username connectionpassword response ok connection true strangely observed that when the username and connectionpassword combination is used response is always true when the test connection endpoint is invoked for mysql with incorrect credentials server error is returned sample request curl location request post header content type application json header authorization bearer header cookie opbs commonauthid data raw drivername com mysql jdbc driver connectionurl jdbc mysql localhost test usessl false username root connectionpassword incorrectpwd response internal server error code sus message unable to check rdbms connection health description server encountered an error while checking the data source connection traceid when the test connection endpoint is invoked for mysql with correct credentials but without usessl false server error is returned sample request curl location request post header content type application json header authorization bearer header cookie opbs commonauthid data raw drivername com mysql jdbc driver connectionurl jdbc mysql localhost test username root connectionpassword root response internal server error code sus message unable to check rdbms connection health description server encountered an error while checking the data source connection traceid expected behavior case connection status must be false case connection status must be false or properly replicate status with correct http response case connection status must be false or properly replicate status with correct http response
| 1
|
580,403
| 17,243,189,181
|
IssuesEvent
|
2021-07-21 03:39:11
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Remove the module alias from detail of completion item when used in the same module
|
Area/Completion Priority/High SwanLakeDump Team/LanguageServer Type/Improvement
|
**Description:**
In the following scenario when a ReturnType resides within the same module, we do not need to specify the module part. We can only add the ReturnType in the detail of the completion item.
In the case where this comes from another module, we can specify it as `<moduleAlias>:<ReturnType>`

|
1.0
|
Remove the module alias from detail of completion item when used in the same module - **Description:**
In the following scenario when a ReturnType resides within the same module, we do not need to specify the module part. We can only add the ReturnType in the detail of the completion item.
In the case where this comes from another module, we can specify it as `<moduleAlias>:<ReturnType>`

|
priority
|
remove the module alias from detail of completion item when used in the same module description in the following scenario when a returntype resides within the same module we do not need to specify the module part we can only add the returntype in the detail of the completion item in the case where this comes from another module we can specify it as
| 1
|
494,989
| 14,269,878,721
|
IssuesEvent
|
2020-11-21 03:40:13
|
miaowware/qrm-resources
|
https://api.github.com/repos/miaowware/qrm-resources
|
opened
|
don't copy extraneous files to the deployment
|
easy priority-high
|
include a `_config.yml` at the root directory specifying theme and `exclude`s
|
1.0
|
don't copy extraneous files to the deployment - include a `_config.yml` at the root directory specifying theme and `exclude`s
|
priority
|
don t copy extraneous files to the deployment include a config yml at the root directory specifying theme and exclude s
| 1
|
743,966
| 25,921,575,837
|
IssuesEvent
|
2022-12-15 22:34:12
|
rokwire/illinois-app
|
https://api.github.com/repos/rokwire/illinois-app
|
closed
|
[FEATURE] Please add link to Video Tutorials in Campus Guide.
|
Type: Feature Request Priority: High
|
[[ @vburgett - Would you please review this and assign it to Mark when it's OK? (thanks, joe) ]]
So that we can make new videos available within the app as they are created, we would like to add new videos via Campus Guide.
Please confirm with JP, but he asked to have a landing page for Video Tutorials in Campus Guide that could be linked from Browse > App Help.
_Browse > App Help > Video Tutorial_
can point to
_Campus Guide :: Help > Video Tutorials > Video Tutorials for the Illinois App_
Make Video Tutorial plural in the menu item name.
|
1.0
|
[FEATURE] Please add link to Video Tutorials in Campus Guide. - [[ @vburgett - Would you please review this and assign it to Mark when it's OK? (thanks, joe) ]]
So that we can make new videos available within the app as they are created, we would like to add new videos via Campus Guide.
Please confirm with JP, but he asked to have a landing page for Video Tutorials in Campus Guide that could be linked from Browse > App Help.
_Browse > App Help > Video Tutorial_
can point to
_Campus Guide :: Help > Video Tutorials > Video Tutorials for the Illinois App_
Make Video Tutorial plural in the menu item name.
|
priority
|
please add link to video tutorials in campus guide so that we can make new videos available within the app as they are created we would like to add new videos via campus guide please confirm with jp but he asked to have a landing page for video tutorials in campus guide that could be linked from browse app help browse app help video tutorial can point to campus guide help video tutorials video tutorials for the illinois app make video tutorial plural in the menu item name
| 1
|
349,213
| 10,465,861,183
|
IssuesEvent
|
2019-09-21 14:28:33
|
input-output-hk/jormungandr
|
https://api.github.com/repos/input-output-hk/jormungandr
|
opened
|
Filter addresses that are not reachable
|
Priority - High jörmungandr subsys-network
|
something to add ASAP. It's important to filter addresses that are not reachable `10.0.0.0/8` or `something to add ASAP: `0.0.0.0`.
|
1.0
|
Filter addresses that are not reachable - something to add ASAP. It's important to filter addresses that are not reachable `10.0.0.0/8` or `something to add ASAP: `0.0.0.0`.
|
priority
|
filter addresses that are not reachable something to add asap it s important to filter addresses that are not reachable or something to add asap
| 1
|
631,318
| 20,150,118,477
|
IssuesEvent
|
2022-02-09 11:31:09
|
ita-social-projects/horondi_client_fe
|
https://api.github.com/repos/ita-social-projects/horondi_client_fe
|
closed
|
[Products Page. Filter] Inconsistent items are displayed when filter items by 'CATEGORY'
|
bug priority: high severity: major Functional
|
**Environment:** Windows 10 Pro 64bit, Firefox 89.0 64bit
**Reproducible:** Always
**Pre-conditions:**
Go to https://horondi-front-staging.azurewebsites.net/
Click on the appropriate category from the drop-down list at the Navigation bar (e. g. menu->backpacks->rolltop)
**Description:**
**Steps to reproduce:**
Choose 'CATEGORY' (e. g. backpacks)
**Actual result:**
Inconsistent items are displayed
**Expected result:**
The backpacks are showed
[TC_STEP#3](https://jira.softserve.academy/browse/LVHRB-214)
|
1.0
|
[Products Page. Filter] Inconsistent items are displayed when filter items by 'CATEGORY' - **Environment:** Windows 10 Pro 64bit, Firefox 89.0 64bit
**Reproducible:** Always
**Pre-conditions:**
Go to https://horondi-front-staging.azurewebsites.net/
Click on the appropriate category from the drop-down list at the Navigation bar (e. g. menu->backpacks->rolltop)
**Description:**
**Steps to reproduce:**
Choose 'CATEGORY' (e. g. backpacks)
**Actual result:**
Inconsistent items are displayed
**Expected result:**
The backpacks are showed
[TC_STEP#3](https://jira.softserve.academy/browse/LVHRB-214)
|
priority
|
inconsistent items are displayed when filter items by category environment windows pro firefox reproducible always pre conditions go to click on the appropriate category from the drop down list at the navigation bar e g menu backpacks rolltop description steps to reproduce choose category e g backpacks actual result inconsistent items are displayed expected result the backpacks are showed
| 1
|
107,163
| 4,290,421,277
|
IssuesEvent
|
2016-07-18 09:40:04
|
bedita/bedita
|
https://api.github.com/repos/bedita/bedita
|
opened
|
[API] Introduce files upload
|
Priority - High Topic - API Topic - Core Type - New Feature
|
Uploading files via REST API is missing. Any files uploaded should be linked to a multimedia (video, image, audio, ...) or a custom object type in a configurable way.
## Upload flow
1. A client makes a file upload request
```http
POST /files/:object_type/file_name.jpg
Host: example.com
Authorization: Bearer <access_token>
Accept: application/json
Content-Type: image/jpeg
Content-Length: 284
<raw image content>
```
where `object_type` is the type of BEdita object linked to that file (for example `image`). In this way we can perform specific checks per object type before accept files. For example we may check some size/extension related to a specific type.
Furthmore every object type could have a particular way to treat the file using local filesystem, Amazon S3, ...
2. The server responds with some error if something goes wrong or with the relative path of file uploaded if it succeeded, for example
```json
{
"api": "files",
"data": {
"file": "ab/cd/filename-on-filesystem.jpg"
},
"method": "post",
"params": [],
"url": "https://example.com/api/files/image/file_name.jpg"
}
```
3. the client proceeds creating the object type with file associated to it
```http
POST /objects
Host: example.com
Authorization: Bearer <access_token>
Accept: application/json
Content-Type: application/json
{
"data": {
"object_type": "image",
"title": "Image title",
"uploaded_file: "ab/cd/filename-on-filesystem.jpg"
}
}
```
4. If `uploaded_file` is not already linked to other object and the file is supported from the specified `object_type` the server will respond with a `201 Created` with the object data as it happens now (http://bedita.readthedocs.io/en/v3.7.0/endpoints/objects.html#create-an-object).
|
1.0
|
[API] Introduce files upload - Uploading files via REST API is missing. Any files uploaded should be linked to a multimedia (video, image, audio, ...) or a custom object type in a configurable way.
## Upload flow
1. A client makes a file upload request
```http
POST /files/:object_type/file_name.jpg
Host: example.com
Authorization: Bearer <access_token>
Accept: application/json
Content-Type: image/jpeg
Content-Length: 284
<raw image content>
```
where `object_type` is the type of BEdita object linked to that file (for example `image`). In this way we can perform specific checks per object type before accept files. For example we may check some size/extension related to a specific type.
Furthmore every object type could have a particular way to treat the file using local filesystem, Amazon S3, ...
2. The server responds with some error if something goes wrong or with the relative path of file uploaded if it succeeded, for example
```json
{
"api": "files",
"data": {
"file": "ab/cd/filename-on-filesystem.jpg"
},
"method": "post",
"params": [],
"url": "https://example.com/api/files/image/file_name.jpg"
}
```
3. the client proceeds creating the object type with file associated to it
```http
POST /objects
Host: example.com
Authorization: Bearer <access_token>
Accept: application/json
Content-Type: application/json
{
"data": {
"object_type": "image",
"title": "Image title",
"uploaded_file: "ab/cd/filename-on-filesystem.jpg"
}
}
```
4. If `uploaded_file` is not already linked to other object and the file is supported from the specified `object_type` the server will respond with a `201 Created` with the object data as it happens now (http://bedita.readthedocs.io/en/v3.7.0/endpoints/objects.html#create-an-object).
|
priority
|
introduce files upload uploading files via rest api is missing any files uploaded should be linked to a multimedia video image audio or a custom object type in a configurable way upload flow a client makes a file upload request http post files object type file name jpg host example com authorization bearer accept application json content type image jpeg content length where object type is the type of bedita object linked to that file for example image in this way we can perform specific checks per object type before accept files for example we may check some size extension related to a specific type furthmore every object type could have a particular way to treat the file using local filesystem amazon the server responds with some error if something goes wrong or with the relative path of file uploaded if it succeeded for example json api files data file ab cd filename on filesystem jpg method post params url the client proceeds creating the object type with file associated to it http post objects host example com authorization bearer accept application json content type application json data object type image title image title uploaded file ab cd filename on filesystem jpg if uploaded file is not already linked to other object and the file is supported from the specified object type the server will respond with a created with the object data as it happens now
| 1
|
155,479
| 5,956,086,286
|
IssuesEvent
|
2017-05-28 13:35:39
|
restlet/restlet-framework-java
|
https://api.github.com/repos/restlet/restlet-framework-java
|
closed
|
typo in agent.properties file triggers lots of logging
|
Priority: high Type: bug Version: 2.3
|
The fix for #1238 introduced a new issue due to a typo in the agent.properties file. This file now contains "{a,gentOs}" for Firefox for Windows; where the comma was likely added by accident. As a result, all requests to the server now log a warning "An invalid character was detected inside a pattern variable : null".
|
1.0
|
typo in agent.properties file triggers lots of logging - The fix for #1238 introduced a new issue due to a typo in the agent.properties file. This file now contains "{a,gentOs}" for Firefox for Windows; where the comma was likely added by accident. As a result, all requests to the server now log a warning "An invalid character was detected inside a pattern variable : null".
|
priority
|
typo in agent properties file triggers lots of logging the fix for introduced a new issue due to a typo in the agent properties file this file now contains a gentos for firefox for windows where the comma was likely added by accident as a result all requests to the server now log a warning an invalid character was detected inside a pattern variable null
| 1
|
76,922
| 3,506,069,210
|
IssuesEvent
|
2016-01-08 03:14:52
|
dcbaker/piglit
|
https://api.github.com/repos/dcbaker/piglit
|
closed
|
use junit for summary generation
|
enhancement High Priority
|
being able to import junit for summary generation would be useful.
It might be worth trying to encode additional information in the junit somewhere, maybe there's a comment tag or something that we can use?
|
1.0
|
use junit for summary generation - being able to import junit for summary generation would be useful.
It might be worth trying to encode additional information in the junit somewhere, maybe there's a comment tag or something that we can use?
|
priority
|
use junit for summary generation being able to import junit for summary generation would be useful it might be worth trying to encode additional information in the junit somewhere maybe there s a comment tag or something that we can use
| 1
|
611,450
| 18,955,551,150
|
IssuesEvent
|
2021-11-18 19:48:25
|
episphere/connectApp
|
https://api.github.com/repos/episphere/connectApp
|
opened
|
Stuck in SIB/CHILD loop when not providing a response to SIB/CHILD
|
High Priority MVP Mod 1 Skip pattern
|
I did not answer SIB (no response) and SIBCONFIRM said I have "0 undefined" siblings and then brought me through the sibling loop. I clicked back to see if this happened when I entered "0" at SIB and the loop did not happen (which is correct, so I think the loop issue only happens when the participant does not respond to SIB).
I tried duplicating this issue at CHILD and the same thing happened at CHILDCONFIRM (picture)

Instead of clicking back and entering "0" at CHILD like I did for the SIB loop, I clicked next, and then got stuck in the Child Loop. I went through 5 loops before clicking back to CHILD and entering 0, so that I could get out of the loop.
|
1.0
|
Stuck in SIB/CHILD loop when not providing a response to SIB/CHILD - I did not answer SIB (no response) and SIBCONFIRM said I have "0 undefined" siblings and then brought me through the sibling loop. I clicked back to see if this happened when I entered "0" at SIB and the loop did not happen (which is correct, so I think the loop issue only happens when the participant does not respond to SIB).
I tried duplicating this issue at CHILD and the same thing happened at CHILDCONFIRM (picture)

Instead of clicking back and entering "0" at CHILD like I did for the SIB loop, I clicked next, and then got stuck in the Child Loop. I went through 5 loops before clicking back to CHILD and entering 0, so that I could get out of the loop.
|
priority
|
stuck in sib child loop when not providing a response to sib child i did not answer sib no response and sibconfirm said i have undefined siblings and then brought me through the sibling loop i clicked back to see if this happened when i entered at sib and the loop did not happen which is correct so i think the loop issue only happens when the participant does not respond to sib i tried duplicating this issue at child and the same thing happened at childconfirm picture instead of clicking back and entering at child like i did for the sib loop i clicked next and then got stuck in the child loop i went through loops before clicking back to child and entering so that i could get out of the loop
| 1
|
547,379
| 16,042,051,228
|
IssuesEvent
|
2021-04-22 09:06:14
|
IgniteUI/igniteui-angular
|
https://api.github.com/repos/IgniteUI/igniteui-angular
|
closed
|
IgxGridToolbarHidingComponent - checkAllText and uncheckAllText set different action button
|
bug grid: toolbar grid: toolbar-hiding priority: high status: resolved version: 11.1.x version: 12.0.x
|
## Description
Describe the issue.
I added a `[checkAllText]` to the `<igx-grid-toolbar-hiding>` and instead of setting the button that selects all columns hidden, it sets the one that cleans up the selection and vice-versa for the `uncheckAllText` input.
* igniteui-angular version: 11.1.8
* browser: Chrome Version 89.0.4389.128
## Steps to reproduce
1. Step 1
create a igxGrid
2. Step 2
```<igx-grid-toolbar #toolbar>
<igx-grid-toolbar-actions>
<igx-grid-toolbar-hiding #toolbarHiding [title]="'LABELS.COLUMN_HIDDING' | translate"
[checkAllText]="'Select ALL text'" [uncheckAllText]="'UNSELECT ALL TEXT'"></igx-grid-toolbar-hiding>
</igx-grid-toolbar-actions>
</igx-grid-toolbar>
```
3. `npm start` to start the server and check if the buttons names are correct
## Result
What is the actual result after following the steps to reproduce?
button with `checkAllText` text set should trigger unselect columns
button with `uncheckAllText` text set should trigger select columns
## Expected result
What is the expected result after following the steps to reproduce?
button with `checkAllText` text set should trigger select all columns
button with `uncheckAllText` text set should trigger unselect all columns
|
1.0
|
IgxGridToolbarHidingComponent - checkAllText and uncheckAllText set different action button - ## Description
Describe the issue.
I added a `[checkAllText]` to the `<igx-grid-toolbar-hiding>` and instead of setting the button that selects all columns hidden, it sets the one that cleans up the selection and vice-versa for the `uncheckAllText` input.
* igniteui-angular version: 11.1.8
* browser: Chrome Version 89.0.4389.128
## Steps to reproduce
1. Step 1
create a igxGrid
2. Step 2
```<igx-grid-toolbar #toolbar>
<igx-grid-toolbar-actions>
<igx-grid-toolbar-hiding #toolbarHiding [title]="'LABELS.COLUMN_HIDDING' | translate"
[checkAllText]="'Select ALL text'" [uncheckAllText]="'UNSELECT ALL TEXT'"></igx-grid-toolbar-hiding>
</igx-grid-toolbar-actions>
</igx-grid-toolbar>
```
3. `npm start` to start the server and check if the buttons names are correct
## Result
What is the actual result after following the steps to reproduce?
button with `checkAllText` text set should trigger unselect columns
button with `uncheckAllText` text set should trigger select columns
## Expected result
What is the expected result after following the steps to reproduce?
button with `checkAllText` text set should trigger select all columns
button with `uncheckAllText` text set should trigger unselect all columns
|
priority
|
igxgridtoolbarhidingcomponent checkalltext and uncheckalltext set different action button description describe the issue i added a to the and instead of setting the button that selects all columns hidden it sets the one that cleans up the selection and vice versa for the uncheckalltext input igniteui angular version browser chrome version steps to reproduce step create a igxgrid step igx grid toolbar hiding toolbarhiding labels column hidding translate select all text unselect all text npm start to start the server and check if the buttons names are correct result what is the actual result after following the steps to reproduce button with checkalltext text set should trigger unselect columns button with uncheckalltext text set should trigger select columns expected result what is the expected result after following the steps to reproduce button with checkalltext text set should trigger select all columns button with uncheckalltext text set should trigger unselect all columns
| 1
|
778,926
| 27,333,537,957
|
IssuesEvent
|
2023-02-25 23:16:54
|
foss-lodpm/lpm
|
https://api.github.com/repos/foss-lodpm/lpm
|
closed
|
update-upgrade operations
|
enhancement high priority alpha
|
Since #7 is done, now delete & update-upgrade operations could be implemented to the lpm.
|
1.0
|
update-upgrade operations - Since #7 is done, now delete & update-upgrade operations could be implemented to the lpm.
|
priority
|
update upgrade operations since is done now delete update upgrade operations could be implemented to the lpm
| 1
|
28,236
| 2,700,636,119
|
IssuesEvent
|
2015-04-04 11:36:13
|
cs2103jan2015-f09-4j/main
|
https://api.github.com/repos/cs2103jan2015-f09-4j/main
|
closed
|
Add a non-time sensitive task(floating).
|
priority.high status.ongoing type.story.YES
|
The floating task is specified to include no deadline, possibly to cater to the user’s(U) need, aka, the inability to commit to the task as of now. It should also allow the user to modify it, specifically allowing U to add deadline(at a later date) and descriptions.
|
1.0
|
Add a non-time sensitive task(floating). - The floating task is specified to include no deadline, possibly to cater to the user’s(U) need, aka, the inability to commit to the task as of now. It should also allow the user to modify it, specifically allowing U to add deadline(at a later date) and descriptions.
|
priority
|
add a non time sensitive task floating the floating task is specified to include no deadline possibly to cater to the user’s u need aka the inability to commit to the task as of now it should also allow the user to modify it specifically allowing u to add deadline at a later date and descriptions
| 1
|
62,082
| 3,171,688,306
|
IssuesEvent
|
2015-09-23 00:19:23
|
SCIInstitute/Seg3D
|
https://api.github.com/repos/SCIInstitute/Seg3D
|
closed
|
python actions not returning new layer id
|
action bug high priority python
|
Python actions that return layer ids are returning the original target layer ID in threshold tool, not the newly created output layer ID. Probably a bug in action context.
|
1.0
|
python actions not returning new layer id - Python actions that return layer ids are returning the original target layer ID in threshold tool, not the newly created output layer ID. Probably a bug in action context.
|
priority
|
python actions not returning new layer id python actions that return layer ids are returning the original target layer id in threshold tool not the newly created output layer id probably a bug in action context
| 1
|
483,389
| 13,923,935,995
|
IssuesEvent
|
2020-10-21 14:58:50
|
AY2021S1-CS2103T-W15-4/tp
|
https://api.github.com/repos/AY2021S1-CS2103T-W15-4/tp
|
closed
|
As a zookeeper, I can sort all the animals under my care by name
|
priority.High type.Story
|
... so that I can refer to a specific animal more conveniently.
|
1.0
|
As a zookeeper, I can sort all the animals under my care by name - ... so that I can refer to a specific animal more conveniently.
|
priority
|
as a zookeeper i can sort all the animals under my care by name so that i can refer to a specific animal more conveniently
| 1
|