Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
608,013 | 18,796,008,366 | IssuesEvent | 2021-11-08 22:25:43 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | opened | 1D convolution is broken for mkldnn tensors | high priority module: nn module: convolution module: mkldnn module: correctness (silent) | ## 🐛 Bug
1D convolution is broken for mkldnn tensors and a badly-worded error is thrown for this case.
## To Reproduce
```python
import torch
input = torch.randn(2, 3, 10).to_mkldnn()
weight = torch.randn(3, 3, 3).to_mkldnn()
bias = torch.randn(3).to_mkldnn()
output = torch.nn.functional.conv1d(input, weight, bias)
```
```
RuntimeError: opaque tensors do not have strides
```
## Expected behavior
Either the correct output is returned for mkldnn tensor inputs or a proper error is thrown indicating the lack of support for 1D convolution with mkldnn tensors.
## Additional Context
The problem occurs when trying to view 1D spatial input / weight as 2D. This is done for other backends (e.g. cuDNN) that don't support 1D spatial input directly. However, the [mkldnn convolution docs](https://oneapi-src.github.io/oneDNN/v1.0/dev_guide_convolution.html) indicate that 1D spatial inputs are supported directly, so it should be an easy fix to avoid the view for the mkldnn case.
https://github.com/pytorch/pytorch/blob/a1d733ae8ca4bbad8fa22a4a532b67915114ae93/aten/src/ATen/native/Convolution.cpp#L889-L897 | 1.0 | 1D convolution is broken for mkldnn tensors - ## 🐛 Bug
1D convolution is broken for mkldnn tensors and a badly-worded error is thrown for this case.
## To Reproduce
```python
import torch
input = torch.randn(2, 3, 10).to_mkldnn()
weight = torch.randn(3, 3, 3).to_mkldnn()
bias = torch.randn(3).to_mkldnn()
output = torch.nn.functional.conv1d(input, weight, bias)
```
```
RuntimeError: opaque tensors do not have strides
```
## Expected behavior
Either the correct output is returned for mkldnn tensor inputs or a proper error is thrown indicating the lack of support for 1D convolution with mkldnn tensors.
## Additional Context
The problem occurs when trying to view 1D spatial input / weight as 2D. This is done for other backends (e.g. cuDNN) that don't support 1D spatial input directly. However, the [mkldnn convolution docs](https://oneapi-src.github.io/oneDNN/v1.0/dev_guide_convolution.html) indicate that 1D spatial inputs are supported directly, so it should be an easy fix to avoid the view for the mkldnn case.
https://github.com/pytorch/pytorch/blob/a1d733ae8ca4bbad8fa22a4a532b67915114ae93/aten/src/ATen/native/Convolution.cpp#L889-L897 | priority | convolution is broken for mkldnn tensors 🐛 bug convolution is broken for mkldnn tensors and a badly worded error is thrown for this case to reproduce python import torch input torch randn to mkldnn weight torch randn to mkldnn bias torch randn to mkldnn output torch nn functional input weight bias runtimeerror opaque tensors do not have strides expected behavior either the correct output is returned for mkldnn tensor inputs or a proper error is thrown indicating the lack of support for convolution with mkldnn tensors additional context the problem occurs when trying to view spatial input weight as this is done for other backends e g cudnn that don t support spatial input directly however the indicate that spatial inputs are supported directly so it should be an easy fix to avoid the view for the mkldnn case | 1 |
690,214 | 23,650,668,131 | IssuesEvent | 2022-08-26 06:12:33 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Add a Drafts tab for drafts in current narrow | help wanted area: compose priority: high | As [discussed on CZO](https://chat.zulip.org/#narrow/stream/101-design/topic/save.20and.20clear.20button.20design/near/1359699), it would be helpful to be able to view drafts addressed to the current narrow, especially in light of #18555.
To address this, we should make a tabbed drafts UI with the following tabs:
* **This conversation**: Drafts for the current narrow, i.e. the compose box narrow if compose box is open, or otherwise the narrow that `r` would refer to.
* **All**: Same as the drafts view we have today.
We should try being smart about which tab to show when the user opens drafts:
- Open the "All" tab and disable the "This conversation" tab when there are no drafts in the current context.
- Open the "This conversation" tab from the compose box Drafts link.
- (perhaps) Open the "All" tab from the left sidebar link, or maybe make it dependent on whether the compose box is open of closed. We'll need to experiment here.
#20971 should be implemented as a "Scheduled" tab under Drafts. We can implement the "Scheduled" tab and the "This conversation" tab in either order.
| 1.0 | Add a Drafts tab for drafts in current narrow - As [discussed on CZO](https://chat.zulip.org/#narrow/stream/101-design/topic/save.20and.20clear.20button.20design/near/1359699), it would be helpful to be able to view drafts addressed to the current narrow, especially in light of #18555.
To address this, we should make a tabbed drafts UI with the following tabs:
* **This conversation**: Drafts for the current narrow, i.e. the compose box narrow if compose box is open, or otherwise the narrow that `r` would refer to.
* **All**: Same as the drafts view we have today.
We should try being smart about which tab to show when the user opens drafts:
- Open the "All" tab and disable the "This conversation" tab when there are no drafts in the current context.
- Open the "This conversation" tab from the compose box Drafts link.
- (perhaps) Open the "All" tab from the left sidebar link, or maybe make it dependent on whether the compose box is open of closed. We'll need to experiment here.
#20971 should be implemented as a "Scheduled" tab under Drafts. We can implement the "Scheduled" tab and the "This conversation" tab in either order.
| priority | add a drafts tab for drafts in current narrow as it would be helpful to be able to view drafts addressed to the current narrow especially in light of to address this we should make a tabbed drafts ui with the following tabs this conversation drafts for the current narrow i e the compose box narrow if compose box is open or otherwise the narrow that r would refer to all same as the drafts view we have today we should try being smart about which tab to show when the user opens drafts open the all tab and disable the this conversation tab when there are no drafts in the current context open the this conversation tab from the compose box drafts link perhaps open the all tab from the left sidebar link or maybe make it dependent on whether the compose box is open of closed we ll need to experiment here should be implemented as a scheduled tab under drafts we can implement the scheduled tab and the this conversation tab in either order | 1 |
75,788 | 3,475,865,732 | IssuesEvent | 2015-12-26 06:24:58 | speedovation/kiwi | https://api.github.com/repos/speedovation/kiwi | closed | Php Heredoc and Nowdoc support | 3 - Done High Priority | Add support for
Heredoc and Nowodc with proper syntax highlighting
* HTML
* CSS
* JS
* SQL
<!---
@huboard:{"milestone_order":9.094947017729282e-13,"order":5.684341886080802e-14,"custom_state":""}
-->
| 1.0 | Php Heredoc and Nowdoc support - Add support for
Heredoc and Nowodc with proper syntax highlighting
* HTML
* CSS
* JS
* SQL
<!---
@huboard:{"milestone_order":9.094947017729282e-13,"order":5.684341886080802e-14,"custom_state":""}
-->
| priority | php heredoc and nowdoc support add support for heredoc and nowodc with proper syntax highlighting html css js sql huboard milestone order order custom state | 1 |
307,573 | 9,418,850,975 | IssuesEvent | 2019-04-10 20:19:04 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Object of type 'Eco.Gameplay.Items.AuthorizationInventory' cannot be converted to type 'Eco.Gameplay.Items.ItemStack | Fixed High Priority Push Candidate |
[log.txt](https://github.com/StrangeLoopGames/EcoIssues/files/2973960/log.txt)
Got this twice when trying to drag stuff out of a stockpile the same time as someone else,0.8.0.7
Server encountered an exception:
<size=60.00%>Exception: ArgumentException
Message:Object of type 'Eco.Gameplay.Items.AuthorizationInventory' cannot be converted to type 'Eco.Gameplay.Items.ItemStack'.
Source:mscorlib
System.ArgumentException: Object of type 'Eco.Gameplay.Items.AuthorizationInventory' cannot be converted to type 'Eco.Gameplay.Items.ItemStack'.
at System.RuntimeType.CheckValue (System.Object value, System.Reflection.Binder binder, System.Globalization.CultureInfo culture, System.Reflection.BindingFlags invokeAttr) [0x00071] in <2943701620b54f86b436d3ffad010412>:0
at System.Reflection.MonoMethod.ConvertValues (System.Reflection.Binder binder, System.Object[] args, System.Reflection.ParameterInfo[] pinfo, System.Globalization.CultureInfo culture, System.Reflection.BindingFlags invokeAttr) [0x00069] in <2943701620b54f86b436d3ffad010412>:0
at System.Reflection.MonoMethod.Invoke (System.Object obj, System.Reflection.BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00011] in <2943701620b54f86b436d3ffad010412>:0
at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in <2943701620b54f86b436d3ffad010412>:0
at Eco.Shared.Networking.RPCManager.TryInvoke (Eco.Shared.Networking.INetClient client, System.Object target, System.String methodname, Eco.Shared.Serialization.BSONObject bsonArgs, System.Object& result) [0x00074] in <e2ef92c2851e48349a7bb563b39facd2>:0
at Eco.Shared.Networking.RPCManager.InvokeOn (Eco.Shared.Networking.INetClient client, Eco.Shared.Serialization.BSONObject bson, System.Object target, System.String methodname) [0x00056] in <e2ef92c2851e48349a7bb563b39facd2>:0
at Eco.Core.Controller.ControllerManager.HandleViewRPC (Eco.Shared.Networking.INetClient client, System.Int32 controllerID, System.String methodname, Eco.Shared.Serialization.BSONObject bson) [0x00007] in <e12b9c3fd01845b4ba12cb89bfab028b>:0
at Eco.Plugins.Networking.Client.ViewRPC (Eco.Shared.Networking.INetClient client, System.Int32 id, System.String methodname, Eco.Shared.Serialization.BSONObject bson) [0x00000] in <aad317c740ad4ca1b489827c0eec9b2f>:0
at (wrapper managed-to-native) System.Reflection.MonoMethod.InternalInvoke(System.Reflection.MonoMethod,object,object[],System.Exception&)
at System.Reflection.MonoMethod.Invoke (System.Object obj, System.Reflection.BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x0003b] in <2943701620b54f86b436d3ffad010412>:0</size>
| 1.0 | Object of type 'Eco.Gameplay.Items.AuthorizationInventory' cannot be converted to type 'Eco.Gameplay.Items.ItemStack -
[log.txt](https://github.com/StrangeLoopGames/EcoIssues/files/2973960/log.txt)
Got this twice when trying to drag stuff out of a stockpile the same time as someone else,0.8.0.7
Server encountered an exception:
<size=60.00%>Exception: ArgumentException
Message:Object of type 'Eco.Gameplay.Items.AuthorizationInventory' cannot be converted to type 'Eco.Gameplay.Items.ItemStack'.
Source:mscorlib
System.ArgumentException: Object of type 'Eco.Gameplay.Items.AuthorizationInventory' cannot be converted to type 'Eco.Gameplay.Items.ItemStack'.
at System.RuntimeType.CheckValue (System.Object value, System.Reflection.Binder binder, System.Globalization.CultureInfo culture, System.Reflection.BindingFlags invokeAttr) [0x00071] in <2943701620b54f86b436d3ffad010412>:0
at System.Reflection.MonoMethod.ConvertValues (System.Reflection.Binder binder, System.Object[] args, System.Reflection.ParameterInfo[] pinfo, System.Globalization.CultureInfo culture, System.Reflection.BindingFlags invokeAttr) [0x00069] in <2943701620b54f86b436d3ffad010412>:0
at System.Reflection.MonoMethod.Invoke (System.Object obj, System.Reflection.BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00011] in <2943701620b54f86b436d3ffad010412>:0
at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in <2943701620b54f86b436d3ffad010412>:0
at Eco.Shared.Networking.RPCManager.TryInvoke (Eco.Shared.Networking.INetClient client, System.Object target, System.String methodname, Eco.Shared.Serialization.BSONObject bsonArgs, System.Object& result) [0x00074] in <e2ef92c2851e48349a7bb563b39facd2>:0
at Eco.Shared.Networking.RPCManager.InvokeOn (Eco.Shared.Networking.INetClient client, Eco.Shared.Serialization.BSONObject bson, System.Object target, System.String methodname) [0x00056] in <e2ef92c2851e48349a7bb563b39facd2>:0
at Eco.Core.Controller.ControllerManager.HandleViewRPC (Eco.Shared.Networking.INetClient client, System.Int32 controllerID, System.String methodname, Eco.Shared.Serialization.BSONObject bson) [0x00007] in <e12b9c3fd01845b4ba12cb89bfab028b>:0
at Eco.Plugins.Networking.Client.ViewRPC (Eco.Shared.Networking.INetClient client, System.Int32 id, System.String methodname, Eco.Shared.Serialization.BSONObject bson) [0x00000] in <aad317c740ad4ca1b489827c0eec9b2f>:0
at (wrapper managed-to-native) System.Reflection.MonoMethod.InternalInvoke(System.Reflection.MonoMethod,object,object[],System.Exception&)
at System.Reflection.MonoMethod.Invoke (System.Object obj, System.Reflection.BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x0003b] in <2943701620b54f86b436d3ffad010412>:0</size>
| priority | object of type eco gameplay items authorizationinventory cannot be converted to type eco gameplay items itemstack got this twice when trying to drag stuff out of a stockpile the same time as someone else server encountered an exception exception argumentexception message object of type eco gameplay items authorizationinventory cannot be converted to type eco gameplay items itemstack source mscorlib system argumentexception object of type eco gameplay items authorizationinventory cannot be converted to type eco gameplay items itemstack at system runtimetype checkvalue system object value system reflection binder binder system globalization cultureinfo culture system reflection bindingflags invokeattr in at system reflection monomethod convertvalues system reflection binder binder system object args system reflection parameterinfo pinfo system globalization cultureinfo culture system reflection bindingflags invokeattr in at system reflection monomethod invoke system object obj system reflection bindingflags invokeattr system reflection binder binder system object parameters system globalization cultureinfo culture in at system reflection methodbase invoke system object obj system object parameters in at eco shared networking rpcmanager tryinvoke eco shared networking inetclient client system object target system string methodname eco shared serialization bsonobject bsonargs system object result in at eco shared networking rpcmanager invokeon eco shared networking inetclient client eco shared serialization bsonobject bson system object target system string methodname in at eco core controller controllermanager handleviewrpc eco shared networking inetclient client system controllerid system string methodname eco shared serialization bsonobject bson in at eco plugins networking client viewrpc eco shared networking inetclient client system id system string methodname eco shared serialization bsonobject bson in at wrapper managed to native system reflection monomethod internalinvoke system reflection monomethod object object system exception at system reflection monomethod invoke system object obj system reflection bindingflags invokeattr system reflection binder binder system object parameters system globalization cultureinfo culture in | 1 |
550,913 | 16,134,488,209 | IssuesEvent | 2021-04-29 09:58:10 | sopra-fs21-group-26/client | https://api.github.com/repos/sopra-fs21-group-26/client | closed | Implement Leave Lobby | high priority task | <h2>Sub-Tasks:</h2>
- [x] CSS
- [x] Leave Button
~~- [ ] Admin can leave~~
- [x] Player can leave
<h2>Estimate: 2h</h2> | 1.0 | Implement Leave Lobby - <h2>Sub-Tasks:</h2>
- [x] CSS
- [x] Leave Button
~~- [ ] Admin can leave~~
- [x] Player can leave
<h2>Estimate: 2h</h2> | priority | implement leave lobby sub tasks css leave button admin can leave player can leave estimate | 1 |
297,308 | 9,166,899,843 | IssuesEvent | 2019-03-02 08:12:24 | Luca1152/gravity-box | https://api.github.com/repos/Luca1152/gravity-box | closed | Save/load functionality | Priority: High Status: In Progress Type: Enhancement | ## Description
The player should be able to save the maps he creates in the level editor, but also load them back.
## Tasks
- [x] Define how a maps file would look like
- [x] Add save to JSON functionality
- [x] Fix the id of map objects to be unique, and not 0
- [x] Add load from JSON functionality | 1.0 | Save/load functionality - ## Description
The player should be able to save the maps he creates in the level editor, but also load them back.
## Tasks
- [x] Define how a maps file would look like
- [x] Add save to JSON functionality
- [x] Fix the id of map objects to be unique, and not 0
- [x] Add load from JSON functionality | priority | save load functionality description the player should be able to save the maps he creates in the level editor but also load them back tasks define how a maps file would look like add save to json functionality fix the id of map objects to be unique and not add load from json functionality | 1 |
625,683 | 19,760,767,026 | IssuesEvent | 2022-01-16 11:27:24 | MattTheLegoman/RealmsInExile | https://api.github.com/repos/MattTheLegoman/RealmsInExile | closed | Finalise terrain and heightmap painting | priority: high mapping | Finalise terrain painting, particularly:
- Oases and variety in the Dune Sea
- Adding source hills for Harnen tributary streams
- Fixing terrain clipping in shallow seas (particularly near Tulwang)
- Water colour map (particularly visible line in south and river colouring) | 1.0 | Finalise terrain and heightmap painting - Finalise terrain painting, particularly:
- Oases and variety in the Dune Sea
- Adding source hills for Harnen tributary streams
- Fixing terrain clipping in shallow seas (particularly near Tulwang)
- Water colour map (particularly visible line in south and river colouring) | priority | finalise terrain and heightmap painting finalise terrain painting particularly oases and variety in the dune sea adding source hills for harnen tributary streams fixing terrain clipping in shallow seas particularly near tulwang water colour map particularly visible line in south and river colouring | 1 |
399,137 | 11,743,465,076 | IssuesEvent | 2020-03-12 04:37:03 | AY1920S2-CS2103T-F10-2/main | https://api.github.com/repos/AY1920S2-CS2103T-F10-2/main | opened | As a user I want to tag each application with a status | priority.High type.Story | ... so that I can track my internship application phase | 1.0 | As a user I want to tag each application with a status - ... so that I can track my internship application phase | priority | as a user i want to tag each application with a status so that i can track my internship application phase | 1 |
142,843 | 5,477,929,740 | IssuesEvent | 2017-03-12 13:36:55 | CS2103JAN2017-T15-B1/main | https://api.github.com/repos/CS2103JAN2017-T15-B1/main | closed | Update PersonCard so that it shows task fields and not person fields | priority.high type.task | To fulfill #11, #14
Update, including its name, so that it no longer lists
- address, phone and email
and instread lists
- (optional) deadline
- priority
- description | 1.0 | Update PersonCard so that it shows task fields and not person fields - To fulfill #11, #14
Update, including its name, so that it no longer lists
- address, phone and email
and instread lists
- (optional) deadline
- priority
- description | priority | update personcard so that it shows task fields and not person fields to fulfill update including its name so that it no longer lists address phone and email and instread lists optional deadline priority description | 1 |
51,977 | 3,016,274,387 | IssuesEvent | 2015-07-30 00:56:14 | pombase/pombase-chado | https://api.github.com/repos/pombase/pombase-chado | closed | Store expression correctly | high priority | The expression of an allele is now separate from the rest of the allele data in the Canto JSON export flie. It should now be stored as a `feature_relationshipprop` on the relationship between the genotypes and the alleles, not as a `featureprop` of the allele. | 1.0 | Store expression correctly - The expression of an allele is now separate from the rest of the allele data in the Canto JSON export flie. It should now be stored as a `feature_relationshipprop` on the relationship between the genotypes and the alleles, not as a `featureprop` of the allele. | priority | store expression correctly the expression of an allele is now separate from the rest of the allele data in the canto json export flie it should now be stored as a feature relationshipprop on the relationship between the genotypes and the alleles not as a featureprop of the allele | 1 |
358,975 | 10,652,345,979 | IssuesEvent | 2019-10-17 12:26:03 | AY1920S1-CS2103T-F11-3/main | https://api.github.com/repos/AY1920S1-CS2103T-F11-3/main | closed | Add feature for file encryption and decryption | priority.High type.Epic | Add commands to support file encryption and decryption, as well as the data model and logic necessary to keep track of encrypted files. | 1.0 | Add feature for file encryption and decryption - Add commands to support file encryption and decryption, as well as the data model and logic necessary to keep track of encrypted files. | priority | add feature for file encryption and decryption add commands to support file encryption and decryption as well as the data model and logic necessary to keep track of encrypted files | 1 |
747,890 | 26,101,938,486 | IssuesEvent | 2022-12-27 08:18:46 | bounswe/bounswe2022group4 | https://api.github.com/repos/bounswe/bounswe2022group4 | closed | Mobile: User and Post Search | Category - To Do Category - Enhancement Priority - High Status: In Progress Difficulty - Medium Language - Kotlin Mobile | ### Description:
users should be able to search users and posts.
### What to do:
- [ ] Search button on home fragment should open a new fragment search fragment
- [ ] After an input that long than 3 char should call post and user search and display them
### Deadline
12.26.2022, 22.00(GMT+3)
| 1.0 | Mobile: User and Post Search - ### Description:
users should be able to search users and posts.
### What to do:
- [ ] Search button on home fragment should open a new fragment search fragment
- [ ] After an input that long than 3 char should call post and user search and display them
### Deadline
12.26.2022, 22.00(GMT+3)
| priority | mobile user and post search description users should be able to search users and posts what to do search button on home fragment should open a new fragment search fragment after an input that long than char should call post and user search and display them deadline gmt | 1 |
709,739 | 24,388,874,730 | IssuesEvent | 2022-10-04 13:51:58 | kubeshop/testkube | https://api.github.com/repos/kubeshop/testkube | closed | Throttling homebrew-core PR submissions | bug 🐛 high-priority | Hi 👋 , just to raise an issue here to notify some spam ban on the homebrew-core side, it would be nice that you folks can throttle the PR submissions. Thanks!
relates to
- https://github.com/Homebrew/homebrew-core/pull/111524
- https://github.com/Homebrew/homebrew-core/pull/107721
| 1.0 | Throttling homebrew-core PR submissions - Hi 👋 , just to raise an issue here to notify some spam ban on the homebrew-core side, it would be nice that you folks can throttle the PR submissions. Thanks!
relates to
- https://github.com/Homebrew/homebrew-core/pull/111524
- https://github.com/Homebrew/homebrew-core/pull/107721
| priority | throttling homebrew core pr submissions hi 👋 just to raise an issue here to notify some spam ban on the homebrew core side it would be nice that you folks can throttle the pr submissions thanks relates to | 1 |
192,959 | 6,877,593,277 | IssuesEvent | 2017-11-20 08:42:59 | OpenNebula/one | https://api.github.com/repos/OpenNebula/one | opened | TLS LDAP does not support STARTTLS | Category: Drivers - Auth Priority: High Status: Pending Tracker: Backlog | ---
Author Name: **EOLE Team** (EOLE Team)
Original Redmine Issue: 3482, https://dev.opennebula.org/issues/3482
Original Date: 2015-01-03
---
Hello,
Using ONE 4.8 and 4.10, I try to switch the LDAP authentication to TLS and found that only LDAP over SSL (by default on port *@636@*) is working.
First, a note should be added to the documentation about this issue.
I propose to modify the *@encryption@* in "configuration file":http://docs.opennebula.org/4.10/administration/authentication/ldap.html#configuration with the following possibilities:
* *@:null@* to disable encryption (by default)
* *@:simple_tls@* to use LDAP over SSL
* *@:starttls@* to use the "STARTTLS":https://en.wikipedia.org/wiki/STARTTLS
With the following configuration example:
```
server 1:
[...]
# Ldap server
:host: localhost
# No encryption by default on standart port
# Uncomment this line to use STARTTLS
#:encryption: :starttls
:port: 389
# Uncomment this lines to use LDAP over SSL on ldaps port
#:encryption: :simple_tls
#:port: 636
[...]
```
Thanks.
| 1.0 | TLS LDAP does not support STARTTLS - ---
Author Name: **EOLE Team** (EOLE Team)
Original Redmine Issue: 3482, https://dev.opennebula.org/issues/3482
Original Date: 2015-01-03
---
Hello,
Using ONE 4.8 and 4.10, I try to switch the LDAP authentication to TLS and found that only LDAP over SSL (by default on port *@636@*) is working.
First, a note should be added to the documentation about this issue.
I propose to modify the *@encryption@* in "configuration file":http://docs.opennebula.org/4.10/administration/authentication/ldap.html#configuration with the following possibilities:
* *@:null@* to disable encryption (by default)
* *@:simple_tls@* to use LDAP over SSL
* *@:starttls@* to use the "STARTTLS":https://en.wikipedia.org/wiki/STARTTLS
With the following configuration example:
```
server 1:
[...]
# Ldap server
:host: localhost
# No encryption by default on standart port
# Uncomment this line to use STARTTLS
#:encryption: :starttls
:port: 389
# Uncomment this lines to use LDAP over SSL on ldaps port
#:encryption: :simple_tls
#:port: 636
[...]
```
Thanks.
| priority | tls ldap does not support starttls author name eole team eole team original redmine issue original date hello using one and i try to switch the ldap authentication to tls and found that only ldap over ssl by default on port is working first a note should be added to the documentation about this issue i propose to modify the encryption in configuration file with the following possibilities null to disable encryption by default simple tls to use ldap over ssl starttls to use the starttls with the following configuration example server ldap server host localhost no encryption by default on standart port uncomment this line to use starttls encryption starttls port uncomment this lines to use ldap over ssl on ldaps port encryption simple tls port thanks | 1 |
578,988 | 17,169,628,902 | IssuesEvent | 2021-07-15 01:05:45 | parallel-finance/parallel | https://api.github.com/repos/parallel-finance/parallel | closed | check on-chain staking's missing parts | high priority | - [x] amount to stake, unstake
- [x] Exchange Rate
- [x] leverage staking
- [x] Staking APY
- [x] Total Stakers
- [x] xKSM Market Cap
- [x] Bonded Ratio (not sure to understand)
- [x] Staking Fee
- [x] Pending Unstake
- [x] Validator Sets
- [x] Delivery date
<img width="671" alt="image" src="https://user-images.githubusercontent.com/33961674/122855363-b94e7e80-d347-11eb-92e5-8166f747f54c.png"> | 1.0 | check on-chain staking's missing parts - - [x] amount to stake, unstake
- [x] Exchange Rate
- [x] leverage staking
- [x] Staking APY
- [x] Total Stakers
- [x] xKSM Market Cap
- [x] Bonded Ratio (not sure to understand)
- [x] Staking Fee
- [x] Pending Unstake
- [x] Validator Sets
- [x] Delivery date
<img width="671" alt="image" src="https://user-images.githubusercontent.com/33961674/122855363-b94e7e80-d347-11eb-92e5-8166f747f54c.png"> | priority | check on chain staking s missing parts amount to stake unstake exchange rate leverage staking staking apy total stakers xksm market cap bonded ratio not sure to understand staking fee pending unstake validator sets delivery date img width alt image src | 1 |
585,296 | 17,484,440,828 | IssuesEvent | 2021-08-09 09:07:42 | faktaoklimatu/web-core | https://api.github.com/repos/faktaoklimatu/web-core | opened | Remove cache workaround | bug 3: high priority | Currently, production cache is dropped to 10 minutes (0ddb93d) due to publishing AR6 infographic updates.
Discuss if such situations are going to be more frequent. If so, consider adjusting image loading. | 1.0 | Remove cache workaround - Currently, production cache is dropped to 10 minutes (0ddb93d) due to publishing AR6 infographic updates.
Discuss if such situations are going to be more frequent. If so, consider adjusting image loading. | priority | remove cache workaround currently production cache is dropped to minutes due to publishing infographic updates discuss if such situations are going to be more frequent if so consider adjusting image loading | 1 |
708,594 | 24,347,340,747 | IssuesEvent | 2022-10-02 13:49:34 | AY2223S1-CS2103T-T13-2/tp | https://api.github.com/repos/AY2223S1-CS2103T-T13-2/tp | closed | Delete client information | type.Story priority.High | ### User story
As a financial advisor, I want to be able to delete client information, so that I do not have unwanted clients stored.
### Acceptance criteria
- [ ] Update command syntax
| 1.0 | Delete client information - ### User story
As a financial advisor, I want to be able to delete client information, so that I do not have unwanted clients stored.
### Acceptance criteria
- [ ] Update command syntax
| priority | delete client information user story as a financial advisor i want to be able to delete client information so that i do not have unwanted clients stored acceptance criteria update command syntax | 1 |
190,162 | 6,810,476,796 | IssuesEvent | 2017-11-05 06:10:31 | localstack/localstack | https://api.github.com/repos/localstack/localstack | closed | S3api put-bucket-notification-configuration does not add filters from aws CLI | bug feature-missing priority-high | localstack does not add filters while configuring S3 bucket Events for QueueConfiguration
command which i am using:
```
aws --endpoint-url http://192.168.99.100:9072 s3api put-bucket-notification-configuration --bucket 800a2c5d-9b64-440a-9e70-836e673b8fa6 --notification-configuration file://C:/Persnl/LocalStack/notification.json
notification.json:
{
"QueueConfigurations": [
{
"Id": "1",
"QueueArn": "arn:aws:sqs:us-east-1:123456789012:gehc-cds-local-test",
"Events": ["s3:ObjectCreated:*"],
"Filter": {
"Key": {
"FilterRules": [
{
"Name": "prefix",
"Value": "upload/"
},
{
"Name": "suffix",
"Value": "upload-manifest.cos"
}
]
}
}
}
]
}
```
but when i am running command to see applied configuration:
```
aws --endpoint-url http://192.168.99.100:9072 s3api get-bucket-notification-configuration --bucket 800a2c5d-9b64-440a-9e70-836e673b8fa6
```
i am getting output without filter:
```
{
"QueueConfigurations": [
{
"Id": "6cfb6e5f-667e-451c-8a10-4d5589ab5d25",
"QueueArn": "arn:aws:sqs:us-east-1:123456789012:gehc-cds-local-test",
"Events": [
"s3:ObjectCreated:*"
]
}
]
}
``` | 1.0 | S3api put-bucket-notification-configuration does not add filters from aws CLI - localstack does not add filters while configuring S3 bucket Events for QueueConfiguration
command which i am using:
```
aws --endpoint-url http://192.168.99.100:9072 s3api put-bucket-notification-configuration --bucket 800a2c5d-9b64-440a-9e70-836e673b8fa6 --notification-configuration file://C:/Persnl/LocalStack/notification.json
notification.json:
{
"QueueConfigurations": [
{
"Id": "1",
"QueueArn": "arn:aws:sqs:us-east-1:123456789012:gehc-cds-local-test",
"Events": ["s3:ObjectCreated:*"],
"Filter": {
"Key": {
"FilterRules": [
{
"Name": "prefix",
"Value": "upload/"
},
{
"Name": "suffix",
"Value": "upload-manifest.cos"
}
]
}
}
}
]
}
```
but when i am running command to see applied configuration:
```
aws --endpoint-url http://192.168.99.100:9072 s3api get-bucket-notification-configuration --bucket 800a2c5d-9b64-440a-9e70-836e673b8fa6
```
i am getting output without filter:
```
{
"QueueConfigurations": [
{
"Id": "6cfb6e5f-667e-451c-8a10-4d5589ab5d25",
"QueueArn": "arn:aws:sqs:us-east-1:123456789012:gehc-cds-local-test",
"Events": [
"s3:ObjectCreated:*"
]
}
]
}
``` | priority | put bucket notification configuration does not add filters from aws cli localstack does not add filters while configuring bucket events for queueconfiguration command which i am using aws endpoint url put bucket notification configuration bucket notification configuration file c persnl localstack notification json notification json queueconfigurations id queuearn arn aws sqs us east gehc cds local test events filter key filterrules name prefix value upload name suffix value upload manifest cos but when i am running command to see applied configuration aws endpoint url get bucket notification configuration bucket i am getting output without filter queueconfigurations id queuearn arn aws sqs us east gehc cds local test events objectcreated | 1 |
341,601 | 10,299,280,383 | IssuesEvent | 2019-08-28 13:11:43 | UniversityOfHelsinkiCS/fuksilaiterekisteri | https://api.github.com/repos/UniversityOfHelsinkiCS/fuksilaiterekisteri | closed | make sure checker scripts don't blow up the system during oodi maintenance | enhancement high priority | test to see if checker scripts blow up when oodi is down
next downtime is (presumably) on 1st of Sep | 1.0 | make sure checker scripts don't blow up the system during oodi maintenance - test to see if checker scripts blow up when oodi is down
next downtime is (presumably) on 1st of Sep | priority | make sure checker scripts don t blow up the system during oodi maintenance test to see if checker scripts blow up when oodi is down next downtime is presumably on of sep | 1 |
170,128 | 6,424,714,187 | IssuesEvent | 2017-08-09 14:05:13 | mreishman/Log-Hog | https://api.github.com/repos/mreishman/Log-Hog | closed | Force refresh all every x poll requests | enhancement Priority - 1 - Very High | - [x] Option to force refresh all files every x poll requests
- [x] Option to force refresh also by unlocking poll request if hasn't changed in x poll requests
(Default every 120) - 1 min for 500ms, 2 min for 1000ms | 1.0 | Force refresh all every x poll requests - - [x] Option to force refresh all files every x poll requests
- [x] Option to force refresh also by unlocking poll request if hasn't changed in x poll requests
(Default every 120) - 1 min for 500ms, 2 min for 1000ms | priority | force refresh all every x poll requests option to force refresh all files every x poll requests option to force refresh also by unlocking poll request if hasn t changed in x poll requests default every min for min for | 1 |
528,902 | 15,376,774,664 | IssuesEvent | 2021-03-02 16:20:34 | Systems-Learning-and-Development-Lab/MMM | https://api.github.com/repos/Systems-Learning-and-Development-Lab/MMM | closed | Button קבוצה | priority-high | Until the bug with group name will be solved, remove button

Comment: as far as I understand we don't need it now, but if we need so leave it and explain why.
@Ron-Teller
| 1.0 | Button קבוצה - Until the bug with group name will be solved, remove button

Comment: as far as I understand we don't need it now, but if we need so leave it and explain why.
@Ron-Teller
| priority | button קבוצה until the bug with group name will be solved remove button comment as far as i understand we don t need it now but if we need so leave it and explain why ron teller | 1 |
469,102 | 13,501,535,564 | IssuesEvent | 2020-09-13 03:05:41 | wise-old-man/wise-old-man | https://api.github.com/repos/wise-old-man/wise-old-man | closed | Duration undefined in competition created discord event | bug priority-high | This is because this event, unlike all the others, doesn't get the full competition info from `getDetails`, which calculates the duration. | 1.0 | Duration undefined in competition created discord event - This is because this event, unlike all the others, doesn't get the full competition info from `getDetails`, which calculates the duration. | priority | duration undefined in competition created discord event this is because this event unlike all the others doesn t get the full competition info from getdetails which calculates the duration | 1 |
392,322 | 11,590,135,120 | IssuesEvent | 2020-02-24 05:32:01 | ncssar/sign-in | https://api.github.com/repos/ncssar/sign-in | closed | inactivity timers | Priority: High enhancement forNextMeeting | - from lookup screen, return to keypad screen after n seconds of inactivity
- from sign-in / sign-out screen, start flashing to get user's attention after very short inactivity, then return to keypad screen after a bit more inactivity (if signing out, it's probably acceptable to leave them as dnso (did not sign out) since dnso is a lesser problem than accidentally signing out someone else who is actually still here or in the field; but, how should this be handled for signing in?) | 1.0 | inactivity timers - - from lookup screen, return to keypad screen after n seconds of inactivity
- from sign-in / sign-out screen, start flashing to get user's attention after very short inactivity, then return to keypad screen after a bit more inactivity (if signing out, it's probably acceptable to leave them as dnso (did not sign out) since dnso is a lesser problem than accidentally signing out someone else who is actually still here or in the field; but, how should this be handled for signing in?) | priority | inactivity timers from lookup screen return to keypad screen after n seconds of inactivity from sign in sign out screen start flashing to get user s attention after very short inactivity then return to keypad screen after a bit more inactivity if signing out it s probably acceptable to leave them as dnso did not sign out since dnso is a lesser problem than accidentally signing out someone else who is actually still here or in the field but how should this be handled for signing in | 1 |
300,650 | 9,211,575,562 | IssuesEvent | 2019-03-09 16:32:11 | qgisissuebot/QGIS | https://api.github.com/repos/qgisissuebot/QGIS | closed | QGIS Crashed Windows when close qis | Bug Priority: high | ---
Author Name: **Martín Fernando Ortiz** (Martín Fernando Ortiz)
Original Redmine Issue: 20970, https://issues.qgis.org/issues/20970
Original Date: 2019-01-11T10:36:47.354Z
Affected QGIS version: 3.4.2
---
## User Feedback
It didnt crash... but when i closed qgis always appear the QGIS Crashed Windows
## Report Details
*Crash ID*: a82e7e07e3c0e2dbce4f2adf5452fe54341e44a9
*Stack Trace*
```
proj_lpz_dist :
proj_lpz_dist :
QgsCoordinateTransform::transformPolygon :
QgsCoordinateTransform::transformPolygon :
QgsCoordinateTransform::~QgsCoordinateTransform :
QgsFirstRunDialog::tr :
QObjectPrivate::deleteChildren :
QWidget::~QWidget :
CPLStringList::operator[] :
main :
BaseThreadInitThunk :
RtlUserThreadStart :
```
*QGIS Info*
QGIS Version: 3.4.2-Madeira
QGIS code revision: 22034aa070
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL: 2.3.2
Running against GDAL: 2.3.2
*System Info*
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 10.0.17134
| 1.0 | QGIS Crashed Windows when close qis - ---
Author Name: **Martín Fernando Ortiz** (Martín Fernando Ortiz)
Original Redmine Issue: 20970, https://issues.qgis.org/issues/20970
Original Date: 2019-01-11T10:36:47.354Z
Affected QGIS version: 3.4.2
---
## User Feedback
It didnt crash... but when i closed qgis always appear the QGIS Crashed Windows
## Report Details
*Crash ID*: a82e7e07e3c0e2dbce4f2adf5452fe54341e44a9
*Stack Trace*
```
proj_lpz_dist :
proj_lpz_dist :
QgsCoordinateTransform::transformPolygon :
QgsCoordinateTransform::transformPolygon :
QgsCoordinateTransform::~QgsCoordinateTransform :
QgsFirstRunDialog::tr :
QObjectPrivate::deleteChildren :
QWidget::~QWidget :
CPLStringList::operator[] :
main :
BaseThreadInitThunk :
RtlUserThreadStart :
```
*QGIS Info*
QGIS Version: 3.4.2-Madeira
QGIS code revision: 22034aa070
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL: 2.3.2
Running against GDAL: 2.3.2
*System Info*
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 10.0.17134
| priority | qgis crashed windows when close qis author name martín fernando ortiz martín fernando ortiz original redmine issue original date affected qgis version user feedback it didnt crash but when i closed qgis always appear the qgis crashed windows report details crash id stack trace proj lpz dist proj lpz dist qgscoordinatetransform transformpolygon qgscoordinatetransform transformpolygon qgscoordinatetransform qgscoordinatetransform qgsfirstrundialog tr qobjectprivate deletechildren qwidget qwidget cplstringlist operator main basethreadinitthunk rtluserthreadstart qgis info qgis version madeira qgis code revision compiled against qt running against qt compiled against gdal running against gdal system info cpu type kernel type winnt kernel version | 1 |
669,582 | 22,632,444,302 | IssuesEvent | 2022-06-30 15:41:53 | netdata/netdata | https://api.github.com/repos/netdata/netdata | closed | [Bug]: Lot of timeouts waiting for agent reply, part of #incident-41 | bug priority/high area/ACLK | ### Bug description
an unusually high amount of timeouts on queries to the agent from the cloud
this also causes initial sync to be way too slow
additional issues found during incident-41 have to be investigated/fixed separately
### Expected behavior
no timeouts if the network connection is OK
### Steps to reproduce
1. claim new agent with mqtt5 enabled
2.
3.
...
### Installation method
other
### System info
```shell
Linux
```
### Netdata build info
```shell
mqtt5 enabled
```
### Additional info
_No response_ | 1.0 | [Bug]: Lot of timeouts waiting for agent reply, part of #incident-41 - ### Bug description
an unusually high amount of timeouts on queries to the agent from the cloud
this also causes initial sync to be way too slow
additional issues found during incident-41 have to be investigated/fixed separately
### Expected behavior
no timeouts if the network connection is OK
### Steps to reproduce
1. claim new agent with mqtt5 enabled
2.
3.
...
### Installation method
other
### System info
```shell
Linux
```
### Netdata build info
```shell
mqtt5 enabled
```
### Additional info
_No response_ | priority | lot of timeouts waiting for agent reply part of incident bug description an unusually high amount of timeouts on queries to the agent from the cloud this also causes initial sync to be way too slow additional issues found during incident have to be investigated fixed separately expected behavior no timeouts if the network connection is ok steps to reproduce claim new agent with enabled installation method other system info shell linux netdata build info shell enabled additional info no response | 1 |
444,587 | 12,814,753,167 | IssuesEvent | 2020-07-04 20:49:19 | ctm/mb2-doc | https://api.github.com/repos/ctm/mb2-doc | opened | MeMyself still has table height issues | can't reproduce chore high priority | > memyself: Cliff, is fixing the height of the game window still on your list? It still opens with the bell and timer buttons below the Windows 10 taskbar.
> memyself: by below, I mean underneath. I have to resize the window to see everything.
> deadhead: That one fell off my radar. Does the lobby show up underneath?
> deadhead: I did make an adjustment a while back, do you remember things changing at all?
> memyself: The lobby changed size, but the game window didn't.
commit ade14a1f9 subtracts 30 from avail_height, but that either isn't happening or wasn't enough.
I'll fiddle around locally and try changing the 30 to 100. Hopefully that won't hose anyone.
| 1.0 | MeMyself still has table height issues - > memyself: Cliff, is fixing the height of the game window still on your list? It still opens with the bell and timer buttons below the Windows 10 taskbar.
> memyself: by below, I mean underneath. I have to resize the window to see everything.
> deadhead: That one fell off my radar. Does the lobby show up underneath?
> deadhead: I did make an adjustment a while back, do you remember things changing at all?
> memyself: The lobby changed size, but the game window didn't.
commit ade14a1f9 subtracts 30 from avail_height, but that either isn't happening or wasn't enough.
I'll fiddle around locally and try changing the 30 to 100. Hopefully that won't hose anyone.
| priority | memyself still has table height issues memyself cliff is fixing the height of the game window still on your list it still opens with the bell and timer buttons below the windows taskbar memyself by below i mean underneath i have to resize the window to see everything deadhead that one fell off my radar does the lobby show up underneath deadhead i did make an adjustment a while back do you remember things changing at all memyself the lobby changed size but the game window didn t commit subtracts from avail height but that either isn t happening or wasn t enough i ll fiddle around locally and try changing the to hopefully that won t hose anyone | 1 |
373,291 | 11,038,546,033 | IssuesEvent | 2019-12-08 14:44:23 | jonfroehlich/makeabilitylabwebsite | https://api.github.com/repos/jonfroehlich/makeabilitylabwebsite | closed | Our auto-capitalization function is a bit too aggressive | Priority: High bug | Some paper titles need capitalization like 'BodyVis' or 'SIG' but this is mangled with new auto-capitalization functionality.


| 1.0 | Our auto-capitalization function is a bit too aggressive - Some paper titles need capitalization like 'BodyVis' or 'SIG' but this is mangled with new auto-capitalization functionality.


| priority | our auto capitalization function is a bit too aggressive some paper titles need capitalization like bodyvis or sig but this is mangled with new auto capitalization functionality | 1 |
155,643 | 5,958,505,515 | IssuesEvent | 2017-05-29 08:09:20 | GeekyAnts/NativeBase | https://api.github.com/repos/GeekyAnts/NativeBase | closed | Flatlist instead of ListView in List Component | enhancement high priority | ## Version Info
* react-native:0.42.0
* react: 15.4.2
* native-base: ^2.0.11
## Expected behaviour: Performance Improvement
## Actual behaviour: Slow loading
## Issue Type:
- [x] Feature
- [ ] Bug
## Performance Issue:
Android
iOS.
If You could do this it would make the rendering of the lists faster and would also help to handle large or bad data in the list's dataSource | 1.0 | Flatlist instead of ListView in List Component - ## Version Info
* react-native:0.42.0
* react: 15.4.2
* native-base: ^2.0.11
## Expected behaviour: Performance Improvement
## Actual behaviour: Slow loading
## Issue Type:
- [x] Feature
- [ ] Bug
## Performance Issue:
Android
iOS.
If You could do this it would make the rendering of the lists faster and would also help to handle large or bad data in the list's dataSource | priority | flatlist instead of listview in list component version info react native react native base expected behaviour performance improvement actual behaviour slow loading issue type feature bug performance issue android ios if you could do this it would make the rendering of the lists faster and would also help to handle large or bad data in the list s datasource | 1 |
715,338 | 24,594,850,343 | IssuesEvent | 2022-10-14 07:24:14 | AY2223S1-CS2103T-W16-3/tp | https://api.github.com/repos/AY2223S1-CS2103T-W16-3/tp | closed | As a hospital staff I can assign diagnoses to patients in their appointments | type.Story priority.High | ...so that I can store the diagnosis results and prescribed medication for each appointment separately. | 1.0 | As a hospital staff I can assign diagnoses to patients in their appointments - ...so that I can store the diagnosis results and prescribed medication for each appointment separately. | priority | as a hospital staff i can assign diagnoses to patients in their appointments so that i can store the diagnosis results and prescribed medication for each appointment separately | 1 |
119,082 | 4,760,870,171 | IssuesEvent | 2016-10-25 05:37:19 | csasf/members | https://api.github.com/repos/csasf/members | opened | Mailgun: Hook up password reset emails | high-priority | Devise (our authentication solution) has the ability to send out password reset emails. We should use this as a first test of mailgun at a broad scale. | 1.0 | Mailgun: Hook up password reset emails - Devise (our authentication solution) has the ability to send out password reset emails. We should use this as a first test of mailgun at a broad scale. | priority | mailgun hook up password reset emails devise our authentication solution has the ability to send out password reset emails we should use this as a first test of mailgun at a broad scale | 1 |
100,501 | 4,097,482,830 | IssuesEvent | 2016-06-03 01:53:49 | dhowe/AdNauseam | https://api.github.com/repos/dhowe/AdNauseam | closed | Cross-platform handing of private-mode | Needs-verification PRIORITY: High | How to detect whether a window is in private-mode (incognito)?
CHROME
http://developer.chrome.com/dev/extensions/extension.html#property-inIncognitoContext
FF(via JSM)
https://developer.mozilla.org/EN/docs/Supporting_per-window_private_browsing
FF(via SDK)
` if (require("sdk/private-browsing").isPrivate(wnd)) { }` | 1.0 | Cross-platform handing of private-mode - How to detect whether a window is in private-mode (incognito)?
CHROME
http://developer.chrome.com/dev/extensions/extension.html#property-inIncognitoContext
FF(via JSM)
https://developer.mozilla.org/EN/docs/Supporting_per-window_private_browsing
FF(via SDK)
` if (require("sdk/private-browsing").isPrivate(wnd)) { }` | priority | cross platform handing of private mode how to detect whether a window is in private mode incognito chrome ff via jsm ff via sdk if require sdk private browsing isprivate wnd | 1 |
377,084 | 11,163,387,688 | IssuesEvent | 2019-12-26 22:19:52 | phillipmonroe/Tabletop-Master | https://api.github.com/repos/phillipmonroe/Tabletop-Master | opened | Choose Campaign | priority-high story | As a user I would like to be able to choose which campaign I would like so that I can access the proper resources
**Acceptance Criteria:**
- [ ] Given a user is logged in to a valid account, when the user first logs in then they will be greeted with a page to select which campaign they would like to use.
- [ ] Given a user is logged in to a valid account, when the user selects on a campaign, then they will be directed to that campaign and it's resources. | 1.0 | Choose Campaign - As a user I would like to be able to choose which campaign I would like so that I can access the proper resources
**Acceptance Criteria:**
- [ ] Given a user is logged in to a valid account, when the user first logs in then they will be greeted with a page to select which campaign they would like to use.
- [ ] Given a user is logged in to a valid account, when the user selects on a campaign, then they will be directed to that campaign and it's resources. | priority | choose campaign as a user i would like to be able to choose which campaign i would like so that i can access the proper resources acceptance criteria given a user is logged in to a valid account when the user first logs in then they will be greeted with a page to select which campaign they would like to use given a user is logged in to a valid account when the user selects on a campaign then they will be directed to that campaign and it s resources | 1 |
632,152 | 20,175,133,716 | IssuesEvent | 2022-02-10 13:55:41 | reconness/reconness-frontend | https://api.github.com/repos/reconness/reconness-frontend | closed | [BUG] Details popup changes its content in Agent list | bug priority: high severity: major | **Describe the bug**
In Agent list, clicking in different Details links, will cause information changes everytime
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Agent list
2. Click on Details in any agent item and a popup will be displayed with selected Agent information
3. Cick on details in another agent item without closing the firt aget details popup. You will see that both Details popups will display the information of the last clicked agent item
4. Repeat in another agent item and now the 3 details popups will display the same info, related to the last clicked agent item
5. Also, notice that the text is not contained inside the Details popup container
**Expected behavior**
Any open Details popup should be closed when user clicks in another Agent item Details link or, if all Details popup will remain open, they should display the content related to its agent item and not change everytime
**Screenshots**



| 1.0 | [BUG] Details popup changes its content in Agent list - **Describe the bug**
In Agent list, clicking in different Details links, will cause information changes everytime
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Agent list
2. Click on Details in any agent item and a popup will be displayed with selected Agent information
3. Cick on details in another agent item without closing the firt aget details popup. You will see that both Details popups will display the information of the last clicked agent item
4. Repeat in another agent item and now the 3 details popups will display the same info, related to the last clicked agent item
5. Also, notice that the text is not contained inside the Details popup container
**Expected behavior**
Any open Details popup should be closed when user clicks in another Agent item Details link or, if all Details popup will remain open, they should display the content related to its agent item and not change everytime
**Screenshots**



| priority | details popup changes its content in agent list describe the bug in agent list clicking in different details links will cause information changes everytime to reproduce steps to reproduce the behavior go to agent list click on details in any agent item and a popup will be displayed with selected agent information cick on details in another agent item without closing the firt aget details popup you will see that both details popups will display the information of the last clicked agent item repeat in another agent item and now the details popups will display the same info related to the last clicked agent item also notice that the text is not contained inside the details popup container expected behavior any open details popup should be closed when user clicks in another agent item details link or if all details popup will remain open they should display the content related to its agent item and not change everytime screenshots | 1 |
527,191 | 15,325,428,378 | IssuesEvent | 2021-02-26 01:17:31 | jcsnorlax97/rentr | https://api.github.com/repos/jcsnorlax97/rentr | closed | [TASK] Fix Tool bar styling issue on dev | High Priority dev-task frontend | Post ad button should be displayed after user's logged in, and should be on the same line | 1.0 | [TASK] Fix Tool bar styling issue on dev - Post ad button should be displayed after user's logged in, and should be on the same line | priority | fix tool bar styling issue on dev post ad button should be displayed after user s logged in and should be on the same line | 1 |
52,003 | 3,016,743,607 | IssuesEvent | 2015-07-30 06:24:24 | earwig/mwparserfromhell | https://api.github.com/repos/earwig/mwparserfromhell | closed | Automatically build Windows Wheels | aspect: other priority: high | We can probably auto-build wheels on release using http://www.appveyor.com/ ; I'll fiddle around with it sometime soon(tm). | 1.0 | Automatically build Windows Wheels - We can probably auto-build wheels on release using http://www.appveyor.com/ ; I'll fiddle around with it sometime soon(tm). | priority | automatically build windows wheels we can probably auto build wheels on release using i ll fiddle around with it sometime soon tm | 1 |
272,535 | 8,514,546,784 | IssuesEvent | 2018-10-31 18:52:40 | GCTC-NTGC/TalentCloud | https://api.github.com/repos/GCTC-NTGC/TalentCloud | opened | BUG - Manager - Multilingual fields on job poster are not being saved | BED High Priority Medium Complexity | # Description
When creating a job, any fields that must be saved in both french and english (eg title, impact) are not being saved. This is happening whether its created through a form OR through db:seed. | 1.0 | BUG - Manager - Multilingual fields on job poster are not being saved - # Description
When creating a job, any fields that must be saved in both french and english (eg title, impact) are not being saved. This is happening whether its created through a form OR through db:seed. | priority | bug manager multilingual fields on job poster are not being saved description when creating a job any fields that must be saved in both french and english eg title impact are not being saved this is happening whether its created through a form or through db seed | 1 |
241,841 | 7,834,958,923 | IssuesEvent | 2018-06-16 20:53:03 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Briefcase teleporters have no limit on how many items they can teleport at a time, and can absolutely destroy the server. | Bug Priority: High | [Round ID]: # 89253
[Reproduction]: # Buy briefcase teleporter. Put a large amount of items on it. Teleport them back and forth, and watch the server freeze for increasing amounts of time. It's likely due to the massive amount of sparks this creates rather than the teleporting itself, so maybe it's fixable by just reducing the sparks to 1 per teleport, instead of 1 per item?
This was discovered in VR where a syndie research thingy person brought a couple hundred carp and were teleporting them back and forth until they were found and deleted. I don't even think this stuff is logged. | 1.0 | Briefcase teleporters have no limit on how many items they can teleport at a time, and can absolutely destroy the server. - [Round ID]: # 89253
[Reproduction]: # Buy briefcase teleporter. Put a large amount of items on it. Teleport them back and forth, and watch the server freeze for increasing amounts of time. It's likely due to the massive amount of sparks this creates rather than the teleporting itself, so maybe it's fixable by just reducing the sparks to 1 per teleport, instead of 1 per item?
This was discovered in VR where a syndie research thingy person brought a couple hundred carp and were teleporting them back and forth until they were found and deleted. I don't even think this stuff is logged. | priority | briefcase teleporters have no limit on how many items they can teleport at a time and can absolutely destroy the server buy briefcase teleporter put a large amount of items on it teleport them back and forth and watch the server freeze for increasing amounts of time it s likely due to the massive amount of sparks this creates rather than the teleporting itself so maybe it s fixable by just reducing the sparks to per teleport instead of per item this was discovered in vr where a syndie research thingy person brought a couple hundred carp and were teleporting them back and forth until they were found and deleted i don t even think this stuff is logged | 1 |
408,842 | 11,953,109,898 | IssuesEvent | 2020-04-03 20:13:13 | seung-lab/neuroglancer | https://api.github.com/repos/seung-lab/neuroglancer | closed | Cannot get shareable raw link without VPN | Priority: High Realm: SeungLab Type: Bug | When I am using Neuroglancer outside of the lab and without the use of the VPN, I cannot get the legacy style link. After hitting share I get a message "Posting state to dynamicannotationserver...". I shouldn't need to have access to the link shortening server to get the raw link.
Edit: It seems like after a minute, the request does time out and I can hit "cancel" to get the raw link. This is still a bug though because the time to wait is way too long and having to hit cancel to get a raw link is not intuitive. | 1.0 | Cannot get shareable raw link without VPN - When I am using Neuroglancer outside of the lab and without the use of the VPN, I cannot get the legacy style link. After hitting share I get a message "Posting state to dynamicannotationserver...". I shouldn't need to have access to the link shortening server to get the raw link.
Edit: It seems like after a minute, the request does time out and I can hit "cancel" to get the raw link. This is still a bug though because the time to wait is way too long and having to hit cancel to get a raw link is not intuitive. | priority | cannot get shareable raw link without vpn when i am using neuroglancer outside of the lab and without the use of the vpn i cannot get the legacy style link after hitting share i get a message posting state to dynamicannotationserver i shouldn t need to have access to the link shortening server to get the raw link edit it seems like after a minute the request does time out and i can hit cancel to get the raw link this is still a bug though because the time to wait is way too long and having to hit cancel to get a raw link is not intuitive | 1 |
478,825 | 13,786,227,317 | IssuesEvent | 2020-10-09 01:10:23 | colab-coop/hello-voter | https://api.github.com/repos/colab-coop/hello-voter | closed | Remove "Technical Support" button from Help page | Highest Priority | - Remove "Technical Support" button from the Help page for all organizations | 1.0 | Remove "Technical Support" button from Help page - - Remove "Technical Support" button from the Help page for all organizations | priority | remove technical support button from help page remove technical support button from the help page for all organizations | 1 |
559,001 | 16,547,215,949 | IssuesEvent | 2021-05-28 02:30:32 | The-Academic-Observatory/observatory-platform | https://api.github.com/repos/The-Academic-Observatory/observatory-platform | closed | OAeBU Dashboard Mockup in Kibana/ES | OAeBU_Mellon: Elasticsearch_Kibana Priority: High Type: Enhancement | Below is an overview of steps for creating the Mellon/OAeBN Mock-up dashboard.
1. **Create Ver 1.0 of a combined OAEBU schema based on HERMIOS and the 4 data source from the 2 partners**. – Alkim and I worked on this on Friday but did not finish this completely today. He and I will have another chat Monday about the schema. I am sure will need to iterate as this schema is just a starting point, and I know everyone has been thinking about this a lot so far. I expect it to evolve a lot as the whole ID/parts issues are addressed.
2. **Create Dummy Datasets for ANU and UCL using the above schema**. Alkim and I will revisit on Monday as part of locking in the schema. The understanding is that the raw fields from [Google Analytics, GoogleBooks, OAPEN, and JSTOR] will need processing to extract the required data. Although down the track the Telescopes etc. will end up in BigQuery, for this Dummy Data we are just creating this is GoogleSheets/Excel as that is now suitable for the import into ES.
3. **Import the Dummy dataset into Elasticsearch using the new schema**: using the manual CSV upload option (instead of the python/JSON-mapping method used in the OP). When uploading we will need to make sure that the mappings have suitable types for aggregation and text fields are mapped suitable for filtering, as this is an issue with the current data. Titles were a particular topic to watch out for. https://www.elastic.co/blog/importing-csv-and-log-data-into-elasticsearch-with-file-data-visualizer
4. **Update the ANU/UCL Dashboards to match the Mock-up data**. This will hopefully address the issue with being able to filter by title AND time
5. **Check Security is OK** the new read-only roles I have created called (names on Slack). Alkim has accounts already set up that map to these.
6. In the New spaces on K-Dev (i.e. “oaebu-anu-press” and “oaebu-ucl-press” I have updated the advance settings to make the spaces a bit more user friendly (see my tech notes linked to on Slack)
7. Iterate if needed on the above :-)
Solving this issue will make obsolete #346 and #349
| 1.0 | OAeBU Dashboard Mockup in Kibana/ES - Below is an overview of steps for creating the Mellon/OAeBN Mock-up dashboard.
1. **Create Ver 1.0 of a combined OAEBU schema based on HERMIOS and the 4 data source from the 2 partners**. – Alkim and I worked on this on Friday but did not finish this completely today. He and I will have another chat Monday about the schema. I am sure will need to iterate as this schema is just a starting point, and I know everyone has been thinking about this a lot so far. I expect it to evolve a lot as the whole ID/parts issues are addressed.
2. **Create Dummy Datasets for ANU and UCL using the above schema**. Alkim and I will revisit on Monday as part of locking in the schema. The understanding is that the raw fields from [Google Analytics, GoogleBooks, OAPEN, and JSTOR] will need processing to extract the required data. Although down the track the Telescopes etc. will end up in BigQuery, for this Dummy Data we are just creating this is GoogleSheets/Excel as that is now suitable for the import into ES.
3. **Import the Dummy dataset into Elasticsearch using the new schema**: using the manual CSV upload option (instead of the python/JSON-mapping method used in the OP). When uploading we will need to make sure that the mappings have suitable types for aggregation and text fields are mapped suitable for filtering, as this is an issue with the current data. Titles were a particular topic to watch out for. https://www.elastic.co/blog/importing-csv-and-log-data-into-elasticsearch-with-file-data-visualizer
4. **Update the ANU/UCL Dashboards to match the Mock-up data**. This will hopefully address the issue with being able to filter by title AND time
5. **Check Security is OK** the new read-only roles I have created called (names on Slack). Alkim has accounts already set up that map to these.
6. In the New spaces on K-Dev (i.e. “oaebu-anu-press” and “oaebu-ucl-press” I have updated the advance settings to make the spaces a bit more user friendly (see my tech notes linked to on Slack)
7. Iterate if needed on the above :-)
Solving this issue will make obsolete #346 and #349
| priority | oaebu dashboard mockup in kibana es below is an overview of steps for creating the mellon oaebn mock up dashboard create ver of a combined oaebu schema based on hermios and the data source from the partners – alkim and i worked on this on friday but did not finish this completely today he and i will have another chat monday about the schema i am sure will need to iterate as this schema is just a starting point and i know everyone has been thinking about this a lot so far i expect it to evolve a lot as the whole id parts issues are addressed create dummy datasets for anu and ucl using the above schema alkim and i will revisit on monday as part of locking in the schema the understanding is that the raw fields from will need processing to extract the required data although down the track the telescopes etc will end up in bigquery for this dummy data we are just creating this is googlesheets excel as that is now suitable for the import into es import the dummy dataset into elasticsearch using the new schema using the manual csv upload option instead of the python json mapping method used in the op when uploading we will need to make sure that the mappings have suitable types for aggregation and text fields are mapped suitable for filtering as this is an issue with the current data titles were a particular topic to watch out for update the anu ucl dashboards to match the mock up data this will hopefully address the issue with being able to filter by title and time check security is ok the new read only roles i have created called names on slack alkim has accounts already set up that map to these in the new spaces on k dev i e “oaebu anu press” and “oaebu ucl press” i have updated the advance settings to make the spaces a bit more user friendly see my tech notes linked to on slack iterate if needed on the above solving this issue will make obsolete and | 1 |
529,737 | 15,394,733,253 | IssuesEvent | 2021-03-03 18:17:30 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | Developer Portal shows all the versions of the API regardless of display_multiple_versions | API-M 4.0.0 Priority/High Type/Bug | ### Description:
Originally reported by @tharindu1st
## Steps to reproduce:
1. Create a few versions of an API and Publish them
2. Check from the developer portal and all the versions are shown even though `apim.devportal.display_multiple_versions` is set to false (default value)

| 1.0 | Developer Portal shows all the versions of the API regardless of display_multiple_versions - ### Description:
Originally reported by @tharindu1st
## Steps to reproduce:
1. Create a few versions of an API and Publish them
2. Check from the developer portal and all the versions are shown even though `apim.devportal.display_multiple_versions` is set to false (default value)

| priority | developer portal shows all the versions of the api regardless of display multiple versions description originally reported by steps to reproduce create a few versions of an api and publish them check from the developer portal and all the versions are shown even though apim devportal display multiple versions is set to false default value | 1 |
751,559 | 26,250,128,771 | IssuesEvent | 2023-01-05 18:30:05 | AleoHQ/leo | https://api.github.com/repos/AleoHQ/leo | closed | [Feature] Clarify `input` Access Namescope | feature priority-high | ## Feature
This is a discussion on whether `input` access calls should be allowed outside the main function.
## Motivation
This impacts the decision between how the following issues are resolved:
* #1291
* #1297 | 1.0 | [Feature] Clarify `input` Access Namescope - ## Feature
This is a discussion on whether `input` access calls should be allowed outside the main function.
## Motivation
This impacts the decision between how the following issues are resolved:
* #1291
* #1297 | priority | clarify input access namescope feature this is a discussion on whether input access calls should be allowed outside the main function motivation this impacts the decision between how the following issues are resolved | 1 |
505,258 | 14,630,736,492 | IssuesEvent | 2020-12-23 18:20:35 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | ewaybillgst.gov.in - desktop site instead of mobile site | browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-normal | <!-- @browser: Firefox 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:83.0) Gecko/20100101 Firefox/83.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64224 -->
**URL**: https://ewaybillgst.gov.in/CustomError.aspx
**Browser / Version**: Firefox 83.0
**Operating System**: Windows 7
**Tested Another Browser**: No
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/6fe45ae8-e952-4ba8-a1fe-a6d009e62c34.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201105203649</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/833c37db-97c5-45e7-9a85-a82fb2966ce6)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | ewaybillgst.gov.in - desktop site instead of mobile site - <!-- @browser: Firefox 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:83.0) Gecko/20100101 Firefox/83.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64224 -->
**URL**: https://ewaybillgst.gov.in/CustomError.aspx
**Browser / Version**: Firefox 83.0
**Operating System**: Windows 7
**Tested Another Browser**: No
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/6fe45ae8-e952-4ba8-a1fe-a6d009e62c34.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201105203649</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/833c37db-97c5-45e7-9a85-a82fb2966ce6)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | ewaybillgst gov in desktop site instead of mobile site url browser version firefox operating system windows tested another browser no problem type desktop site instead of mobile site description desktop site instead of mobile site steps to reproduce view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
796,042 | 28,097,371,779 | IssuesEvent | 2023-03-30 16:44:36 | opendatahub-io/odh-dashboard | https://api.github.com/repos/opendatahub-io/odh-dashboard | opened | Pipeline Import | priority/high feature/ds-pipelines | The only ability in v1 to create a Pipeline from the Dashboard.
- In #1031 there was a created `ImportPipelineButton` that just needs a modal
- Implement the Import Modal
Mocks: https://www.sketch.com/s/6c72b6b2-2c32-45ff-b07c-381d9d8c8267/a/Oebd322 | 1.0 | Pipeline Import - The only ability in v1 to create a Pipeline from the Dashboard.
- In #1031 there was a created `ImportPipelineButton` that just needs a modal
- Implement the Import Modal
Mocks: https://www.sketch.com/s/6c72b6b2-2c32-45ff-b07c-381d9d8c8267/a/Oebd322 | priority | pipeline import the only ability in to create a pipeline from the dashboard in there was a created importpipelinebutton that just needs a modal implement the import modal mocks | 1 |
579,692 | 17,196,766,636 | IssuesEvent | 2021-07-16 18:40:06 | opensearch-project/OpenSearch | https://api.github.com/repos/opensearch-project/OpenSearch | closed | Support rolling upgrades to OpenSearch 1.0 from ODFE 1.13 and ES 7.10.x | Meta Priority-High backwards-compatibility blocked feedback needed help wanted stalled | Upgrading your cluster from ElasticSearch to OpenSearch and OpenSearch Dashboards is just like upgrading your cluster between versions of ElasticSearch and Kibana. Specifically, OpenSearch supports rolling upgrades and restart upgrades from ElasticSearch 6.8.0 through ElasticSearch 7.10.2 and to OpenSearch 1.0. OpenSearch Dashboards supports restart upgrades from Kibana 6.8.0 through Kibana 7.10.2 and to OpenSearch Dashboards 1.0.
- [ ] OpenSearch Dashboards, see https://github.com/opensearch-project/OpenSearch-Dashboards/issues/334.
- [x] OpenSearch Plugins, see https://github.com/opensearch-project/opensearch-plugins/issues/12.
- [x] OpenSearch itself, see https://github.com/opensearch-project/OpenSearch/issues/640.
- [x] Upgrade/migration guide, https://github.com/opensearch-project/OpenSearch/issues/801
We have [asked the community](https://discuss.opendistrocommunity.dev/t/upgrade-path-to-opensearch/5788) regarding the upgrade path to OpenSearch. Please continue adding your comments.
**Additional context**
 | 1.0 | Support rolling upgrades to OpenSearch 1.0 from ODFE 1.13 and ES 7.10.x - Upgrading your cluster from ElasticSearch to OpenSearch and OpenSearch Dashboards is just like upgrading your cluster between versions of ElasticSearch and Kibana. Specifically, OpenSearch supports rolling upgrades and restart upgrades from ElasticSearch 6.8.0 through ElasticSearch 7.10.2 and to OpenSearch 1.0. OpenSearch Dashboards supports restart upgrades from Kibana 6.8.0 through Kibana 7.10.2 and to OpenSearch Dashboards 1.0.
- [ ] OpenSearch Dashboards, see https://github.com/opensearch-project/OpenSearch-Dashboards/issues/334.
- [x] OpenSearch Plugins, see https://github.com/opensearch-project/opensearch-plugins/issues/12.
- [x] OpenSearch itself, see https://github.com/opensearch-project/OpenSearch/issues/640.
- [x] Upgrade/migration guide, https://github.com/opensearch-project/OpenSearch/issues/801
We have [asked the community](https://discuss.opendistrocommunity.dev/t/upgrade-path-to-opensearch/5788) regarding the upgrade path to OpenSearch. Please continue adding your comments.
**Additional context**
 | priority | support rolling upgrades to opensearch from odfe and es x upgrading your cluster from elasticsearch to opensearch and opensearch dashboards is just like upgrading your cluster between versions of elasticsearch and kibana specifically opensearch supports rolling upgrades and restart upgrades from elasticsearch through elasticsearch and to opensearch opensearch dashboards supports restart upgrades from kibana through kibana and to opensearch dashboards opensearch dashboards see opensearch plugins see opensearch itself see upgrade migration guide we have regarding the upgrade path to opensearch please continue adding your comments additional context | 1 |
465,193 | 13,358,428,553 | IssuesEvent | 2020-08-31 11:41:01 | Abbassihraf/Portfolio-V2 | https://api.github.com/repos/Abbassihraf/Portfolio-V2 | closed | Projects functionality | Back-End Priority: High Status: Completed | ### Projects back-end
- [x] Populate the table with data
- [x] Functionality for displaying projects
- [x] Functionality for adding projects
- [x] Functionality for updating projects
- [x] Functionality for deleting projects
- [x] Security and refactor code
| 1.0 | Projects functionality - ### Projects back-end
- [x] Populate the table with data
- [x] Functionality for displaying projects
- [x] Functionality for adding projects
- [x] Functionality for updating projects
- [x] Functionality for deleting projects
- [x] Security and refactor code
| priority | projects functionality projects back end populate the table with data functionality for displaying projects functionality for adding projects functionality for updating projects functionality for deleting projects security and refactor code | 1 |
749,317 | 26,159,177,952 | IssuesEvent | 2022-12-31 08:03:43 | Rehachoudhary0/hotel_testing | https://api.github.com/repos/Rehachoudhary0/hotel_testing | closed | 🐛 Bug Report: Traveler >Order details refresh issues | bug app High priority | ### 👟 Reproduction steps
order has been delivered but showing pending in list .if there is in issues with API pls share the response with API name .
https://user-images.githubusercontent.com/85510636/208424670-34290844-df34-4a10-8510-5a584f54b6f9.mp4
### 👍 Expected behavior
according to status
### 👎 Actual Behavior
not working for some food not for every .
### ☎️ Log-in number
all
### 📲 User Type
Traveller - Primary
### 🎲 App version
Version 22.12.12+01
### 💻 Operating system
Android
### 👀 Have you spent some time to check if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🏢 Have you read the Code of Conduct?
- [X] I have read the [Code of Conduct](https://github.com/Rehachoudhary0/hotel_testing/blob/HEAD/CODE_OF_CONDUCT.md) | 1.0 | 🐛 Bug Report: Traveler >Order details refresh issues - ### 👟 Reproduction steps
order has been delivered but showing pending in list .if there is in issues with API pls share the response with API name .
https://user-images.githubusercontent.com/85510636/208424670-34290844-df34-4a10-8510-5a584f54b6f9.mp4
### 👍 Expected behavior
according to status
### 👎 Actual Behavior
not working for some food not for every .
### ☎️ Log-in number
all
### 📲 User Type
Traveller - Primary
### 🎲 App version
Version 22.12.12+01
### 💻 Operating system
Android
### 👀 Have you spent some time to check if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🏢 Have you read the Code of Conduct?
- [X] I have read the [Code of Conduct](https://github.com/Rehachoudhary0/hotel_testing/blob/HEAD/CODE_OF_CONDUCT.md) | priority | 🐛 bug report traveler order details refresh issues 👟 reproduction steps order has been delivered but showing pending in list if there is in issues with api pls share the response with api name 👍 expected behavior according to status 👎 actual behavior not working for some food not for every ☎️ log in number all 📲 user type traveller primary 🎲 app version version 💻 operating system android 👀 have you spent some time to check if this issue has been raised before i checked and didn t find similar issue 🏢 have you read the code of conduct i have read the | 1 |
6,686 | 2,590,974,753 | IssuesEvent | 2015-02-18 22:13:45 | OpenSprites/OpenSprites | https://api.github.com/repos/OpenSprites/OpenSprites | opened | Remove permanant snow :P | bug high priority | It's all EdenStudio's fault, he set up the JS snow, but didn't get the timer set up. jk :P I think the snow is making the footer play up somehow. | 1.0 | Remove permanant snow :P - It's all EdenStudio's fault, he set up the JS snow, but didn't get the timer set up. jk :P I think the snow is making the footer play up somehow. | priority | remove permanant snow p it s all edenstudio s fault he set up the js snow but didn t get the timer set up jk p i think the snow is making the footer play up somehow | 1 |
682,158 | 23,334,987,236 | IssuesEvent | 2022-08-09 09:09:48 | wso2-extensions/apim-migration-resources | https://api.github.com/repos/wso2-extensions/apim-migration-resources | closed | [4.0.0 Migration Testing] Error when super tenant API is attached with a fault sequence | bug Priority/Highest APIM-4.0.0 | **Description:**
Getting below error when migrating from 3.0 - 4.0 when there is a super tenant API that has a fault sequence attached.
```
[2022-08-05 11:41:46,174] ERROR - ClassMediatorFactory Error loading class : org.wso2.carbon.apimgt.gateway.handlers.analytics.APIMgtFaultHandler - Class not found
java.lang.ClassNotFoundException: org.wso2.carbon.apimgt.gateway.handlers.analytics.APIMgtFaultHandler cannot be found by synapse-core_2.1.7.wso2v227
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:461) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:423) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:415) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:155) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_281]
at org.apache.synapse.config.xml.ClassMediatorFactory.createSpecificMediator(ClassMediatorFactory.java:100) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.AbstractMediatorFactory.createMediator(AbstractMediatorFactory.java:98) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.MediatorFactoryFinder.getMediator(MediatorFactoryFinder.java:251) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.AbstractListMediatorFactory.addChildren(AbstractListMediatorFactory.java:53) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.SequenceMediatorFactory.createSpecificMediator(SequenceMediatorFactory.java:87) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.AbstractMediatorFactory.createMediator(AbstractMediatorFactory.java:98) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.MediatorFactoryFinder.getMediator(MediatorFactoryFinder.java:251) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.SynapseXMLConfigurationFactory.defineSequence(SynapseXMLConfigurationFactory.java:232) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.wso2.carbon.sequences.services.SequenceAdmin.addSequence(SequenceAdmin.java:412) [org.wso2.carbon.sequences_4.7.99.jar:?]
at org.wso2.carbon.apimgt.gateway.utils.SequenceAdminServiceProxy.addSequence_aroundBody0(SequenceAdminServiceProxy.java:58) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.utils.SequenceAdminServiceProxy.addSequence(SequenceAdminServiceProxy.java:54) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.service.APIGatewayAdmin.deployAPI_aroundBody96(APIGatewayAdmin.java:753) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.service.APIGatewayAdmin.deployAPI(APIGatewayAdmin.java:653) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.InMemoryAPIDeployer.deployAllAPIsAtGatewayStartup_aroundBody4(InMemoryAPIDeployer.java:170) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.InMemoryAPIDeployer.deployAllAPIsAtGatewayStartup(InMemoryAPIDeployer.java:148) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsAtStartup_aroundBody2(GatewayStartupListener.java:121) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsAtStartup(GatewayStartupListener.java:113) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsInGateway_aroundBody18(GatewayStartupListener.java:255) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsInGateway(GatewayStartupListener.java:244) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.access$0_aroundBody28(GatewayStartupListener.java:244) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.access$0(GatewayStartupListener.java:244) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener$AsyncAPIDeployment.run_aroundBody0(GatewayStartupListener.java:338) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener$AsyncAPIDeployment.run(GatewayStartupListener.java:335) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_281]
[2022-08-05 11:41:46,246] WARN - SynapseXMLConfigurationFactory Sequence configuration: Test:v1.0--Fault cannot be built - Continue in fail-safe mode
org.apache.synapse.SynapseException: Error loading class : org.wso2.carbon.apimgt.gateway.handlers.analytics.APIMgtFaultHandler - Class not found
at org.apache.synapse.config.xml.ClassMediatorFactory.createSpecificMediator(ClassMediatorFactory.java:105) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.AbstractMediatorFactory.createMediator(AbstractMediatorFactory.java:98) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.MediatorFactoryFinder.getMediator(MediatorFactoryFinder.java:251) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.AbstractListMediatorFactory.addChildren(AbstractListMediatorFactory.java:53) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.SequenceMediatorFactory.createSpecificMediator(SequenceMediatorFactory.java:87) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.AbstractMediatorFactory.createMediator(AbstractMediatorFactory.java:98) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.MediatorFactoryFinder.getMediator(MediatorFactoryFinder.java:251) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.SynapseXMLConfigurationFactory.defineSequence(SynapseXMLConfigurationFactory.java:232) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.wso2.carbon.sequences.services.SequenceAdmin.addSequence(SequenceAdmin.java:412) [org.wso2.carbon.sequences_4.7.99.jar:?]
at org.wso2.carbon.apimgt.gateway.utils.SequenceAdminServiceProxy.addSequence_aroundBody0(SequenceAdminServiceProxy.java:58) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.utils.SequenceAdminServiceProxy.addSequence(SequenceAdminServiceProxy.java:54) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.service.APIGatewayAdmin.deployAPI_aroundBody96(APIGatewayAdmin.java:753) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.service.APIGatewayAdmin.deployAPI(APIGatewayAdmin.java:653) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.InMemoryAPIDeployer.deployAllAPIsAtGatewayStartup_aroundBody4(InMemoryAPIDeployer.java:170) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.InMemoryAPIDeployer.deployAllAPIsAtGatewayStartup(InMemoryAPIDeployer.java:148) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsAtStartup_aroundBody2(GatewayStartupListener.java:121) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsAtStartup(GatewayStartupListener.java:113) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsInGateway_aroundBody18(GatewayStartupListener.java:255) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsInGateway(GatewayStartupListener.java:244) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.access$0_aroundBody28(GatewayStartupListener.java:244) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.access$0(GatewayStartupListener.java:244) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener$AsyncAPIDeployment.run_aroundBody0(GatewayStartupListener.java:338) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener$AsyncAPIDeployment.run(GatewayStartupListener.java:335) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_281]
Caused by: java.lang.ClassNotFoundException: org.wso2.carbon.apimgt.gateway.handlers.analytics.APIMgtFaultHandler cannot be found by synapse-core_2.1.7.wso2v227
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:461) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:423) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:415) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:155) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_281]
at org.apache.synapse.config.xml.ClassMediatorFactory.createSpecificMediator(ClassMediatorFactory.java:100) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
... 23 more
```
This is because in the below code we remove tenants with isActive set as false.
```
List<Tenant> tenants = APIUtil.getAllTenantsWithSuperTenant();
tenants.removeIf(t -> (!t.isActive()));
```
As per the implementation of getAllTenantsWithSuperTenant in APIUtil we are not setting active status for super tenant when adding it. Therefore `carbon.super` gets removed from the tenant list and the fault sequence modification are not applied.
This is fixed in master branch by removing the logic to remove inactive tenants. Or else we can fix the APIUtil method to set super tenant status as well but that would require a product patch.
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
1. Create an API in 3.0 and attach a fault sequence to it. (a common sequence would do)
2. Migrate to 4.0.0 and the above error will pop up during APIM migration.
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> | 1.0 | [4.0.0 Migration Testing] Error when super tenant API is attached with a fault sequence - **Description:**
Getting below error when migrating from 3.0 - 4.0 when there is a super tenant API that has a fault sequence attached.
```
[2022-08-05 11:41:46,174] ERROR - ClassMediatorFactory Error loading class : org.wso2.carbon.apimgt.gateway.handlers.analytics.APIMgtFaultHandler - Class not found
java.lang.ClassNotFoundException: org.wso2.carbon.apimgt.gateway.handlers.analytics.APIMgtFaultHandler cannot be found by synapse-core_2.1.7.wso2v227
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:461) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:423) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:415) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:155) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_281]
at org.apache.synapse.config.xml.ClassMediatorFactory.createSpecificMediator(ClassMediatorFactory.java:100) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.AbstractMediatorFactory.createMediator(AbstractMediatorFactory.java:98) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.MediatorFactoryFinder.getMediator(MediatorFactoryFinder.java:251) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.AbstractListMediatorFactory.addChildren(AbstractListMediatorFactory.java:53) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.SequenceMediatorFactory.createSpecificMediator(SequenceMediatorFactory.java:87) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.AbstractMediatorFactory.createMediator(AbstractMediatorFactory.java:98) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.MediatorFactoryFinder.getMediator(MediatorFactoryFinder.java:251) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.SynapseXMLConfigurationFactory.defineSequence(SynapseXMLConfigurationFactory.java:232) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.wso2.carbon.sequences.services.SequenceAdmin.addSequence(SequenceAdmin.java:412) [org.wso2.carbon.sequences_4.7.99.jar:?]
at org.wso2.carbon.apimgt.gateway.utils.SequenceAdminServiceProxy.addSequence_aroundBody0(SequenceAdminServiceProxy.java:58) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.utils.SequenceAdminServiceProxy.addSequence(SequenceAdminServiceProxy.java:54) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.service.APIGatewayAdmin.deployAPI_aroundBody96(APIGatewayAdmin.java:753) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.service.APIGatewayAdmin.deployAPI(APIGatewayAdmin.java:653) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.InMemoryAPIDeployer.deployAllAPIsAtGatewayStartup_aroundBody4(InMemoryAPIDeployer.java:170) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.InMemoryAPIDeployer.deployAllAPIsAtGatewayStartup(InMemoryAPIDeployer.java:148) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsAtStartup_aroundBody2(GatewayStartupListener.java:121) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsAtStartup(GatewayStartupListener.java:113) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsInGateway_aroundBody18(GatewayStartupListener.java:255) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsInGateway(GatewayStartupListener.java:244) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.access$0_aroundBody28(GatewayStartupListener.java:244) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.access$0(GatewayStartupListener.java:244) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener$AsyncAPIDeployment.run_aroundBody0(GatewayStartupListener.java:338) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener$AsyncAPIDeployment.run(GatewayStartupListener.java:335) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_281]
[2022-08-05 11:41:46,246] WARN - SynapseXMLConfigurationFactory Sequence configuration: Test:v1.0--Fault cannot be built - Continue in fail-safe mode
org.apache.synapse.SynapseException: Error loading class : org.wso2.carbon.apimgt.gateway.handlers.analytics.APIMgtFaultHandler - Class not found
at org.apache.synapse.config.xml.ClassMediatorFactory.createSpecificMediator(ClassMediatorFactory.java:105) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.AbstractMediatorFactory.createMediator(AbstractMediatorFactory.java:98) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.MediatorFactoryFinder.getMediator(MediatorFactoryFinder.java:251) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.AbstractListMediatorFactory.addChildren(AbstractListMediatorFactory.java:53) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.SequenceMediatorFactory.createSpecificMediator(SequenceMediatorFactory.java:87) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.AbstractMediatorFactory.createMediator(AbstractMediatorFactory.java:98) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.MediatorFactoryFinder.getMediator(MediatorFactoryFinder.java:251) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.apache.synapse.config.xml.SynapseXMLConfigurationFactory.defineSequence(SynapseXMLConfigurationFactory.java:232) [synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
at org.wso2.carbon.sequences.services.SequenceAdmin.addSequence(SequenceAdmin.java:412) [org.wso2.carbon.sequences_4.7.99.jar:?]
at org.wso2.carbon.apimgt.gateway.utils.SequenceAdminServiceProxy.addSequence_aroundBody0(SequenceAdminServiceProxy.java:58) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.utils.SequenceAdminServiceProxy.addSequence(SequenceAdminServiceProxy.java:54) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.service.APIGatewayAdmin.deployAPI_aroundBody96(APIGatewayAdmin.java:753) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.service.APIGatewayAdmin.deployAPI(APIGatewayAdmin.java:653) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.InMemoryAPIDeployer.deployAllAPIsAtGatewayStartup_aroundBody4(InMemoryAPIDeployer.java:170) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.InMemoryAPIDeployer.deployAllAPIsAtGatewayStartup(InMemoryAPIDeployer.java:148) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsAtStartup_aroundBody2(GatewayStartupListener.java:121) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsAtStartup(GatewayStartupListener.java:113) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsInGateway_aroundBody18(GatewayStartupListener.java:255) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.deployArtifactsInGateway(GatewayStartupListener.java:244) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.access$0_aroundBody28(GatewayStartupListener.java:244) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener.access$0(GatewayStartupListener.java:244) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener$AsyncAPIDeployment.run_aroundBody0(GatewayStartupListener.java:338) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at org.wso2.carbon.apimgt.gateway.listeners.GatewayStartupListener$AsyncAPIDeployment.run(GatewayStartupListener.java:335) [org.wso2.carbon.apimgt.gateway_9.0.174.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_281]
Caused by: java.lang.ClassNotFoundException: org.wso2.carbon.apimgt.gateway.handlers.analytics.APIMgtFaultHandler cannot be found by synapse-core_2.1.7.wso2v227
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:461) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:423) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:415) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:155) ~[org.eclipse.osgi_3.14.0.v20190517-1309.jar:?]
at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_281]
at org.apache.synapse.config.xml.ClassMediatorFactory.createSpecificMediator(ClassMediatorFactory.java:100) ~[synapse-core_2.1.7.wso2v227.jar:2.1.7-wso2v227]
... 23 more
```
This is because in the below code we remove tenants with isActive set as false.
```
List<Tenant> tenants = APIUtil.getAllTenantsWithSuperTenant();
tenants.removeIf(t -> (!t.isActive()));
```
As per the implementation of getAllTenantsWithSuperTenant in APIUtil we are not setting active status for super tenant when adding it. Therefore `carbon.super` gets removed from the tenant list and the fault sequence modification are not applied.
This is fixed in master branch by removing the logic to remove inactive tenants. Or else we can fix the APIUtil method to set super tenant status as well but that would require a product patch.
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
1. Create an API in 3.0 and attach a fault sequence to it. (a common sequence would do)
2. Migrate to 4.0.0 and the above error will pop up during APIM migration.
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> | priority | error when super tenant api is attached with a fault sequence description getting below error when migrating from when there is a super tenant api that has a fault sequence attached error classmediatorfactory error loading class org carbon apimgt gateway handlers analytics apimgtfaulthandler class not found java lang classnotfoundexception org carbon apimgt gateway handlers analytics apimgtfaulthandler cannot be found by synapse core at org eclipse osgi internal loader bundleloader findclassinternal bundleloader java at org eclipse osgi internal loader bundleloader findclass bundleloader java at org eclipse osgi internal loader bundleloader findclass bundleloader java at org eclipse osgi internal loader moduleclassloader loadclass moduleclassloader java at java lang classloader loadclass classloader java at org apache synapse config xml classmediatorfactory createspecificmediator classmediatorfactory java at org apache synapse config xml abstractmediatorfactory createmediator abstractmediatorfactory java at org apache synapse config xml mediatorfactoryfinder getmediator mediatorfactoryfinder java at org apache synapse config xml abstractlistmediatorfactory addchildren abstractlistmediatorfactory java at org apache synapse config xml sequencemediatorfactory createspecificmediator sequencemediatorfactory java at org apache synapse config xml abstractmediatorfactory createmediator abstractmediatorfactory java at org apache synapse config xml mediatorfactoryfinder getmediator mediatorfactoryfinder java at org apache synapse config xml synapsexmlconfigurationfactory definesequence synapsexmlconfigurationfactory java at org carbon sequences services sequenceadmin addsequence sequenceadmin java at org carbon apimgt gateway utils sequenceadminserviceproxy addsequence sequenceadminserviceproxy java at org carbon apimgt gateway utils sequenceadminserviceproxy addsequence sequenceadminserviceproxy java at org carbon apimgt gateway service apigatewayadmin deployapi apigatewayadmin java at org carbon apimgt gateway service apigatewayadmin deployapi apigatewayadmin java at org carbon apimgt gateway inmemoryapideployer deployallapisatgatewaystartup inmemoryapideployer java at org carbon apimgt gateway inmemoryapideployer deployallapisatgatewaystartup inmemoryapideployer java at org carbon apimgt gateway listeners gatewaystartuplistener deployartifactsatstartup gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener deployartifactsatstartup gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener deployartifactsingateway gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener deployartifactsingateway gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener access gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener access gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener asyncapideployment run gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener asyncapideployment run gatewaystartuplistener java at java lang thread run thread java warn synapsexmlconfigurationfactory sequence configuration test fault cannot be built continue in fail safe mode org apache synapse synapseexception error loading class org carbon apimgt gateway handlers analytics apimgtfaulthandler class not found at org apache synapse config xml classmediatorfactory createspecificmediator classmediatorfactory java at org apache synapse config xml abstractmediatorfactory createmediator abstractmediatorfactory java at org apache synapse config xml mediatorfactoryfinder getmediator mediatorfactoryfinder java at org apache synapse config xml abstractlistmediatorfactory addchildren abstractlistmediatorfactory java at org apache synapse config xml sequencemediatorfactory createspecificmediator sequencemediatorfactory java at org apache synapse config xml abstractmediatorfactory createmediator abstractmediatorfactory java at org apache synapse config xml mediatorfactoryfinder getmediator mediatorfactoryfinder java at org apache synapse config xml synapsexmlconfigurationfactory definesequence synapsexmlconfigurationfactory java at org carbon sequences services sequenceadmin addsequence sequenceadmin java at org carbon apimgt gateway utils sequenceadminserviceproxy addsequence sequenceadminserviceproxy java at org carbon apimgt gateway utils sequenceadminserviceproxy addsequence sequenceadminserviceproxy java at org carbon apimgt gateway service apigatewayadmin deployapi apigatewayadmin java at org carbon apimgt gateway service apigatewayadmin deployapi apigatewayadmin java at org carbon apimgt gateway inmemoryapideployer deployallapisatgatewaystartup inmemoryapideployer java at org carbon apimgt gateway inmemoryapideployer deployallapisatgatewaystartup inmemoryapideployer java at org carbon apimgt gateway listeners gatewaystartuplistener deployartifactsatstartup gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener deployartifactsatstartup gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener deployartifactsingateway gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener deployartifactsingateway gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener access gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener access gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener asyncapideployment run gatewaystartuplistener java at org carbon apimgt gateway listeners gatewaystartuplistener asyncapideployment run gatewaystartuplistener java at java lang thread run thread java caused by java lang classnotfoundexception org carbon apimgt gateway handlers analytics apimgtfaulthandler cannot be found by synapse core at org eclipse osgi internal loader bundleloader findclassinternal bundleloader java at org eclipse osgi internal loader bundleloader findclass bundleloader java at org eclipse osgi internal loader bundleloader findclass bundleloader java at org eclipse osgi internal loader moduleclassloader loadclass moduleclassloader java at java lang classloader loadclass classloader java at org apache synapse config xml classmediatorfactory createspecificmediator classmediatorfactory java more this is because in the below code we remove tenants with isactive set as false list tenants apiutil getalltenantswithsupertenant tenants removeif t t isactive as per the implementation of getalltenantswithsupertenant in apiutil we are not setting active status for super tenant when adding it therefore carbon super gets removed from the tenant list and the fault sequence modification are not applied this is fixed in master branch by removing the logic to remove inactive tenants or else we can fix the apiutil method to set super tenant status as well but that would require a product patch suggested labels suggested assignees affected product version os db other environment details and versions steps to reproduce create an api in and attach a fault sequence to it a common sequence would do migrate to and the above error will pop up during apim migration related issues | 1 |
616,252 | 19,297,343,751 | IssuesEvent | 2021-12-12 20:06:39 | CAKES-coding/swe574-group2 | https://api.github.com/repos/CAKES-coding/swe574-group2 | opened | Refactor activity and annotation models | enhancement backend effort: 1 priority: High | There are warnings for activity and annotation model.
- Deprecated JSON field abject needs to be changed.
- Redundant parameters from activity fields needs to be removed.
```
WARNINGS:
wikodeApp.Activity.activity_JSON: (fields.W904) django.contrib.postgres.fields.JSONField is deprecated. Support for it (except in historical migrations) will be removed in Django 4.0.
HINT: Use django.db.models.JSONField instead.
wikodeApp.Activity.target_id: (fields.W122) 'max_length' is ignored when used with IntegerField.
HINT: Remove 'max_length' from field
wikodeApp.Activity.user_id: (fields.W122) 'max_length' is ignored when used with IntegerField.
HINT: Remove 'max_length' from field
wikodeApp.Annotation.annotation_JSON: (fields.W904) django.contrib.postgres.fields.JSONField is deprecated. Support for it (except in historical migrations) will be removed in Django 4.0.
HINT: Use django.db.models.JSONField instead.
```
- User is held as an interger it needs to be a one to many relation from User model.
```python
user_id = models.IntegerField(max_length=8)
```
| 1.0 | Refactor activity and annotation models - There are warnings for activity and annotation model.
- Deprecated JSON field abject needs to be changed.
- Redundant parameters from activity fields needs to be removed.
```
WARNINGS:
wikodeApp.Activity.activity_JSON: (fields.W904) django.contrib.postgres.fields.JSONField is deprecated. Support for it (except in historical migrations) will be removed in Django 4.0.
HINT: Use django.db.models.JSONField instead.
wikodeApp.Activity.target_id: (fields.W122) 'max_length' is ignored when used with IntegerField.
HINT: Remove 'max_length' from field
wikodeApp.Activity.user_id: (fields.W122) 'max_length' is ignored when used with IntegerField.
HINT: Remove 'max_length' from field
wikodeApp.Annotation.annotation_JSON: (fields.W904) django.contrib.postgres.fields.JSONField is deprecated. Support for it (except in historical migrations) will be removed in Django 4.0.
HINT: Use django.db.models.JSONField instead.
```
- User is held as an interger it needs to be a one to many relation from User model.
```python
user_id = models.IntegerField(max_length=8)
```
| priority | refactor activity and annotation models there are warnings for activity and annotation model deprecated json field abject needs to be changed redundant parameters from activity fields needs to be removed warnings wikodeapp activity activity json fields django contrib postgres fields jsonfield is deprecated support for it except in historical migrations will be removed in django hint use django db models jsonfield instead wikodeapp activity target id fields max length is ignored when used with integerfield hint remove max length from field wikodeapp activity user id fields max length is ignored when used with integerfield hint remove max length from field wikodeapp annotation annotation json fields django contrib postgres fields jsonfield is deprecated support for it except in historical migrations will be removed in django hint use django db models jsonfield instead user is held as an interger it needs to be a one to many relation from user model python user id models integerfield max length | 1 |
199,658 | 6,992,905,864 | IssuesEvent | 2017-12-15 09:14:09 | giangm9/enduel | https://api.github.com/repos/giangm9/enduel | closed | Thiết kế lại tìm được trận lúc ấn ACCEPT -> ACCEPTED | High priority | để ACCECT bthg gây bối rối ko biết ấn được chưa
khi ấn được chuyển thành accepted và có nếu chữ waiting for opponent thì tốt | 1.0 | Thiết kế lại tìm được trận lúc ấn ACCEPT -> ACCEPTED - để ACCECT bthg gây bối rối ko biết ấn được chưa
khi ấn được chuyển thành accepted và có nếu chữ waiting for opponent thì tốt | priority | thiết kế lại tìm được trận lúc ấn accept accepted để accect bthg gây bối rối ko biết ấn được chưa khi ấn được chuyển thành accepted và có nếu chữ waiting for opponent thì tốt | 1 |
677,991 | 23,182,299,719 | IssuesEvent | 2022-08-01 04:16:11 | okTurtles/group-income | https://api.github.com/repos/okTurtles/group-income | closed | Conflicting strings (French) | Kind:Bug Level:Starter Priority:High Note:Accessibility Note:UI/UX | ### Problem
In merging @snowteamer's latest changes into my PR (#1250), the following string conflicts in `french.strings` were detected:
```
<<<<<<< HEAD
"Loading events for '{contract}' from server..." = "Loading events for '{contract}' from server...";
=======
"Loading events from server..." = "Synchronisation avec le serveur...";
>>>>>>> master
```
I was forced to remove the translation as it no longer applies.
### Solution
Either comment on my PR with the correct translation for the new string, or fix after my PR is merged.
The new string is the one at the top (`"Loading events for '{contract}' from server..."`)
EDIT: so this string (for UX reasons) has been changed completely to simply: `Loading events from server...` | 1.0 | Conflicting strings (French) - ### Problem
In merging @snowteamer's latest changes into my PR (#1250), the following string conflicts in `french.strings` were detected:
```
<<<<<<< HEAD
"Loading events for '{contract}' from server..." = "Loading events for '{contract}' from server...";
=======
"Loading events from server..." = "Synchronisation avec le serveur...";
>>>>>>> master
```
I was forced to remove the translation as it no longer applies.
### Solution
Either comment on my PR with the correct translation for the new string, or fix after my PR is merged.
The new string is the one at the top (`"Loading events for '{contract}' from server..."`)
EDIT: so this string (for UX reasons) has been changed completely to simply: `Loading events from server...` | priority | conflicting strings french problem in merging snowteamer s latest changes into my pr the following string conflicts in french strings were detected head loading events for contract from server loading events for contract from server loading events from server synchronisation avec le serveur master i was forced to remove the translation as it no longer applies solution either comment on my pr with the correct translation for the new string or fix after my pr is merged the new string is the one at the top loading events for contract from server edit so this string for ux reasons has been changed completely to simply loading events from server | 1 |
370,066 | 10,924,841,354 | IssuesEvent | 2019-11-22 11:02:09 | bounswe/bounswe2019group10 | https://api.github.com/repos/bounswe/bounswe2019group10 | closed | Change the writing model and writingResult model | Priority: High Relation: Backend Type: Enhancement | Writing model should be changed so that it also holds name of the writing.
Writing model should include a field that indicates whether or not given writing is scored by an evaluator yet. | 1.0 | Change the writing model and writingResult model - Writing model should be changed so that it also holds name of the writing.
Writing model should include a field that indicates whether or not given writing is scored by an evaluator yet. | priority | change the writing model and writingresult model writing model should be changed so that it also holds name of the writing writing model should include a field that indicates whether or not given writing is scored by an evaluator yet | 1 |
228,622 | 7,564,888,692 | IssuesEvent | 2018-04-21 02:48:50 | EvictionLab/eviction-maps | https://api.github.com/repos/EvictionLab/eviction-maps | closed | Error when bar graph is default selection | bug high priority | Getting errors when loading a link that has bar graph set as the default:

| 1.0 | Error when bar graph is default selection - Getting errors when loading a link that has bar graph set as the default:

| priority | error when bar graph is default selection getting errors when loading a link that has bar graph set as the default | 1 |
370,611 | 10,934,662,045 | IssuesEvent | 2019-11-24 13:19:14 | bounswe/bounswe2019group8 | https://api.github.com/repos/bounswe/bounswe2019group8 | opened | Trading Equipment Screen Improment | Effort: High Mobile New feature Platform: Mobile Priority: High Status: In Progress | **Actions:**
1. Add trading equipment show daily price in chart.
1. Add zoom in chart screen in landscape mode.
1. Add ask and bid price to trading equipments.
**Notes:**
- [ ] Add trading equipment show daily price in chart.
- [ ] Add zoom in chart screen in landscape mode.
- [ ] Add ask and bid price to trading equipments.
**Deadline:** 25.10.2019 - 21.43 | 1.0 | Trading Equipment Screen Improment - **Actions:**
1. Add trading equipment show daily price in chart.
1. Add zoom in chart screen in landscape mode.
1. Add ask and bid price to trading equipments.
**Notes:**
- [ ] Add trading equipment show daily price in chart.
- [ ] Add zoom in chart screen in landscape mode.
- [ ] Add ask and bid price to trading equipments.
**Deadline:** 25.10.2019 - 21.43 | priority | trading equipment screen improment actions add trading equipment show daily price in chart add zoom in chart screen in landscape mode add ask and bid price to trading equipments notes add trading equipment show daily price in chart add zoom in chart screen in landscape mode add ask and bid price to trading equipments deadline | 1 |
491,984 | 14,174,770,738 | IssuesEvent | 2020-11-12 20:27:00 | ProjectSidewalk/SidewalkWebpage | https://api.github.com/repos/ProjectSidewalk/SidewalkWebpage | opened | No longer getting GSV depth data | Audit Priority: Very High bug | As of about 3 days ago, it seems that we've started receiving 404 errors on any requests for depth data for GSV panoramas. This is happening both on the main Sidewalk websites and for the pano scraper.
The end result on the main website is that we are unable to compute latitude and longitude when someone places a label, which means that labels are not showing up on maps and such. For the pano scraper, this means that the download of the XML that contains both the metadata and the depth data is failing.
What's still working is that the audit interface seems to work normally. Everything that does not explicitly require lat/lngs seems to be working. This basically means that new labels are not being shown on maps and can't be included in nightly clustering, but the audit and validation interfaces seem to be functioning normally. In addition, the downloading of the panorama images is working fine in the pano scraper.
If at some point we are able to get the depth data for the panos that have new labels on them, we should be able to write some custom code to fill in the lat/lng values in the table, since we save all the other info we would need to compute the lat/lng, provided we have associated depth data.
I have yet to find anyone talking about this and a potential fix online. Additionally, this is not a part of GSV's formal API, which means that they really could just remove it at any time. And that is what I'm worried about right now.
@jonfroehlich I think the immediate question is whether we should make the audit page "closed for maintenance" for now. There is quite a bit of data coming in right now in both SPGG and Seattle. If we are able to acquire the depth data soon and backfill the tables, then I would hate to halt people's momentum by closing down the audit page. But if we are never able to get the depth data back and we have to completely rethink the audit interface, then we'll be wasting a lot of peoples' efforts in the near future. | 1.0 | No longer getting GSV depth data - As of about 3 days ago, it seems that we've started receiving 404 errors on any requests for depth data for GSV panoramas. This is happening both on the main Sidewalk websites and for the pano scraper.
The end result on the main website is that we are unable to compute latitude and longitude when someone places a label, which means that labels are not showing up on maps and such. For the pano scraper, this means that the download of the XML that contains both the metadata and the depth data is failing.
What's still working is that the audit interface seems to work normally. Everything that does not explicitly require lat/lngs seems to be working. This basically means that new labels are not being shown on maps and can't be included in nightly clustering, but the audit and validation interfaces seem to be functioning normally. In addition, the downloading of the panorama images is working fine in the pano scraper.
If at some point we are able to get the depth data for the panos that have new labels on them, we should be able to write some custom code to fill in the lat/lng values in the table, since we save all the other info we would need to compute the lat/lng, provided we have associated depth data.
I have yet to find anyone talking about this and a potential fix online. Additionally, this is not a part of GSV's formal API, which means that they really could just remove it at any time. And that is what I'm worried about right now.
@jonfroehlich I think the immediate question is whether we should make the audit page "closed for maintenance" for now. There is quite a bit of data coming in right now in both SPGG and Seattle. If we are able to acquire the depth data soon and backfill the tables, then I would hate to halt people's momentum by closing down the audit page. But if we are never able to get the depth data back and we have to completely rethink the audit interface, then we'll be wasting a lot of peoples' efforts in the near future. | priority | no longer getting gsv depth data as of about days ago it seems that we ve started receiving errors on any requests for depth data for gsv panoramas this is happening both on the main sidewalk websites and for the pano scraper the end result on the main website is that we are unable to compute latitude and longitude when someone places a label which means that labels are not showing up on maps and such for the pano scraper this means that the download of the xml that contains both the metadata and the depth data is failing what s still working is that the audit interface seems to work normally everything that does not explicitly require lat lngs seems to be working this basically means that new labels are not being shown on maps and can t be included in nightly clustering but the audit and validation interfaces seem to be functioning normally in addition the downloading of the panorama images is working fine in the pano scraper if at some point we are able to get the depth data for the panos that have new labels on them we should be able to write some custom code to fill in the lat lng values in the table since we save all the other info we would need to compute the lat lng provided we have associated depth data i have yet to find anyone talking about this and a potential fix online additionally this is not a part of gsv s formal api which means that they really could just remove it at any time and that is what i m worried about right now jonfroehlich i think the immediate question is whether we should make the audit page closed for maintenance for now there is quite a bit of data coming in right now in both spgg and seattle if we are able to acquire the depth data soon and backfill the tables then i would hate to halt people s momentum by closing down the audit page but if we are never able to get the depth data back and we have to completely rethink the audit interface then we ll be wasting a lot of peoples efforts in the near future | 1 |
629,322 | 20,029,296,355 | IssuesEvent | 2022-02-02 02:16:30 | oncokb/oncokb | https://api.github.com/repos/oncokb/oncokb | opened | Opt out tracking token usage | high priority | We need a method to opt out tracking token usage for certain users due to the nature of their jobs. | 1.0 | Opt out tracking token usage - We need a method to opt out tracking token usage for certain users due to the nature of their jobs. | priority | opt out tracking token usage we need a method to opt out tracking token usage for certain users due to the nature of their jobs | 1 |
522,142 | 15,158,053,988 | IssuesEvent | 2021-02-12 00:13:40 | NOAA-GSL/MATS | https://api.github.com/repos/NOAA-GSL/MATS | closed | Reported by Dave: MATS plotting fails for extremely large amounts of data | Priority: High Project: MATS Status: Closed Type: Task | ---
Author Name: **molly.b.smith** (@mollybsmith-noaa)
Original Redmine Issue: 56450, https://vlab.ncep.noaa.gov/redmine/issues/56450
Original Date: 2018-10-18
Original Assignee: randy.pierce
---
MATS stores its results in Mongo, which apparently has a 16 Mb limit on what you can put in collections, so MATS is failing on graphs longer than about 6 years of hourly data. Randy suggests replacing Mongo storage with file caching.
| 1.0 | Reported by Dave: MATS plotting fails for extremely large amounts of data - ---
Author Name: **molly.b.smith** (@mollybsmith-noaa)
Original Redmine Issue: 56450, https://vlab.ncep.noaa.gov/redmine/issues/56450
Original Date: 2018-10-18
Original Assignee: randy.pierce
---
MATS stores its results in Mongo, which apparently has a 16 Mb limit on what you can put in collections, so MATS is failing on graphs longer than about 6 years of hourly data. Randy suggests replacing Mongo storage with file caching.
| priority | reported by dave mats plotting fails for extremely large amounts of data author name molly b smith mollybsmith noaa original redmine issue original date original assignee randy pierce mats stores its results in mongo which apparently has a mb limit on what you can put in collections so mats is failing on graphs longer than about years of hourly data randy suggests replacing mongo storage with file caching | 1 |
213,187 | 7,246,528,480 | IssuesEvent | 2018-02-14 21:59:47 | feathersjs-ecosystem/feathers-generator | https://api.github.com/repos/feathersjs-ecosystem/feathers-generator | closed | elasticsearch adapter | Priority: High Status: Available Type: Enhancement | ### overview
implement a [elasticsearch](https://github.com/feathersjs/feathers-elasticsearch/) adapter
### tasks
- [ ] add model choice
- [ ] add model choice dependencies
- [ ] add model templates
- [ ] test that it works | 1.0 | elasticsearch adapter - ### overview
implement a [elasticsearch](https://github.com/feathersjs/feathers-elasticsearch/) adapter
### tasks
- [ ] add model choice
- [ ] add model choice dependencies
- [ ] add model templates
- [ ] test that it works | priority | elasticsearch adapter overview implement a adapter tasks add model choice add model choice dependencies add model templates test that it works | 1 |
50,930 | 3,008,185,857 | IssuesEvent | 2015-07-27 19:55:07 | IntellectualSites/PlotSquared | https://api.github.com/repos/IntellectualSites/PlotSquared | closed | WorldEdit masking issues (unknown cause / cannot replicate) | [!] Bug [‼] high priority [✖] Cannot Replicate | I'm hum problem , players with WorldEdit can USE sphere of command and accentuate as Streets | 1.0 | WorldEdit masking issues (unknown cause / cannot replicate) - I'm hum problem , players with WorldEdit can USE sphere of command and accentuate as Streets | priority | worldedit masking issues unknown cause cannot replicate i m hum problem players with worldedit can use sphere of command and accentuate as streets | 1 |
534,599 | 15,630,210,992 | IssuesEvent | 2021-03-22 01:41:43 | tuanngo1001/tubtrunk | https://api.github.com/repos/tuanngo1001/tubtrunk | reopened | [USER] As a user, I want to view and trade items in the store with my rewards. | High Priority Medium Risk user story | - **Points**: 2
- Related to feature #5
Tasks:
- [x] Add Store Controller
- [x] Display store items
- [x] Implement Store Item Models
- [x] Display popup when buying store items
- [x] Items can be purchased using rewards | 1.0 | [USER] As a user, I want to view and trade items in the store with my rewards. - - **Points**: 2
- Related to feature #5
Tasks:
- [x] Add Store Controller
- [x] Display store items
- [x] Implement Store Item Models
- [x] Display popup when buying store items
- [x] Items can be purchased using rewards | priority | as a user i want to view and trade items in the store with my rewards points related to feature tasks add store controller display store items implement store item models display popup when buying store items items can be purchased using rewards | 1 |
282,707 | 8,709,805,948 | IssuesEvent | 2018-12-06 14:55:21 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | Filter Events interface fails to open | Misc: Bug Priority: High | ### Expected behavior
FilterEvents window should open
### Actual behavior
`ValueError: substring not found at line 12 in '<Interface>' caused by line 19 in 'mantid/scripts/FilterEvents/__init__.py'` error
And windows does not open
### Steps to reproduce the behavior
MantidPlot -> Interfaces -> Utility -> FilterEvents
### Platforms affected
Would assume all
Only tested on Ubuntu 18 and 16, Debug and Release mode
| 1.0 | Filter Events interface fails to open - ### Expected behavior
FilterEvents window should open
### Actual behavior
`ValueError: substring not found at line 12 in '<Interface>' caused by line 19 in 'mantid/scripts/FilterEvents/__init__.py'` error
And windows does not open
### Steps to reproduce the behavior
MantidPlot -> Interfaces -> Utility -> FilterEvents
### Platforms affected
Would assume all
Only tested on Ubuntu 18 and 16, Debug and Release mode
| priority | filter events interface fails to open expected behavior filterevents window should open actual behavior valueerror substring not found at line in caused by line in mantid scripts filterevents init py error and windows does not open steps to reproduce the behavior mantidplot interfaces utility filterevents platforms affected would assume all only tested on ubuntu and debug and release mode | 1 |
226,804 | 7,523,186,074 | IssuesEvent | 2018-04-12 23:29:40 | Fireboyd78/mm2hook | https://api.github.com/repos/Fireboyd78/mm2hook | opened | [Optional] Improvements to use of night textures and headlights/streetlights. | RV6 Roadmap enhancement help wanted high priority | - [ ] Allow night textures to be used in the evening.
- [ ] Use headlights in every condition except noon clear and noon cloudy, street lights in evening. | 1.0 | [Optional] Improvements to use of night textures and headlights/streetlights. - - [ ] Allow night textures to be used in the evening.
- [ ] Use headlights in every condition except noon clear and noon cloudy, street lights in evening. | priority | improvements to use of night textures and headlights streetlights allow night textures to be used in the evening use headlights in every condition except noon clear and noon cloudy street lights in evening | 1 |
502,069 | 14,539,670,571 | IssuesEvent | 2020-12-15 12:13:44 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | cw being charged twice at purchase time | Category: Cloud Worlds Priority: High Status: Fixed | 
In screenshot, subscription info shows that it paid $15 start up fee.
And it seems to imply that '1 cycle' had completed and charged another $15 subscription fee.
Users was charged $15 twice, on day of purchase. | 1.0 | cw being charged twice at purchase time - 
In screenshot, subscription info shows that it paid $15 start up fee.
And it seems to imply that '1 cycle' had completed and charged another $15 subscription fee.
Users was charged $15 twice, on day of purchase. | priority | cw being charged twice at purchase time in screenshot subscription info shows that it paid start up fee and it seems to imply that cycle had completed and charged another subscription fee users was charged twice on day of purchase | 1 |
753,231 | 26,341,987,478 | IssuesEvent | 2023-01-10 18:27:57 | devtobi/template-repo | https://api.github.com/repos/devtobi/template-repo | closed | Add a good wiki | Priority: High Status: Completed Type: User Story | A good wiki like https://github.com/flutter/flutter/wiki with the minimal common pages has to be added:
tasklist:
- [x] add good home page
- [x] add custom footer with links to (Support, Contributing, Code of Conduct, Github Pages, Funding, Security)
- [x] add custom sidebar with placeholder image and links to (Contributing, Support, Code of Conduct, Roadmap)
- [x] add wiki content | 1.0 | Add a good wiki - A good wiki like https://github.com/flutter/flutter/wiki with the minimal common pages has to be added:
tasklist:
- [x] add good home page
- [x] add custom footer with links to (Support, Contributing, Code of Conduct, Github Pages, Funding, Security)
- [x] add custom sidebar with placeholder image and links to (Contributing, Support, Code of Conduct, Roadmap)
- [x] add wiki content | priority | add a good wiki a good wiki like with the minimal common pages has to be added tasklist add good home page add custom footer with links to support contributing code of conduct github pages funding security add custom sidebar with placeholder image and links to contributing support code of conduct roadmap add wiki content | 1 |
364,208 | 10,760,511,621 | IssuesEvent | 2019-10-31 18:45:16 | DevAdventCalendar/DevAdventCalendar | https://api.github.com/repos/DevAdventCalendar/DevAdventCalendar | closed | Fix newsletter validation messages formatting | bug high priority | Newsletter validation messages should not break displaying of the GDPR checkbox.
**Screenshots**

| 1.0 | Fix newsletter validation messages formatting - Newsletter validation messages should not break displaying of the GDPR checkbox.
**Screenshots**

| priority | fix newsletter validation messages formatting newsletter validation messages should not break displaying of the gdpr checkbox screenshots | 1 |
419,591 | 12,225,323,619 | IssuesEvent | 2020-05-03 04:28:01 | findthemasks/findthemasks | https://api.github.com/repos/findthemasks/findthemasks | closed | Add a Sheet column that records the row-ID at the instant a new row enters Combined | data moderation high-priority | As we prepare for MailChimp and Airtable migration, we're going to need to retain the row ID as we move between platforms. This row-ID is, unfortunately, different from "Source Row", which we already have and would like to keep.
The immediate need is to be able to include the row-ID in MailChimp email-subject lines. We would like to be able to use this column by Sunday (5/2) night. | 1.0 | Add a Sheet column that records the row-ID at the instant a new row enters Combined - As we prepare for MailChimp and Airtable migration, we're going to need to retain the row ID as we move between platforms. This row-ID is, unfortunately, different from "Source Row", which we already have and would like to keep.
The immediate need is to be able to include the row-ID in MailChimp email-subject lines. We would like to be able to use this column by Sunday (5/2) night. | priority | add a sheet column that records the row id at the instant a new row enters combined as we prepare for mailchimp and airtable migration we re going to need to retain the row id as we move between platforms this row id is unfortunately different from source row which we already have and would like to keep the immediate need is to be able to include the row id in mailchimp email subject lines we would like to be able to use this column by sunday night | 1 |
230,482 | 7,610,808,541 | IssuesEvent | 2018-05-01 10:30:19 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | Incorrect angle used in conversion to Q in RRO | Component: Reflectometry Misc: Bug Priority: High | In `ReflectometryReductionOne2`, when we do summation in lambda we use [GroupDetectors](https://github.com/mantidproject/mantid/blob/master/Framework/Algorithms/src/ReflectometryWorkflowBase2.cpp#L408) to sum in lambda across a set of detectors of interest.
The resulting summed workspace has a single spectrum with detector angle taken from the centre of the detectors of interest. The range of detectors of interest is often centered on the scattering angle `twoTheta`, but not always, so the final detector angle is somewhat arbitrary.
This angle gets used by `ConvertUnits` in the conversion to `Q` and therefore we get different X values depending on the set of detectors used. Instead, the correct angle to use in the conversion to `Q` is half the scattering angle `twoTheta`. Even when the range of detectors is notionally centered on the "twoTheta pixel", the pixel's centre may not be exactly at `twoTheta` anyway, so the angle is still slightly wrong.
There are a few approaches we could take, but from discussion with scientists we have agreed to correct the position of the detector group as an extra step in `ReflectometryReductionOne`. It should be corrected to the angle provided by the `ThetaIn` property, if this is given. If `ThetaIn` is not given, no correction should be done. Ideally this correction would be done in a separate algorithm because it would also be useful for ILL.
Note that this should only be done when summing in lambda. The key steps of the `ReflectometryReductionOne` algorithm when summing in lambda start [here](https://github.com/mantidproject/mantid/blob/master/Framework/Algorithms/src/ReflectometryReductionOne2.cpp#L606). `makeDetectorWS` calls `GroupDetectors`. Then some normalisations by monitors and transmission runs are done. The new correction should be done after these normalisations, before the result of `makeIvsLam` is returned. The result of this function is then converted to Q using `ConvertUnits`.
Note that we have an algorithm called `SpecularReflectionPositionCorrect` which corrects detector positions, but this works on instrument component names so I don't think it can be used for a detector group. It may be useful to take a look at it, though.
## To test
```
import math
def printResults(name, thetaIn):
ws=mtd[name]
print "Expected theta: ", thetaIn
print "Actual theta: ", ws.detectorTwoTheta(ws.getDetector(0)) * 180 / math.pi / 2
# OFFSPEC example - TOF workspace detectors are NOT in the correct position on load so ARE corrected by RROA
LoadISISNexus(Filename='OFFSPEC44684', OutputWorkspace='TOF_44684')
LoadISISNexus(Filename='OFFSPEC44683', OutputWorkspace='TRANS_44683')
ReflectometryReductionOneAuto(InputWorkspace='TOF_44684', ProcessingInstructions='390-410', ThetaIn=0.3, CorrectDetectors='1', DetectorCorrectionType='RotateAroundSample', WavelengthMin=1.2, WavelengthMax=14, I0MonitorIndex=1, MonitorBackgroundWavelengthMin=15, MonitorBackgroundWavelengthMax=20, MonitorIntegrationWavelengthMin=2, MonitorIntegrationWavelengthMax=14, FirstTransmissionRun='TRANS_44683', MomentumTransferStep=0.02, ScaleFactor=1, OutputWorkspaceBinned='IvsQ_binned_44684', OutputWorkspace='IvsQ_44684', OutputWorkspaceWavelength='IvsLam_44684')
printResults('IvsQ_binned_44684', 0.3)
# INTER example - TOF workspace detectors ARE in the correct position on load so are NOT corrected by RROA
LoadISISNexus(Filename='INTER00043583', OutputWorkspace='TOF_43583')
ReflectometryReductionOneAuto(InputWorkspace='TOF_43583', CorrectDetectors='0', AnalysisMode='MultiDetectorAnalysis', ProcessingInstructions='45-69', ThetaIn=1.3, MomentumTransferStep=-0.0001, ScaleFactor=1, OutputWorkspaceBinned='IvsQ_binned_43583', OutputWorkspace='IvsQ_43583', OutputWorkspaceWavelength='IvsLam_43583')
printResults('IvsQ_binned_43583', 1.3)
``` | 1.0 | Incorrect angle used in conversion to Q in RRO - In `ReflectometryReductionOne2`, when we do summation in lambda we use [GroupDetectors](https://github.com/mantidproject/mantid/blob/master/Framework/Algorithms/src/ReflectometryWorkflowBase2.cpp#L408) to sum in lambda across a set of detectors of interest.
The resulting summed workspace has a single spectrum with detector angle taken from the centre of the detectors of interest. The range of detectors of interest is often centered on the scattering angle `twoTheta`, but not always, so the final detector angle is somewhat arbitrary.
This angle gets used by `ConvertUnits` in the conversion to `Q` and therefore we get different X values depending on the set of detectors used. Instead, the correct angle to use in the conversion to `Q` is half the scattering angle `twoTheta`. Even when the range of detectors is notionally centered on the "twoTheta pixel", the pixel's centre may not be exactly at `twoTheta` anyway, so the angle is still slightly wrong.
There are a few approaches we could take, but from discussion with scientists we have agreed to correct the position of the detector group as an extra step in `ReflectometryReductionOne`. It should be corrected to the angle provided by the `ThetaIn` property, if this is given. If `ThetaIn` is not given, no correction should be done. Ideally this correction would be done in a separate algorithm because it would also be useful for ILL.
Note that this should only be done when summing in lambda. The key steps of the `ReflectometryReductionOne` algorithm when summing in lambda start [here](https://github.com/mantidproject/mantid/blob/master/Framework/Algorithms/src/ReflectometryReductionOne2.cpp#L606). `makeDetectorWS` calls `GroupDetectors`. Then some normalisations by monitors and transmission runs are done. The new correction should be done after these normalisations, before the result of `makeIvsLam` is returned. The result of this function is then converted to Q using `ConvertUnits`.
Note that we have an algorithm called `SpecularReflectionPositionCorrect` which corrects detector positions, but this works on instrument component names so I don't think it can be used for a detector group. It may be useful to take a look at it, though.
## To test
```
import math
def printResults(name, thetaIn):
ws=mtd[name]
print "Expected theta: ", thetaIn
print "Actual theta: ", ws.detectorTwoTheta(ws.getDetector(0)) * 180 / math.pi / 2
# OFFSPEC example - TOF workspace detectors are NOT in the correct position on load so ARE corrected by RROA
LoadISISNexus(Filename='OFFSPEC44684', OutputWorkspace='TOF_44684')
LoadISISNexus(Filename='OFFSPEC44683', OutputWorkspace='TRANS_44683')
ReflectometryReductionOneAuto(InputWorkspace='TOF_44684', ProcessingInstructions='390-410', ThetaIn=0.3, CorrectDetectors='1', DetectorCorrectionType='RotateAroundSample', WavelengthMin=1.2, WavelengthMax=14, I0MonitorIndex=1, MonitorBackgroundWavelengthMin=15, MonitorBackgroundWavelengthMax=20, MonitorIntegrationWavelengthMin=2, MonitorIntegrationWavelengthMax=14, FirstTransmissionRun='TRANS_44683', MomentumTransferStep=0.02, ScaleFactor=1, OutputWorkspaceBinned='IvsQ_binned_44684', OutputWorkspace='IvsQ_44684', OutputWorkspaceWavelength='IvsLam_44684')
printResults('IvsQ_binned_44684', 0.3)
# INTER example - TOF workspace detectors ARE in the correct position on load so are NOT corrected by RROA
LoadISISNexus(Filename='INTER00043583', OutputWorkspace='TOF_43583')
ReflectometryReductionOneAuto(InputWorkspace='TOF_43583', CorrectDetectors='0', AnalysisMode='MultiDetectorAnalysis', ProcessingInstructions='45-69', ThetaIn=1.3, MomentumTransferStep=-0.0001, ScaleFactor=1, OutputWorkspaceBinned='IvsQ_binned_43583', OutputWorkspace='IvsQ_43583', OutputWorkspaceWavelength='IvsLam_43583')
printResults('IvsQ_binned_43583', 1.3)
``` | priority | incorrect angle used in conversion to q in rro in when we do summation in lambda we use to sum in lambda across a set of detectors of interest the resulting summed workspace has a single spectrum with detector angle taken from the centre of the detectors of interest the range of detectors of interest is often centered on the scattering angle twotheta but not always so the final detector angle is somewhat arbitrary this angle gets used by convertunits in the conversion to q and therefore we get different x values depending on the set of detectors used instead the correct angle to use in the conversion to q is half the scattering angle twotheta even when the range of detectors is notionally centered on the twotheta pixel the pixel s centre may not be exactly at twotheta anyway so the angle is still slightly wrong there are a few approaches we could take but from discussion with scientists we have agreed to correct the position of the detector group as an extra step in reflectometryreductionone it should be corrected to the angle provided by the thetain property if this is given if thetain is not given no correction should be done ideally this correction would be done in a separate algorithm because it would also be useful for ill note that this should only be done when summing in lambda the key steps of the reflectometryreductionone algorithm when summing in lambda start makedetectorws calls groupdetectors then some normalisations by monitors and transmission runs are done the new correction should be done after these normalisations before the result of makeivslam is returned the result of this function is then converted to q using convertunits note that we have an algorithm called specularreflectionpositioncorrect which corrects detector positions but this works on instrument component names so i don t think it can be used for a detector group it may be useful to take a look at it though to test import math def printresults name thetain ws mtd print expected theta thetain print actual theta ws detectortwotheta ws getdetector math pi offspec example tof workspace detectors are not in the correct position on load so are corrected by rroa loadisisnexus filename outputworkspace tof loadisisnexus filename outputworkspace trans reflectometryreductiononeauto inputworkspace tof processinginstructions thetain correctdetectors detectorcorrectiontype rotatearoundsample wavelengthmin wavelengthmax monitorbackgroundwavelengthmin monitorbackgroundwavelengthmax monitorintegrationwavelengthmin monitorintegrationwavelengthmax firsttransmissionrun trans momentumtransferstep scalefactor outputworkspacebinned ivsq binned outputworkspace ivsq outputworkspacewavelength ivslam printresults ivsq binned inter example tof workspace detectors are in the correct position on load so are not corrected by rroa loadisisnexus filename outputworkspace tof reflectometryreductiononeauto inputworkspace tof correctdetectors analysismode multidetectoranalysis processinginstructions thetain momentumtransferstep scalefactor outputworkspacebinned ivsq binned outputworkspace ivsq outputworkspacewavelength ivslam printresults ivsq binned | 1 |
564,202 | 16,720,578,999 | IssuesEvent | 2021-06-10 06:43:14 | ita-social-projects/TeachUA | https://api.github.com/repos/ita-social-projects/TeachUA | closed | [Гуртки page] Address is not displayed near the location icon on the club's pop-up window | Priority: High bug | Environment: Windows 7, Service Pack 1, Google Chrome, 90.0.4430.212 (Розробка) (64-розрядна версія).
Reproducible: always.
Build found: the last commit 07.06.2021
Preconditions
Open [https://speak-ukrainian.org.ua/dev/](https://speak-ukrainian.org.ua/dev/)
**Steps to reproduce**
1. Go to 'Гуртки' menu navigation tab
2. Click on the club's card and open club's pop-up window (not on 'Детальніше' button)
Actual result
The address is not displayed near the location icon on the club's pop-up window

Expected result
Address of the club should be displayed.
If club is without location - 'Доступний онлайн' should be displayed
| 1.0 | [Гуртки page] Address is not displayed near the location icon on the club's pop-up window - Environment: Windows 7, Service Pack 1, Google Chrome, 90.0.4430.212 (Розробка) (64-розрядна версія).
Reproducible: always.
Build found: the last commit 07.06.2021
Preconditions
Open [https://speak-ukrainian.org.ua/dev/](https://speak-ukrainian.org.ua/dev/)
**Steps to reproduce**
1. Go to 'Гуртки' menu navigation tab
2. Click on the club's card and open club's pop-up window (not on 'Детальніше' button)
Actual result
The address is not displayed near the location icon on the club's pop-up window

Expected result
Address of the club should be displayed.
If club is without location - 'Доступний онлайн' should be displayed
| priority | address is not displayed near the location icon on the club s pop up window environment windows service pack google chrome розробка розрядна версія reproducible always build found the last commit preconditions open steps to reproduce go to гуртки menu navigation tab click on the club s card and open club s pop up window not on детальніше button actual result the address is not displayed near the location icon on the club s pop up window expected result address of the club should be displayed if club is without location доступний онлайн should be displayed | 1 |
178,588 | 6,612,557,739 | IssuesEvent | 2017-09-20 04:45:31 | crowdAI/crowdai | https://api.github.com/repos/crowdAI/crowdai | opened | Email preferences / email broken | bug high priority | When I click on Email Preferences link in crowdAI digest email, I first get the message the email preferences link is invalid.
<img width="266" alt="screen shot 2017-09-20 at 6 34 22 am" src="https://user-images.githubusercontent.com/215057/30626970-c5d6fe6a-9dcd-11e7-9fb8-76fa5ee08010.png">
Then, once I log in, I get a "too many redirects" error.
<img width="416" alt="screen shot 2017-09-20 at 6 35 46 am" src="https://user-images.githubusercontent.com/215057/30627016-fcff9604-9dcd-11e7-912f-92dd6ab9143b.png">
If I manually go to the email preferences, then I cannot change them - if I save, I get an error, which itself looks like an error:
<img width="683" alt="screen shot 2017-09-20 at 6 43 05 am" src="https://user-images.githubusercontent.com/215057/30627203-21fd9e8c-9dcf-11e7-9935-b28d6b5a071e.png">
This was orginally reported by a frustrated user who wrote:
>Hey,
>
> The email preferences link in your daily digest isn't working. It gives a "TOO MANY REDIRECTS" error in 2 different browsers.
>
> The "save button" on your website's email preferences directs to an error webpage (code 5 0?!)
>
> I love what you do, but that situation is on the ridiculous side. I recommend you sort it out before everyone reports your emails as spam :/
>
> Have a good day! | 1.0 | Email preferences / email broken - When I click on Email Preferences link in crowdAI digest email, I first get the message the email preferences link is invalid.
<img width="266" alt="screen shot 2017-09-20 at 6 34 22 am" src="https://user-images.githubusercontent.com/215057/30626970-c5d6fe6a-9dcd-11e7-9fb8-76fa5ee08010.png">
Then, once I log in, I get a "too many redirects" error.
<img width="416" alt="screen shot 2017-09-20 at 6 35 46 am" src="https://user-images.githubusercontent.com/215057/30627016-fcff9604-9dcd-11e7-912f-92dd6ab9143b.png">
If I manually go to the email preferences, then I cannot change them - if I save, I get an error, which itself looks like an error:
<img width="683" alt="screen shot 2017-09-20 at 6 43 05 am" src="https://user-images.githubusercontent.com/215057/30627203-21fd9e8c-9dcf-11e7-9935-b28d6b5a071e.png">
This was orginally reported by a frustrated user who wrote:
>Hey,
>
> The email preferences link in your daily digest isn't working. It gives a "TOO MANY REDIRECTS" error in 2 different browsers.
>
> The "save button" on your website's email preferences directs to an error webpage (code 5 0?!)
>
> I love what you do, but that situation is on the ridiculous side. I recommend you sort it out before everyone reports your emails as spam :/
>
> Have a good day! | priority | email preferences email broken when i click on email preferences link in crowdai digest email i first get the message the email preferences link is invalid img width alt screen shot at am src then once i log in i get a too many redirects error img width alt screen shot at am src if i manually go to the email preferences then i cannot change them if i save i get an error which itself looks like an error img width alt screen shot at am src this was orginally reported by a frustrated user who wrote hey the email preferences link in your daily digest isn t working it gives a too many redirects error in different browsers the save button on your website s email preferences directs to an error webpage code i love what you do but that situation is on the ridiculous side i recommend you sort it out before everyone reports your emails as spam have a good day | 1 |
215,069 | 7,286,422,352 | IssuesEvent | 2018-02-23 09:41:26 | kcigeospatial/balt_co_ETL | https://api.github.com/repos/kcigeospatial/balt_co_ETL | closed | Stormwater - RestBMP - LAST_CHANGE is not calculated for target database | high priority item | For the output data field RestBMP.LAST_CHANGE, the system should calculate values as the current system date at the time of processing. Currently, values are outputted as null in this field (target field is required).
This same logic is also present for BMPPOI, and it is working correctly there in some instances. See #49 for details of the circumstances in which this datestamp logic works correctly. | 1.0 | Stormwater - RestBMP - LAST_CHANGE is not calculated for target database - For the output data field RestBMP.LAST_CHANGE, the system should calculate values as the current system date at the time of processing. Currently, values are outputted as null in this field (target field is required).
This same logic is also present for BMPPOI, and it is working correctly there in some instances. See #49 for details of the circumstances in which this datestamp logic works correctly. | priority | stormwater restbmp last change is not calculated for target database for the output data field restbmp last change the system should calculate values as the current system date at the time of processing currently values are outputted as null in this field target field is required this same logic is also present for bmppoi and it is working correctly there in some instances see for details of the circumstances in which this datestamp logic works correctly | 1 |
561,429 | 16,617,132,983 | IssuesEvent | 2021-06-02 18:13:33 | YunoHost/issues | https://api.github.com/repos/YunoHost/issues | closed | Changing user password for user in web admin does not change it | :key: Authentication :maple_leaf: Web administration :space_invader: bug Priority: high | I was wondering if i am the only one. Version 4.2.4 ( after many upgrades )
Discovered that today after a user lost his password.
Creating a new user with password does work, but changing user password from web admin does not.
I had to do it in phpldapadmin using 'clear' password that ends into 'crypt' once saved.
I tried with another user and hit same problem, didn't yet investigated it more deeply. | 1.0 | Changing user password for user in web admin does not change it - I was wondering if i am the only one. Version 4.2.4 ( after many upgrades )
Discovered that today after a user lost his password.
Creating a new user with password does work, but changing user password from web admin does not.
I had to do it in phpldapadmin using 'clear' password that ends into 'crypt' once saved.
I tried with another user and hit same problem, didn't yet investigated it more deeply. | priority | changing user password for user in web admin does not change it i was wondering if i am the only one version after many upgrades discovered that today after a user lost his password creating a new user with password does work but changing user password from web admin does not i had to do it in phpldapadmin using clear password that ends into crypt once saved i tried with another user and hit same problem didn t yet investigated it more deeply | 1 |
687,325 | 23,522,004,979 | IssuesEvent | 2022-08-19 07:07:27 | CarmenMariaMP/Clap | https://api.github.com/repos/CarmenMariaMP/Clap | closed | H10 - View registered user profiles | Epic high-priority | Allow users to view the profiles of companies and content creators. | 1.0 | H10 - View registered user profiles - Allow users to view the profiles of companies and content creators. | priority | view registered user profiles allow users to view the profiles of companies and content creators | 1 |
786,195 | 27,638,186,843 | IssuesEvent | 2023-03-10 15:57:59 | open-sauced/insights | https://api.github.com/repos/open-sauced/insights | closed | Bug: user profile page returns 404 | 🐛 bug high-priority | ### Describe the bug
When i try to visit my profile, i get an error
<img width="1433" alt="image" src="https://user-images.githubusercontent.com/62995161/217594089-8a834b53-4f78-4d57-a44b-9b4b6952f47d.png">
but if I change the username on the URL to all lowercase, I get to see the profile
<img width="1436" alt="image" src="https://user-images.githubusercontent.com/62995161/217594466-51974b21-e056-4cce-ab03-5583c9601101.png">
### Steps to reproduce
Click and visit a user profile from the user whose username is not all lowercase
### Affected services
insights.opensauced.pizza
### Platforms
Desktop, Mobile
### Browsers
_No response_
### Environment
Production, Development, Testing
### Additional context
converting users username to lowercase before plugin to any link will solve this issue :pizza:
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
### Contributing Docs
- [X] I agree to follow this project's Contribution Docs | 1.0 | Bug: user profile page returns 404 - ### Describe the bug
When i try to visit my profile, i get an error
<img width="1433" alt="image" src="https://user-images.githubusercontent.com/62995161/217594089-8a834b53-4f78-4d57-a44b-9b4b6952f47d.png">
but if I change the username on the URL to all lowercase, I get to see the profile
<img width="1436" alt="image" src="https://user-images.githubusercontent.com/62995161/217594466-51974b21-e056-4cce-ab03-5583c9601101.png">
### Steps to reproduce
Click and visit a user profile from the user whose username is not all lowercase
### Affected services
insights.opensauced.pizza
### Platforms
Desktop, Mobile
### Browsers
_No response_
### Environment
Production, Development, Testing
### Additional context
converting users username to lowercase before plugin to any link will solve this issue :pizza:
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
### Contributing Docs
- [X] I agree to follow this project's Contribution Docs | priority | bug user profile page returns describe the bug when i try to visit my profile i get an error img width alt image src but if i change the username on the url to all lowercase i get to see the profile img width alt image src steps to reproduce click and visit a user profile from the user whose username is not all lowercase affected services insights opensauced pizza platforms desktop mobile browsers no response environment production development testing additional context converting users username to lowercase before plugin to any link will solve this issue pizza code of conduct i agree to follow this project s code of conduct contributing docs i agree to follow this project s contribution docs | 1 |
457,290 | 13,154,234,592 | IssuesEvent | 2020-08-10 06:14:48 | OpenSRP/opensrp-server-web | https://api.github.com/repos/OpenSRP/opensrp-server-web | closed | Getting Bad Gateway error while connecting from reveal-frontend to stage server | Priority: High | We were trying to connect to reveal stage using a local reveal-frontend copy. we have modified .env files to point to stage. But while try to login we get this error
Steps we did:
1) git clone reveal-front end for the below version

2) Modified .env file (shared via email)
3) yarn and yarn start
4) navigated to localhost:3000 and clicked OpenSRP Login
5) Got the error

| 1.0 | Getting Bad Gateway error while connecting from reveal-frontend to stage server - We were trying to connect to reveal stage using a local reveal-frontend copy. we have modified .env files to point to stage. But while try to login we get this error
Steps we did:
1) git clone reveal-front end for the below version

2) Modified .env file (shared via email)
3) yarn and yarn start
4) navigated to localhost:3000 and clicked OpenSRP Login
5) Got the error

| priority | getting bad gateway error while connecting from reveal frontend to stage server we were trying to connect to reveal stage using a local reveal frontend copy we have modified env files to point to stage but while try to login we get this error steps we did git clone reveal front end for the below version modified env file shared via email yarn and yarn start navigated to localhost and clicked opensrp login got the error | 1 |
617,218 | 19,345,391,100 | IssuesEvent | 2021-12-15 10:16:02 | epam/Indigo | https://api.github.com/repos/epam/Indigo | closed | Error message while trying to open InChi AuxInfo file in Standalone mode | Bug High priority | **Steps to Reproduce**
1. Execute in standalone mode /v2/indigo/layout with the following parameters:
struct:
[struct.zip](https://github.com/epam/Indigo/files/6991521/struct.zip)
options: {smart-layout: true, ignore-stereochemistry-errors: true, mass-skip-error-on-pseudoatoms: false,…}
output_format: "chemical/x-mdl-molfile"
**Expected behavior**
The same structure as in in struct.zip
**Actual behavior**
Error: Convert error!
Given string could not be loaded as (query or plaint) molecule or reaction,
see the error messages: 'scanner: appendLine(): end of stream',
'scanner: appendLine(): end of stream',
'RXN loader: bad header InChI=1S/C4H10/c1-3-4-2/h3-4H2,1-2H3',
'RXN loader: bad header InChI=1S/C4H10/c1-3-4-2/h3-4H2,1-2H3'
| 1.0 | Error message while trying to open InChi AuxInfo file in Standalone mode - **Steps to Reproduce**
1. Execute in standalone mode /v2/indigo/layout with the following parameters:
struct:
[struct.zip](https://github.com/epam/Indigo/files/6991521/struct.zip)
options: {smart-layout: true, ignore-stereochemistry-errors: true, mass-skip-error-on-pseudoatoms: false,…}
output_format: "chemical/x-mdl-molfile"
**Expected behavior**
The same structure as in in struct.zip
**Actual behavior**
Error: Convert error!
Given string could not be loaded as (query or plaint) molecule or reaction,
see the error messages: 'scanner: appendLine(): end of stream',
'scanner: appendLine(): end of stream',
'RXN loader: bad header InChI=1S/C4H10/c1-3-4-2/h3-4H2,1-2H3',
'RXN loader: bad header InChI=1S/C4H10/c1-3-4-2/h3-4H2,1-2H3'
| priority | error message while trying to open inchi auxinfo file in standalone mode steps to reproduce execute in standalone mode indigo layout with the following parameters struct options smart layout true ignore stereochemistry errors true mass skip error on pseudoatoms false … output format chemical x mdl molfile expected behavior the same structure as in in struct zip actual behavior error convert error given string could not be loaded as query or plaint molecule or reaction see the error messages scanner appendline end of stream scanner appendline end of stream rxn loader bad header inchi rxn loader bad header inchi | 1 |
323,554 | 9,856,402,180 | IssuesEvent | 2019-06-19 22:02:18 | openstax/tutor | https://api.github.com/repos/openstax/tutor | opened | APUSH: APLO Tags Not Displaying in Question Library | priority1-high | ### Description
Production teams can now add AP Learning Objectives to APUSH assessments -- woohoo!
These tags should display in the Question Library so that instructors can use them to select an assign assessments. Currently they are not displaying. This may hold up production and QA efforts.
**To Reproduce**
Steps to reproduce the behavior are
1. Go to 'https://tutor-content.openstax.org/course/88/questions#'
2. Get assessments for all sections, or at least 1.2
3. Review assessment ID 17564 and note that no APLO tag displays
(APLO tags start with HTS or RP)
4. Note that assessment ID 17564 is tagged with two HTS and one RP in Exercises. (You can do this by searching by UID 17564 in Exercises: https://exercises.openstax.org/search)
**Expected behavior**
The HTS and RP tags should display, as they do on the assessment in Exercises:

*Additional context**
Original card to implement APLO tags in APUSH: https://github.com/openstax/tutor/issues/385
### Acceptance Tests
Additional acceptance tests that are required by this bug fix.
**title**: template
**categories**: Interactive Component, Navigation, TOC, Math, Book Content, Other
> GIVEN something
> AND something else
> WHEN something
> AND something else
> THEN something
> AND something else
### Checklist for Done
2-CODE
- [ ] A pull request is opened and linked to this issue. The automated pull request checks pass.
- [ ] The change has been approved by other developers.
- [ ] The pull request is merged into master, and the change branch is deleted.
- [ ] If there are new acceptance tests, they are finalized in "Given / When / Then" format.
- [ ] Regression test categories are identified on the issue. Categories should be referenced if any changes might affect them, even if the intended functionality is unchanged.
- [ ] Add milestone tag
4A-UX REVIEW
- [ ] If UI changes, UX reviews and verifies.
5A-FUNCT VER
- [ ] All the acceptance tests for this issue have passed.
5C-REGRESSION
- [ ] The test plan in testrail passes
| 1.0 | APUSH: APLO Tags Not Displaying in Question Library - ### Description
Production teams can now add AP Learning Objectives to APUSH assessments -- woohoo!
These tags should display in the Question Library so that instructors can use them to select an assign assessments. Currently they are not displaying. This may hold up production and QA efforts.
**To Reproduce**
Steps to reproduce the behavior are
1. Go to 'https://tutor-content.openstax.org/course/88/questions#'
2. Get assessments for all sections, or at least 1.2
3. Review assessment ID 17564 and note that no APLO tag displays
(APLO tags start with HTS or RP)
4. Note that assessment ID 17564 is tagged with two HTS and one RP in Exercises. (You can do this by searching by UID 17564 in Exercises: https://exercises.openstax.org/search)
**Expected behavior**
The HTS and RP tags should display, as they do on the assessment in Exercises:

*Additional context**
Original card to implement APLO tags in APUSH: https://github.com/openstax/tutor/issues/385
### Acceptance Tests
Additional acceptance tests that are required by this bug fix.
**title**: template
**categories**: Interactive Component, Navigation, TOC, Math, Book Content, Other
> GIVEN something
> AND something else
> WHEN something
> AND something else
> THEN something
> AND something else
### Checklist for Done
2-CODE
- [ ] A pull request is opened and linked to this issue. The automated pull request checks pass.
- [ ] The change has been approved by other developers.
- [ ] The pull request is merged into master, and the change branch is deleted.
- [ ] If there are new acceptance tests, they are finalized in "Given / When / Then" format.
- [ ] Regression test categories are identified on the issue. Categories should be referenced if any changes might affect them, even if the intended functionality is unchanged.
- [ ] Add milestone tag
4A-UX REVIEW
- [ ] If UI changes, UX reviews and verifies.
5A-FUNCT VER
- [ ] All the acceptance tests for this issue have passed.
5C-REGRESSION
- [ ] The test plan in testrail passes
| priority | apush aplo tags not displaying in question library description production teams can now add ap learning objectives to apush assessments woohoo these tags should display in the question library so that instructors can use them to select an assign assessments currently they are not displaying this may hold up production and qa efforts to reproduce steps to reproduce the behavior are go to get assessments for all sections or at least review assessment id and note that no aplo tag displays aplo tags start with hts or rp note that assessment id is tagged with two hts and one rp in exercises you can do this by searching by uid in exercises expected behavior the hts and rp tags should display as they do on the assessment in exercises additional context original card to implement aplo tags in apush acceptance tests additional acceptance tests that are required by this bug fix title template categories interactive component navigation toc math book content other given something and something else when something and something else then something and something else checklist for done code a pull request is opened and linked to this issue the automated pull request checks pass the change has been approved by other developers the pull request is merged into master and the change branch is deleted if there are new acceptance tests they are finalized in given when then format regression test categories are identified on the issue categories should be referenced if any changes might affect them even if the intended functionality is unchanged add milestone tag ux review if ui changes ux reviews and verifies funct ver all the acceptance tests for this issue have passed regression the test plan in testrail passes | 1 |
200,756 | 7,015,894,054 | IssuesEvent | 2017-12-21 00:04:42 | OracleStation/OracleStation | https://api.github.com/repos/OracleStation/OracleStation | opened | Atmos pipes no longer appear on the map files aaaaaaaaaaaaaaaaaaaaa | FUCK High Priority Oversight | 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | 1.0 | Atmos pipes no longer appear on the map files aaaaaaaaaaaaaaaaaaaaa - 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | priority | atmos pipes no longer appear on the map files aaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | 1 |
399,591 | 11,757,422,379 | IssuesEvent | 2020-03-13 13:39:36 | zebscripts/Labeler | https://api.github.com/repos/zebscripts/Labeler | opened | `labels.json` write permissions | (ノಠ益ಠ)ノ彡┻━┻ Priority: High :fire: Status: In Progress :clock1030: Type: Bug :beetle: | I've encountered a bug where `labels.json` gets saved to the current directory, instead of the package directory. I'm not sure if it could even be considered a bug, as some users might actually like it that way. Though it was not intentional and I personally don't like it.
Version [@2.1.1](https://www.npmjs.com/package/labeler/v/2.1.1) uses current directory, version [@2.1.4](https://www.npmjs.com/package/labeler/v/2.1.4) uses package directory. The drawback of using the package directory is having to run `labeler` with sudo, since `fs` needs root permissions to write into the package directory.
I'm still unsure on how I'll handle this situation. Pick the version that suits you best for now. | 1.0 | `labels.json` write permissions - I've encountered a bug where `labels.json` gets saved to the current directory, instead of the package directory. I'm not sure if it could even be considered a bug, as some users might actually like it that way. Though it was not intentional and I personally don't like it.
Version [@2.1.1](https://www.npmjs.com/package/labeler/v/2.1.1) uses current directory, version [@2.1.4](https://www.npmjs.com/package/labeler/v/2.1.4) uses package directory. The drawback of using the package directory is having to run `labeler` with sudo, since `fs` needs root permissions to write into the package directory.
I'm still unsure on how I'll handle this situation. Pick the version that suits you best for now. | priority | labels json write permissions i ve encountered a bug where labels json gets saved to the current directory instead of the package directory i m not sure if it could even be considered a bug as some users might actually like it that way though it was not intentional and i personally don t like it version uses current directory version uses package directory the drawback of using the package directory is having to run labeler with sudo since fs needs root permissions to write into the package directory i m still unsure on how i ll handle this situation pick the version that suits you best for now | 1 |
68,515 | 3,289,011,273 | IssuesEvent | 2015-10-29 17:14:39 | neuropoly/spinalcordtoolbox | https://api.github.com/repos/neuropoly/spinalcordtoolbox | closed | Template registration on example data (T2) is not correct | bug priority: high sct_register_to_template | 
Could be the steps that are not good, or the straightening that produces a wrong warping field... | 1.0 | Template registration on example data (T2) is not correct - 
Could be the steps that are not good, or the straightening that produces a wrong warping field... | priority | template registration on example data is not correct could be the steps that are not good or the straightening that produces a wrong warping field | 1 |
324,791 | 9,912,492,299 | IssuesEvent | 2019-06-28 09:09:09 | openbmc/openbmc-test-automation | https://api.github.com/repos/openbmc/openbmc-test-automation | closed | Redfish local user management | Priority High | - [x] Test plan
- [x] Test and verify
- [x] Automation
Sandhya preparing the test plan for it
Docs:
https://github.com/openbmc/docs/blob/master/user_management.md
Dev: https://github.com/ibm-openbmc/dev/issues/236
| 1.0 | Redfish local user management - - [x] Test plan
- [x] Test and verify
- [x] Automation
Sandhya preparing the test plan for it
Docs:
https://github.com/openbmc/docs/blob/master/user_management.md
Dev: https://github.com/ibm-openbmc/dev/issues/236
| priority | redfish local user management test plan test and verify automation sandhya preparing the test plan for it docs dev | 1 |
354,776 | 10,572,171,503 | IssuesEvent | 2019-10-07 08:59:24 | eaudeweb/ozone | https://api.github.com/repos/eaudeweb/ozone | closed | Allow saving invalid records during data entry state | Component: Vue Priority: Highest Status: In progress | This implies that we have to do some extra checks when the forms are submitted (don't assume that all records are already valid). | 1.0 | Allow saving invalid records during data entry state - This implies that we have to do some extra checks when the forms are submitted (don't assume that all records are already valid). | priority | allow saving invalid records during data entry state this implies that we have to do some extra checks when the forms are submitted don t assume that all records are already valid | 1 |
277,172 | 8,621,521,371 | IssuesEvent | 2018-11-20 17:33:08 | Automattic/simplenote-electron | https://api.github.com/repos/Automattic/simplenote-electron | closed | Notes disppear when you change your password while logged in | bug priority-high | #### Steps to reproduce
1. Log in with some existing notes
2. Change your password on the website
3. See all your notes disappear
#### What I expected
For the app to verify my authentication status
#### What happened instead
All my notes disappeared and I had to log out and back in
#### OS version
macOS 10.13.1 | 1.0 | Notes disppear when you change your password while logged in - #### Steps to reproduce
1. Log in with some existing notes
2. Change your password on the website
3. See all your notes disappear
#### What I expected
For the app to verify my authentication status
#### What happened instead
All my notes disappeared and I had to log out and back in
#### OS version
macOS 10.13.1 | priority | notes disppear when you change your password while logged in steps to reproduce log in with some existing notes change your password on the website see all your notes disappear what i expected for the app to verify my authentication status what happened instead all my notes disappeared and i had to log out and back in os version macos | 1 |
443,012 | 12,754,490,198 | IssuesEvent | 2020-06-28 05:47:28 | projectacrn/acrn-hypervisor | https://api.github.com/repos/projectacrn/acrn-hypervisor | closed | [KataContainers]LaaG miss ip address when we create kata_container first with macvtap driver. | priority: P2-High type: bug | Environment
HV tag: acrn-2019w51.2-140000p
kernel_repo_commit 247a3ba9243b1fd8c2d763158d55f8791a9cac94
HW/Board
WHL
Build link
Image info
Steps
Hypervisor_Bootloader_KataContainer_UOS_Reboot
1, Setup env,
2, Setup kata https://wiki.ith.intel.com/display/OTCCWPQA/%5BHypervisor%5DHow+to+setup+Kata+Containers+with+ACRN+hypervisor
3, if we launch uos first, laag has ip address
if we launch uos first, laag has ip address; then run kata, then reboot laag, laag doesnot have ip.
If we run kata first, then launch laag, laag doesnot have ip address,
When run kata, sos will create veth72782 (I remember it should be tap0_kata?)
Expected result
step3, laag should have ip all the time.
Actual result
step3, laag miss ip address. | 1.0 | [KataContainers]LaaG miss ip address when we create kata_container first with macvtap driver. - Environment
HV tag: acrn-2019w51.2-140000p
kernel_repo_commit 247a3ba9243b1fd8c2d763158d55f8791a9cac94
HW/Board
WHL
Build link
Image info
Steps
Hypervisor_Bootloader_KataContainer_UOS_Reboot
1, Setup env,
2, Setup kata https://wiki.ith.intel.com/display/OTCCWPQA/%5BHypervisor%5DHow+to+setup+Kata+Containers+with+ACRN+hypervisor
3, if we launch uos first, laag has ip address
if we launch uos first, laag has ip address; then run kata, then reboot laag, laag doesnot have ip.
If we run kata first, then launch laag, laag doesnot have ip address,
When run kata, sos will create veth72782 (I remember it should be tap0_kata?)
Expected result
step3, laag should have ip all the time.
Actual result
step3, laag miss ip address. | priority | laag miss ip address when we create kata container first with macvtap driver environment hv tag acrn kernel repo commit hw board whl build link image info steps hypervisor bootloader katacontainer uos reboot setup env setup kata if we launch uos first laag has ip address if we launch uos first laag has ip address then run kata then reboot laag laag doesnot have ip if we run kata first then launch laag laag doesnot have ip address when run kata sos will create i remember it should be kata expected result laag should have ip all the time actual result laag miss ip address | 1 |
794,223 | 28,026,930,158 | IssuesEvent | 2023-03-28 09:44:23 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | closed | Displays aren't following code order | 😱 Bug OS Windows OS Linux About UI Priority High V. 1.9.0 | **Describe the bug**
On the latest pre-release, the order of displays is not following anymore the order of them in GAML code. Here's an example from the `Ant Foraging (Charts examples).gaml` in the _Library models_ folder :
Previous release

Current latest release

**Expected behavior**
Having displays created as before, i.e. in the same order as they appear in the GAML code.
**Desktop (please complete the following information):**
- OS: Linux / Windows
- GAMA version: 1.9.0
- Java version: 17
| 1.0 | Displays aren't following code order - **Describe the bug**
On the latest pre-release, the order of displays is not following anymore the order of them in GAML code. Here's an example from the `Ant Foraging (Charts examples).gaml` in the _Library models_ folder :
Previous release

Current latest release

**Expected behavior**
Having displays created as before, i.e. in the same order as they appear in the GAML code.
**Desktop (please complete the following information):**
- OS: Linux / Windows
- GAMA version: 1.9.0
- Java version: 17
| priority | displays aren t following code order describe the bug on the latest pre release the order of displays is not following anymore the order of them in gaml code here s an example from the ant foraging charts examples gaml in the library models folder previous release current latest release expected behavior having displays created as before i e in the same order as they appear in the gaml code desktop please complete the following information os linux windows gama version java version | 1 |
419,135 | 12,218,033,860 | IssuesEvent | 2020-05-01 18:27:55 | InstituteforDiseaseModeling/covasim | https://api.github.com/repos/InstituteforDiseaseModeling/covasim | opened | [UI 2.0] Automatic data loading | CovasimUI approved highpriority | The user should be able to select the region from a drop-down menu, and automatically load demographic and up-to-date epidemiological data for that region. The data scraping scripts have already been written by @willf ,
- [ ] Update data format (e.g. `new_death` -> `new_deaths`) so loads automatically, and trim the data to start from the first diagnosis/death
- [ ] Check that scrapers still work (at least one seems to have stopped working), and figure out how to reconcile data from multiple scrapers (or just pick the most comprehensive one and go with that)
- [ ] Check ~20 locations, including ~5 US states, ~5 high-income countries, and ~10 low-income countries, and ensure that the data look reasonable
- [ ] Write method to load the data into Covasim, including population size
- [ ] In the UI, re-enable the drop-down menu for location selection (commented out in cova_app.py currently) | 1.0 | [UI 2.0] Automatic data loading - The user should be able to select the region from a drop-down menu, and automatically load demographic and up-to-date epidemiological data for that region. The data scraping scripts have already been written by @willf ,
- [ ] Update data format (e.g. `new_death` -> `new_deaths`) so loads automatically, and trim the data to start from the first diagnosis/death
- [ ] Check that scrapers still work (at least one seems to have stopped working), and figure out how to reconcile data from multiple scrapers (or just pick the most comprehensive one and go with that)
- [ ] Check ~20 locations, including ~5 US states, ~5 high-income countries, and ~10 low-income countries, and ensure that the data look reasonable
- [ ] Write method to load the data into Covasim, including population size
- [ ] In the UI, re-enable the drop-down menu for location selection (commented out in cova_app.py currently) | priority | automatic data loading the user should be able to select the region from a drop down menu and automatically load demographic and up to date epidemiological data for that region the data scraping scripts have already been written by willf update data format e g new death new deaths so loads automatically and trim the data to start from the first diagnosis death check that scrapers still work at least one seems to have stopped working and figure out how to reconcile data from multiple scrapers or just pick the most comprehensive one and go with that check locations including us states high income countries and low income countries and ensure that the data look reasonable write method to load the data into covasim including population size in the ui re enable the drop down menu for location selection commented out in cova app py currently | 1 |
447,754 | 12,892,875,273 | IssuesEvent | 2020-07-13 20:28:26 | rstudio/gt | https://api.github.com/repos/rstudio/gt | closed | Set global font options (rather than specifying body, column, header, etc invdividually) | Difficulty: [3] Advanced Effort: [3] High Priority: [3] High Type: ★ Enhancement | After a discussion with @rich-iannone I posted this issue.
`gt` has easy and rich support for custom fonts which I love, but to change fonts across the entire table I believe the only way so far is to do the following (and if you wanted to change things like the title you'd also need to specify further targets):
``` r
library(gt)
suppressPackageStartupMessages(library(dplyr))
ggplot2::mpg %>%
head() %>%
gt() %>%
gt::tab_style(
style = list(
cell_text(font = "Fira Mono")
),
locations = list(
cells_body(gt::everything()),
cells_column_labels(gt::everything())
))
```
<sup>Created on 2020-05-15 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)</sup>

It'd be great if there were options to:
- Set a global/table level font
- Override sections optionally with the above formatting (cell body, columns, title, etc)
- Clearly understand WHERE fonts are allowed to be imported from IE:
- System Installed fonts
- Imported web fonts
- Integration w/ font packages (`extrafont`, `systemfonts`, `gridtext`, etc)
Some ideas about supporting/loading fonts via:
- `systemfonts`
- `thematic`
- `gridtext`
Tagging @apreshill for her interest as well. | 1.0 | Set global font options (rather than specifying body, column, header, etc invdividually) - After a discussion with @rich-iannone I posted this issue.
`gt` has easy and rich support for custom fonts which I love, but to change fonts across the entire table I believe the only way so far is to do the following (and if you wanted to change things like the title you'd also need to specify further targets):
``` r
library(gt)
suppressPackageStartupMessages(library(dplyr))
ggplot2::mpg %>%
head() %>%
gt() %>%
gt::tab_style(
style = list(
cell_text(font = "Fira Mono")
),
locations = list(
cells_body(gt::everything()),
cells_column_labels(gt::everything())
))
```
<sup>Created on 2020-05-15 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)</sup>

It'd be great if there were options to:
- Set a global/table level font
- Override sections optionally with the above formatting (cell body, columns, title, etc)
- Clearly understand WHERE fonts are allowed to be imported from IE:
- System Installed fonts
- Imported web fonts
- Integration w/ font packages (`extrafont`, `systemfonts`, `gridtext`, etc)
Some ideas about supporting/loading fonts via:
- `systemfonts`
- `thematic`
- `gridtext`
Tagging @apreshill for her interest as well. | priority | set global font options rather than specifying body column header etc invdividually after a discussion with rich iannone i posted this issue gt has easy and rich support for custom fonts which i love but to change fonts across the entire table i believe the only way so far is to do the following and if you wanted to change things like the title you d also need to specify further targets r library gt suppresspackagestartupmessages library dplyr mpg head gt gt tab style style list cell text font fira mono locations list cells body gt everything cells column labels gt everything created on by the it d be great if there were options to set a global table level font override sections optionally with the above formatting cell body columns title etc clearly understand where fonts are allowed to be imported from ie system installed fonts imported web fonts integration w font packages extrafont systemfonts gridtext etc some ideas about supporting loading fonts via systemfonts thematic gridtext tagging apreshill for her interest as well | 1 |
680,262 | 23,264,179,272 | IssuesEvent | 2022-08-04 15:48:34 | OpenArchive/Save-app-android | https://api.github.com/repos/OpenArchive/Save-app-android | closed | Document Location Permissions used by Save App | Priority: High | In order to republish our app, we need to document the ways in which we use sensitive permission. Today, the Play Store identifies 3 such permissions in 0.2.4-RC-11 (see `git tag` of the same):
```
android.permission.ACCESS_BACKGROUND_LOCATION
android.permission.ACCESS_COARSE_LOCATION
android.permission.ACCESS_FINE_LOCATION
```
For each of these, they want an explanation including a youtube video showing how it's used.
The scope of this ticket is to document how we use each of those 3 permissions in README.md. I have an educated guess on how, but I want to be sure we understand precisely how the app and our dependencies are using this data.
One specific question in addition: Why is `ACCESS_BACKGROUND_LOCATION` still shown by Play Store as used by 0.2.4-RC-11 when that no longer appears in the manifest? Is that a result of a dependency, or do we need to make changes elsewhere? | 1.0 | Document Location Permissions used by Save App - In order to republish our app, we need to document the ways in which we use sensitive permission. Today, the Play Store identifies 3 such permissions in 0.2.4-RC-11 (see `git tag` of the same):
```
android.permission.ACCESS_BACKGROUND_LOCATION
android.permission.ACCESS_COARSE_LOCATION
android.permission.ACCESS_FINE_LOCATION
```
For each of these, they want an explanation including a youtube video showing how it's used.
The scope of this ticket is to document how we use each of those 3 permissions in README.md. I have an educated guess on how, but I want to be sure we understand precisely how the app and our dependencies are using this data.
One specific question in addition: Why is `ACCESS_BACKGROUND_LOCATION` still shown by Play Store as used by 0.2.4-RC-11 when that no longer appears in the manifest? Is that a result of a dependency, or do we need to make changes elsewhere? | priority | document location permissions used by save app in order to republish our app we need to document the ways in which we use sensitive permission today the play store identifies such permissions in rc see git tag of the same android permission access background location android permission access coarse location android permission access fine location for each of these they want an explanation including a youtube video showing how it s used the scope of this ticket is to document how we use each of those permissions in readme md i have an educated guess on how but i want to be sure we understand precisely how the app and our dependencies are using this data one specific question in addition why is access background location still shown by play store as used by rc when that no longer appears in the manifest is that a result of a dependency or do we need to make changes elsewhere | 1 |
266,187 | 8,364,005,093 | IssuesEvent | 2018-10-03 21:20:29 | dotkom/onlineweb4 | https://api.github.com/repos/dotkom/onlineweb4 | closed | Links to company websites doesn't work without http:// | Easy Location: Dashboard Priority: High Status: Available | On a company profile page the link to the company's website will only redirect the user if `http://` is specified when the link is added in the dashboard. For example, the link to AppearTV is written as `www.appeartv.com`, and redirects to `https://online.ntnu.no/company/60/www.appeartv.com`.
There is no information to the user creating an event to add http either, so I can imagine this being a growing problem.
| 1.0 | Links to company websites doesn't work without http:// - On a company profile page the link to the company's website will only redirect the user if `http://` is specified when the link is added in the dashboard. For example, the link to AppearTV is written as `www.appeartv.com`, and redirects to `https://online.ntnu.no/company/60/www.appeartv.com`.
There is no information to the user creating an event to add http either, so I can imagine this being a growing problem.
| priority | links to company websites doesn t work without http on a company profile page the link to the company s website will only redirect the user if is specified when the link is added in the dashboard for example the link to appeartv is written as and redirects to there is no information to the user creating an event to add http either so i can imagine this being a growing problem | 1 |
222,212 | 7,430,623,832 | IssuesEvent | 2018-03-25 04:30:01 | theQRL/qrl-wallet | https://api.github.com/repos/theQRL/qrl-wallet | closed | Monitor state of any sent transactions | Priority: High Status: In Progress Type: Enhancement | Monitor the state of any sent transactions in light client. Do not allow subsequent transactions until the previous sent transactions are confirmed in the network or error out. This is to prevent OTS Key Index reuse. | 1.0 | Monitor state of any sent transactions - Monitor the state of any sent transactions in light client. Do not allow subsequent transactions until the previous sent transactions are confirmed in the network or error out. This is to prevent OTS Key Index reuse. | priority | monitor state of any sent transactions monitor the state of any sent transactions in light client do not allow subsequent transactions until the previous sent transactions are confirmed in the network or error out this is to prevent ots key index reuse | 1 |
368,238 | 10,868,639,396 | IssuesEvent | 2019-11-15 04:37:18 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | Error occurred when click view in devportal button in Publisher while login devportal as tenant user | 3.0.0 3.1.0 Priority/High Type/Bug WUM | **Steps to reproduce:**
- Login to publisher as super tenant and create API.
- Go to devportal and logout as super tenant. and again log as tenant user.
- Go to API in the publisher and click view in devportal button.
Error: [2019-10-25 16:56:20,623] ERROR - ApisApiServiceImpl Requested API with Id '9b73b372-42bd-47da-a4e7-9660e363f052' not found
org.wso2.carbon.apimgt.api.APIMgtResourceNotFoundException: Failed to get API. API artifact corresponding to artifactId 9b73b372-42bd-47da-a4e7-9660e363f052 does not exist
at org.wso2.carbon.apimgt.impl.AbstractAPIManager.getAPIorAPIProductByUUID_aroundBody22(AbstractAPIManager.java:603) ~[org.wso2.carbon.apimgt.impl_6.5.349.jar:?]
at org.wso2.carbon.apimgt.impl.AbstractAPIManager.getAPIorAPIProductByUUID(AbstractAPIManager.java:543) ~[org.wso2.carbon.apimgt.impl_6.5.349.jar:?]
at org.wso2.carbon.apimgt.rest.api.store.v1.impl.ApisApiServiceImpl.getAPIByAPIId(ApisApiServiceImpl.java:887) [classes/:?]
at org.wso2.carbon.apimgt.rest.api.store.v1.impl.ApisApiServiceImpl.apisApiIdGet(ApisApiServiceImpl.java:159) [classes/:?]
at org.wso2.carbon.apimgt.rest.api.store.v1.ApisApi.apisApiIdGet(ApisApi.java:135) [classes/:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) [cxf-core-3.2.8.jar:3.2.8]
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96) [cxf-core-3.2.8.jar:3.2.8]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:193) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:103) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59) [cxf-core-3.2.8.jar:3.2.8]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96) [cxf-core-3.2.8.jar:3.2.8]
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308) [cxf-core-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) [cxf-core-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:267) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:216) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:301) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:225) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) [tomcat-servlet-api_9.0.22.wso2v1.jar:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:276) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat_9.0.22.wso2v1.jar:?]
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:80) [org.wso2.carbon.identity.context.rewrite.valve_1.3.6.jar:?]
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:100) [org.wso2.carbon.identity.authz.valve_1.3.6.jar:?]
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:74) [org.wso2.carbon.identity.auth.valve_1.3.6.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99) [org.wso2.carbon.tomcat.ext_4.5.1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) [org.wso2.carbon.tomcat.ext_4.5.1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62) [org.wso2.carbon.tomcat.ext_4.5.1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:146) [org.wso2.carbon.tomcat.ext_4.5.1.jar:?]
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:678) [tomcat_9.0.22.wso2v1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57) [org.wso2.carbon.tomcat.ext_4.5.1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:116) [org.wso2.carbon.tomcat.ext_4.5.1.jar:?]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:853) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1587) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat_9.0.22.wso2v1.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat_9.0.22.wso2v1.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]

| 1.0 | Error occurred when click view in devportal button in Publisher while login devportal as tenant user - **Steps to reproduce:**
- Login to publisher as super tenant and create API.
- Go to devportal and logout as super tenant. and again log as tenant user.
- Go to API in the publisher and click view in devportal button.
Error: [2019-10-25 16:56:20,623] ERROR - ApisApiServiceImpl Requested API with Id '9b73b372-42bd-47da-a4e7-9660e363f052' not found
org.wso2.carbon.apimgt.api.APIMgtResourceNotFoundException: Failed to get API. API artifact corresponding to artifactId 9b73b372-42bd-47da-a4e7-9660e363f052 does not exist
at org.wso2.carbon.apimgt.impl.AbstractAPIManager.getAPIorAPIProductByUUID_aroundBody22(AbstractAPIManager.java:603) ~[org.wso2.carbon.apimgt.impl_6.5.349.jar:?]
at org.wso2.carbon.apimgt.impl.AbstractAPIManager.getAPIorAPIProductByUUID(AbstractAPIManager.java:543) ~[org.wso2.carbon.apimgt.impl_6.5.349.jar:?]
at org.wso2.carbon.apimgt.rest.api.store.v1.impl.ApisApiServiceImpl.getAPIByAPIId(ApisApiServiceImpl.java:887) [classes/:?]
at org.wso2.carbon.apimgt.rest.api.store.v1.impl.ApisApiServiceImpl.apisApiIdGet(ApisApiServiceImpl.java:159) [classes/:?]
at org.wso2.carbon.apimgt.rest.api.store.v1.ApisApi.apisApiIdGet(ApisApi.java:135) [classes/:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) [cxf-core-3.2.8.jar:3.2.8]
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96) [cxf-core-3.2.8.jar:3.2.8]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:193) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:103) [cxf-rt-frontend-jaxrs-3.2.8.jar:3.2.8]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59) [cxf-core-3.2.8.jar:3.2.8]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96) [cxf-core-3.2.8.jar:3.2.8]
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308) [cxf-core-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) [cxf-core-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:267) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:216) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:301) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:225) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) [tomcat-servlet-api_9.0.22.wso2v1.jar:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:276) [cxf-rt-transports-http-3.2.8.jar:3.2.8]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat_9.0.22.wso2v1.jar:?]
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:80) [org.wso2.carbon.identity.context.rewrite.valve_1.3.6.jar:?]
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:100) [org.wso2.carbon.identity.authz.valve_1.3.6.jar:?]
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:74) [org.wso2.carbon.identity.auth.valve_1.3.6.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99) [org.wso2.carbon.tomcat.ext_4.5.1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) [org.wso2.carbon.tomcat.ext_4.5.1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62) [org.wso2.carbon.tomcat.ext_4.5.1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:146) [org.wso2.carbon.tomcat.ext_4.5.1.jar:?]
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:678) [tomcat_9.0.22.wso2v1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57) [org.wso2.carbon.tomcat.ext_4.5.1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:116) [org.wso2.carbon.tomcat.ext_4.5.1.jar:?]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:853) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1587) [tomcat_9.0.22.wso2v1.jar:?]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat_9.0.22.wso2v1.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat_9.0.22.wso2v1.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]

| priority | error occurred when click view in devportal button in publisher while login devportal as tenant user steps to reproduce login to publisher as super tenant and create api go to devportal and logout as super tenant and again log as tenant user go to api in the publisher and click view in devportal button error error apisapiserviceimpl requested api with id not found org carbon apimgt api apimgtresourcenotfoundexception failed to get api api artifact corresponding to artifactid does not exist at org carbon apimgt impl abstractapimanager getapiorapiproductbyuuid abstractapimanager java at org carbon apimgt impl abstractapimanager getapiorapiproductbyuuid abstractapimanager java at org carbon apimgt rest api store impl apisapiserviceimpl getapibyapiid apisapiserviceimpl java at org carbon apimgt rest api store impl apisapiserviceimpl apisapiidget apisapiserviceimpl java at org carbon apimgt rest api store apisapi apisapiidget apisapi java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache cxf service invoker abstractinvoker performinvocation abstractinvoker java at org apache cxf service invoker abstractinvoker invoke abstractinvoker java at org apache cxf jaxrs jaxrsinvoker invoke jaxrsinvoker java at org apache cxf jaxrs jaxrsinvoker invoke jaxrsinvoker java at org apache cxf interceptor serviceinvokerinterceptor run serviceinvokerinterceptor java at org apache cxf interceptor serviceinvokerinterceptor handlemessage serviceinvokerinterceptor java at org apache cxf phase phaseinterceptorchain dointercept phaseinterceptorchain java at org apache cxf transport chaininitiationobserver onmessage chaininitiationobserver java at org apache cxf transport http abstracthttpdestination invoke abstracthttpdestination java at org apache cxf transport servlet servletcontroller invokedestination servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet cxfnonspringservlet invoke cxfnonspringservlet java at org apache cxf transport servlet abstracthttpservlet handlerequest abstracthttpservlet java at org apache cxf transport servlet abstracthttpservlet doget abstracthttpservlet java at javax servlet http httpservlet service httpservlet java at org apache cxf transport servlet abstracthttpservlet service abstracthttpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org carbon identity context rewrite valve tenantcontextrewritevalve invoke tenantcontextrewritevalve java at org carbon identity authz valve authorizationvalve invoke authorizationvalve java at org carbon identity auth valve authenticationvalve invoke authenticationvalve java at org carbon tomcat ext valves compositevalve continueinvocation compositevalve java at org carbon tomcat ext valves tomcatvalvecontainer invokevalves tomcatvalvecontainer java at org carbon tomcat ext valves compositevalve invoke compositevalve java at org carbon tomcat ext valves carbonstuckthreaddetectionvalve invoke carbonstuckthreaddetectionvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org carbon tomcat ext valves carboncontextcreatorvalve invoke carboncontextcreatorvalve java at org carbon tomcat ext valves requestcorrelationidvalve invoke requestcorrelationidvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java lang thread run thread java | 1 |
98,820 | 4,031,612,495 | IssuesEvent | 2016-05-18 17:46:39 | USC-CSSL/TACIT | https://api.github.com/repos/USC-CSSL/TACIT | closed | LDA Plugin failed | bug High Priority Topic Modelling | I used the reddit crawler to downloaded as follows

Error message is:
LDA not successful. | 1.0 | LDA Plugin failed - I used the reddit crawler to downloaded as follows

Error message is:
LDA not successful. | priority | lda plugin failed i used the reddit crawler to downloaded as follows error message is lda not successful | 1 |
758,615 | 26,562,066,432 | IssuesEvent | 2023-01-20 16:40:18 | nexB/scancode-toolkit | https://api.github.com/repos/nexB/scancode-toolkit | closed | ScientiaMobile commercial license not detected | bug new and improved data Priority: high | ### Description
Download Haproxy at: http://www.haproxy.org/download/2.6/src/snapshot/haproxy-ss-20221026.tar.gz
File ```contrib/wurfl/wurfl/wurfl.h``` contains the following header:
```
/*
* InFuze C API - HAPROXY Dummy library version of include
*
* Copyright (c) ScientiaMobile, Inc.
* http://www.scientiamobile.com
*
* This software package is the property of ScientiaMobile Inc. and is licensed
* commercially according to a contract between the Licensee and ScientiaMobile Inc. (Licensor).
* If you represent the Licensee, please refer to the licensing agreement which has been signed
* between the two parties. If you do not represent the Licensee, you are not authorized to use
* this software in any way.
*
*/
```
When running ScanCode, this commercial license is not detected.
SPDX result is:
```
# File
FileName: ./haproxy-ss-20221026/contrib/wurfl/wurfl/wurfl.h
SPDXID: SPDXRef-138
FileChecksum: SHA1: a4272af065e6f2201c0cc431cc72da9372723419
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NONE
FileCopyrightText: <text>Copyright (c) ScientiaMobile, Inc. http://www.scientiamobile.com
</text>
```
### How To Reproduce
```
scancode -c -l -i --license-text --spdx-tv haproxy_ss_20221026.spdx haproxy-ss-20221026
```
### System configuration
* What OS are you running on? Linux Ubuntu 22.10
* What version of scancode-toolkit was used to generate the scan file?
```
scancode --version
ScanCode version: 31.2.1
ScanCode Output Format version: 2.0.0
SPDX License list version: 3.17
```
* What installation method was used to install/run scancode? pip
| 1.0 | ScientiaMobile commercial license not detected - ### Description
Download Haproxy at: http://www.haproxy.org/download/2.6/src/snapshot/haproxy-ss-20221026.tar.gz
File ```contrib/wurfl/wurfl/wurfl.h``` contains the following header:
```
/*
* InFuze C API - HAPROXY Dummy library version of include
*
* Copyright (c) ScientiaMobile, Inc.
* http://www.scientiamobile.com
*
* This software package is the property of ScientiaMobile Inc. and is licensed
* commercially according to a contract between the Licensee and ScientiaMobile Inc. (Licensor).
* If you represent the Licensee, please refer to the licensing agreement which has been signed
* between the two parties. If you do not represent the Licensee, you are not authorized to use
* this software in any way.
*
*/
```
When running ScanCode, this commercial license is not detected.
SPDX result is:
```
# File
FileName: ./haproxy-ss-20221026/contrib/wurfl/wurfl/wurfl.h
SPDXID: SPDXRef-138
FileChecksum: SHA1: a4272af065e6f2201c0cc431cc72da9372723419
LicenseConcluded: NOASSERTION
LicenseInfoInFile: NONE
FileCopyrightText: <text>Copyright (c) ScientiaMobile, Inc. http://www.scientiamobile.com
</text>
```
### How To Reproduce
```
scancode -c -l -i --license-text --spdx-tv haproxy_ss_20221026.spdx haproxy-ss-20221026
```
### System configuration
* What OS are you running on? Linux Ubuntu 22.10
* What version of scancode-toolkit was used to generate the scan file?
```
scancode --version
ScanCode version: 31.2.1
ScanCode Output Format version: 2.0.0
SPDX License list version: 3.17
```
* What installation method was used to install/run scancode? pip
| priority | scientiamobile commercial license not detected description download haproxy at file contrib wurfl wurfl wurfl h contains the following header infuze c api haproxy dummy library version of include copyright c scientiamobile inc this software package is the property of scientiamobile inc and is licensed commercially according to a contract between the licensee and scientiamobile inc licensor if you represent the licensee please refer to the licensing agreement which has been signed between the two parties if you do not represent the licensee you are not authorized to use this software in any way when running scancode this commercial license is not detected spdx result is file filename haproxy ss contrib wurfl wurfl wurfl h spdxid spdxref filechecksum licenseconcluded noassertion licenseinfoinfile none filecopyrighttext copyright c scientiamobile inc how to reproduce scancode c l i license text spdx tv haproxy ss spdx haproxy ss system configuration what os are you running on linux ubuntu what version of scancode toolkit was used to generate the scan file scancode version scancode version scancode output format version spdx license list version what installation method was used to install run scancode pip | 1 |
529,150 | 15,380,977,597 | IssuesEvent | 2021-03-02 21:55:20 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | all spots removal shapes from v1 are lost | bug: pending difficulty: hard priority: high |
**Describe the bug**
All spot removal from spot version (module is now at v2) are lost.
The module has been moved from v1 to v2 back in 2013 in commit e47fb85ea8429520cdfee73b1bc666f078da15f6 when introducing the new mask implementation.
**To Reproduce**
Use attached Lr XMP (which create a spot mask in v1) and load the mire1.cr2 file from integration test.
See that no shapes are created.
**Expected behavior**
The spot removal module should have 3 shapes.
**Platform (please complete the following information):**
- Darktable Version: master/3.4/3.2/3.0
- OS: Linux (probably all).
**Additional context**
I have been tracked this done and found the root cause but I have no solution to fix that at the moment.
What happens:
1. the spot legacy_params() is called and we transform the shapes from v1 which are directly in the modules parameters to the new mask support.
2. We create the shapes in legacy_params. The groups are not created but I have a patch that I'll propose later to fix that.
3. The last step needed is to change the blends params for the mask_id in this struct to be the id of the created group. But this is not possible in legacy_params as blend params are supposed to be updated in dt_develop_blend_legacy_params. | 1.0 | all spots removal shapes from v1 are lost -
**Describe the bug**
All spot removal from spot version (module is now at v2) are lost.
The module has been moved from v1 to v2 back in 2013 in commit e47fb85ea8429520cdfee73b1bc666f078da15f6 when introducing the new mask implementation.
**To Reproduce**
Use attached Lr XMP (which create a spot mask in v1) and load the mire1.cr2 file from integration test.
See that no shapes are created.
**Expected behavior**
The spot removal module should have 3 shapes.
**Platform (please complete the following information):**
- Darktable Version: master/3.4/3.2/3.0
- OS: Linux (probably all).
**Additional context**
I have been tracked this done and found the root cause but I have no solution to fix that at the moment.
What happens:
1. the spot legacy_params() is called and we transform the shapes from v1 which are directly in the modules parameters to the new mask support.
2. We create the shapes in legacy_params. The groups are not created but I have a patch that I'll propose later to fix that.
3. The last step needed is to change the blends params for the mask_id in this struct to be the id of the created group. But this is not possible in legacy_params as blend params are supposed to be updated in dt_develop_blend_legacy_params. | priority | all spots removal shapes from are lost describe the bug all spot removal from spot version module is now at are lost the module has been moved from to back in in commit when introducing the new mask implementation to reproduce use attached lr xmp which create a spot mask in and load the file from integration test see that no shapes are created expected behavior the spot removal module should have shapes platform please complete the following information darktable version master os linux probably all additional context i have been tracked this done and found the root cause but i have no solution to fix that at the moment what happens the spot legacy params is called and we transform the shapes from which are directly in the modules parameters to the new mask support we create the shapes in legacy params the groups are not created but i have a patch that i ll propose later to fix that the last step needed is to change the blends params for the mask id in this struct to be the id of the created group but this is not possible in legacy params as blend params are supposed to be updated in dt develop blend legacy params | 1 |
583,139 | 17,378,167,093 | IssuesEvent | 2021-07-31 05:40:07 | tenacityteam/tenacity | https://api.github.com/repos/tenacityteam/tenacity | closed | Copious memory leaks during record and playback | bug high priority in-progress | <!--
IMPORTANT! READ
Any spam issues will be deleted.
Issues are not a place to go ask support questions.
Please post **confirmed** bugs when possible.
Please mark the checkbox below (Use "x" to fill the checkboxes, example: [x])
-->
- [ ] I have read the specified guidelines for issues
**Describe the bug**
Pull request #412 introduced what could be a serious memory leak during recording and playback.
You would have known it soon enough in testing.
I did not test Tenacity. I knew it from inspection of the source code.
Update: Actually, it won't leak memory during recording -- unless you do overdub. But it will happen during playback.
**To Reproduce**
Record or generate some sound.
Select it, and play it back in a loop (Shift + Space).
Open macOS Activity Monitor, or Windows Task Manager, or similar utility.
Observe memory consumption of Audacity. Write down that number.
Go to bed.
Wake up.
Compare with current readout, if the program is still running.
**Expected behavior**
I haven't actually done this experiment. But I predict an unhappy surprise.
**Screenshots**
None
**Additional information (please complete the following information):**
- OS: All
- Version: Since 047729727a841a53bb13c2791912ae154593115d
**Additional context**
None | 1.0 | Copious memory leaks during record and playback - <!--
IMPORTANT! READ
Any spam issues will be deleted.
Issues are not a place to go ask support questions.
Please post **confirmed** bugs when possible.
Please mark the checkbox below (Use "x" to fill the checkboxes, example: [x])
-->
- [ ] I have read the specified guidelines for issues
**Describe the bug**
Pull request #412 introduced what could be a serious memory leak during recording and playback.
You would have known it soon enough in testing.
I did not test Tenacity. I knew it from inspection of the source code.
Update: Actually, it won't leak memory during recording -- unless you do overdub. But it will happen during playback.
**To Reproduce**
Record or generate some sound.
Select it, and play it back in a loop (Shift + Space).
Open macOS Activity Monitor, or Windows Task Manager, or similar utility.
Observe memory consumption of Audacity. Write down that number.
Go to bed.
Wake up.
Compare with current readout, if the program is still running.
**Expected behavior**
I haven't actually done this experiment. But I predict an unhappy surprise.
**Screenshots**
None
**Additional information (please complete the following information):**
- OS: All
- Version: Since 047729727a841a53bb13c2791912ae154593115d
**Additional context**
None | priority | copious memory leaks during record and playback important read any spam issues will be deleted issues are not a place to go ask support questions please post confirmed bugs when possible please mark the checkbox below use x to fill the checkboxes example i have read the specified guidelines for issues describe the bug pull request introduced what could be a serious memory leak during recording and playback you would have known it soon enough in testing i did not test tenacity i knew it from inspection of the source code update actually it won t leak memory during recording unless you do overdub but it will happen during playback to reproduce record or generate some sound select it and play it back in a loop shift space open macos activity monitor or windows task manager or similar utility observe memory consumption of audacity write down that number go to bed wake up compare with current readout if the program is still running expected behavior i haven t actually done this experiment but i predict an unhappy surprise screenshots none additional information please complete the following information os all version since additional context none | 1 |
577,721 | 17,117,521,490 | IssuesEvent | 2021-07-11 17:06:47 | SmashMC-Development/Bugs-and-Issues | https://api.github.com/repos/SmashMC-Development/Bugs-and-Issues | closed | the modpack is broken,please fix. | high priority invalid pixelmon | **Describe the Bug**
people are not getting required mods/resource pack in newest version of texture pack
**To Reproduce**
Steps to reproduce the behavior:
Download newest version of modpack and see that reauth/ resource pack is missing
**Servers with the Bug**
the technic launcher
**Expected behavior**
it should have these things in order to give players rewards for getting our modpack!
should look like this:


**Screenshots**
people who download the newest version get this:


**Additional context**
@HackuJacku if this can be quick fixed or if there's another way to fix this please let me know!
| 1.0 | the modpack is broken,please fix. - **Describe the Bug**
people are not getting required mods/resource pack in newest version of texture pack
**To Reproduce**
Steps to reproduce the behavior:
Download newest version of modpack and see that reauth/ resource pack is missing
**Servers with the Bug**
the technic launcher
**Expected behavior**
it should have these things in order to give players rewards for getting our modpack!
should look like this:


**Screenshots**
people who download the newest version get this:


**Additional context**
@HackuJacku if this can be quick fixed or if there's another way to fix this please let me know!
| priority | the modpack is broken please fix describe the bug people are not getting required mods resource pack in newest version of texture pack to reproduce steps to reproduce the behavior download newest version of modpack and see that reauth resource pack is missing servers with the bug the technic launcher expected behavior it should have these things in order to give players rewards for getting our modpack should look like this screenshots people who download the newest version get this additional context hackujacku if this can be quick fixed or if there s another way to fix this please let me know | 1 |
305,380 | 9,368,527,181 | IssuesEvent | 2019-04-03 08:54:53 | cs2103-ay1819s2-t12-1/main | https://api.github.com/repos/cs2103-ay1819s2-t12-1/main | closed | Error in add booking command message. | priority.High | Should not show "with ." if there are no other users in booking and reservation. | 1.0 | Error in add booking command message. - Should not show "with ." if there are no other users in booking and reservation. | priority | error in add booking command message should not show with if there are no other users in booking and reservation | 1 |
533,727 | 15,597,683,804 | IssuesEvent | 2021-03-18 17:11:50 | AY2021S2-CS2113-F10-3/tp | https://api.github.com/repos/AY2021S2-CS2113-F10-3/tp | closed | As a user, I can add an employee to a particular shift | priority.High type.Story | so that I can schedule an available employee to work on that shift. | 1.0 | As a user, I can add an employee to a particular shift - so that I can schedule an available employee to work on that shift. | priority | as a user i can add an employee to a particular shift so that i can schedule an available employee to work on that shift | 1 |
367,823 | 10,861,857,184 | IssuesEvent | 2019-11-14 12:01:48 | ubtue/DatenProbleme | https://api.github.com/repos/ubtue/DatenProbleme | opened | ISSN 1743-4629 Crime Prevention and Community Safety Nachbearbeitung oder entfernen | high priority | https://link.springer.com/article/10.1057/s41300-019-00074-6
Beihaltet eine Korrektur eines anderen Aufsatzes. Im halbautomatischen Verfahren ist dies ein Fall, bei dem nachbearbeitet werden muss. Sofern es Sinn macht.
Bei den Onlineausgaben, so auch hier, oft nicht. Der Inhalt des Artikels auf den sich die Korrekturanweisung bezieht, wurde nachträglich korrigiert, so dass dieser Artikel in der korrekten Form im Netz steht.
Der Aufsatz mit der Korrekturangabe ist im Grunde genommen überflüssig. Man könnte ihn entfernen, da er zu Irritationen führen kann oder einfach nur Verärgerung auslösen könnte. Man kann ihn aber auch nur einfach so stehen lassen und damit etwaige unschöne Begleiterscheinungen in Kauf nehmen. Denn falsch ist es ja nicht, was drin steht.
Leider kann man auch hier keine allgemeine Regel bilden.
Das ist aber das einzige, was mir an dieser Zeitschrift auffällt. Bei dieser Zeitschrift kann man den Default Lieferweg einschalten. | 1.0 | ISSN 1743-4629 Crime Prevention and Community Safety Nachbearbeitung oder entfernen - https://link.springer.com/article/10.1057/s41300-019-00074-6
Beihaltet eine Korrektur eines anderen Aufsatzes. Im halbautomatischen Verfahren ist dies ein Fall, bei dem nachbearbeitet werden muss. Sofern es Sinn macht.
Bei den Onlineausgaben, so auch hier, oft nicht. Der Inhalt des Artikels auf den sich die Korrekturanweisung bezieht, wurde nachträglich korrigiert, so dass dieser Artikel in der korrekten Form im Netz steht.
Der Aufsatz mit der Korrekturangabe ist im Grunde genommen überflüssig. Man könnte ihn entfernen, da er zu Irritationen führen kann oder einfach nur Verärgerung auslösen könnte. Man kann ihn aber auch nur einfach so stehen lassen und damit etwaige unschöne Begleiterscheinungen in Kauf nehmen. Denn falsch ist es ja nicht, was drin steht.
Leider kann man auch hier keine allgemeine Regel bilden.
Das ist aber das einzige, was mir an dieser Zeitschrift auffällt. Bei dieser Zeitschrift kann man den Default Lieferweg einschalten. | priority | issn crime prevention and community safety nachbearbeitung oder entfernen beihaltet eine korrektur eines anderen aufsatzes im halbautomatischen verfahren ist dies ein fall bei dem nachbearbeitet werden muss sofern es sinn macht bei den onlineausgaben so auch hier oft nicht der inhalt des artikels auf den sich die korrekturanweisung bezieht wurde nachträglich korrigiert so dass dieser artikel in der korrekten form im netz steht der aufsatz mit der korrekturangabe ist im grunde genommen überflüssig man könnte ihn entfernen da er zu irritationen führen kann oder einfach nur verärgerung auslösen könnte man kann ihn aber auch nur einfach so stehen lassen und damit etwaige unschöne begleiterscheinungen in kauf nehmen denn falsch ist es ja nicht was drin steht leider kann man auch hier keine allgemeine regel bilden das ist aber das einzige was mir an dieser zeitschrift auffällt bei dieser zeitschrift kann man den default lieferweg einschalten | 1 |
51,015 | 3,009,983,652 | IssuesEvent | 2015-07-28 10:15:25 | HubTurbo/HubTurbo | https://api.github.com/repos/HubTurbo/HubTurbo | closed | load only used repos | aspect-performance feature-projects priority.high type.enhancement | Currently, I suspect all repos in the list are loaded even if they are not used in any of the existing filters.
Is that correct?
If that is the case, after loading a few big repos, the user will not be able to use HT even if his filters do not use those big repos.
We should load repos just-in-time i.e. only when needed for a filter. | 1.0 | load only used repos - Currently, I suspect all repos in the list are loaded even if they are not used in any of the existing filters.
Is that correct?
If that is the case, after loading a few big repos, the user will not be able to use HT even if his filters do not use those big repos.
We should load repos just-in-time i.e. only when needed for a filter. | priority | load only used repos currently i suspect all repos in the list are loaded even if they are not used in any of the existing filters is that correct if that is the case after loading a few big repos the user will not be able to use ht even if his filters do not use those big repos we should load repos just in time i e only when needed for a filter | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.