Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
359,258 | 10,667,397,475 | IssuesEvent | 2019-10-19 12:07:52 | AY1920S1-CS2113-T14-1/main | https://api.github.com/repos/AY1920S1-CS2113-T14-1/main | opened | Create various UI contexts | component.UI priority.High status.Ongoing type.Task | Contexts as documented in the [User Guide](https://github.com/AY1920S1-CS2113-T14-1/main/blob/master/docs/UserGuide.adoc)
1. Home
2. Patients
3. Treatment
4. Evidence
5. Investigation | 1.0 | Create various UI contexts - Contexts as documented in the [User Guide](https://github.com/AY1920S1-CS2113-T14-1/main/blob/master/docs/UserGuide.adoc)
1. Home
2. Patients
3. Treatment
4. Evidence
5. Investigation | priority | create various ui contexts contexts as documented in the user guide home patients treatment evidence investigation | 1 |
678,506 | 23,200,276,734 | IssuesEvent | 2022-08-01 20:40:40 | stormk539/CavemanCooking | https://api.github.com/repos/stormk539/CavemanCooking | opened | Start of Building System | Priority: Highest 8 | As the systems designer, I need to begin the building system so that the level designer can use my blueprints to lay out where the player is able to buy furniture for the store.
CoS:
- [ ] Chairs/Tables/Decor player's can buy are shown as transparent before they buy them
- [ ] Player can press e to buy them within range
- [ ] Once bought, the material switches to non-transparent | 1.0 | Start of Building System - As the systems designer, I need to begin the building system so that the level designer can use my blueprints to lay out where the player is able to buy furniture for the store.
CoS:
- [ ] Chairs/Tables/Decor player's can buy are shown as transparent before they buy them
- [ ] Player can press e to buy them within range
- [ ] Once bought, the material switches to non-transparent | priority | start of building system as the systems designer i need to begin the building system so that the level designer can use my blueprints to lay out where the player is able to buy furniture for the store cos chairs tables decor player s can buy are shown as transparent before they buy them player can press e to buy them within range once bought the material switches to non transparent | 1 |
600,867 | 18,360,525,822 | IssuesEvent | 2021-10-09 05:54:17 | AY2122S1-CS2103T-F11-1/tp | https://api.github.com/repos/AY2122S1-CS2103T-F11-1/tp | closed | Add rates field to student class | type.Task priority.High | As per title. Furthermore, need to integrate rates with
* add command
* edit command
* storage | 1.0 | Add rates field to student class - As per title. Furthermore, need to integrate rates with
* add command
* edit command
* storage | priority | add rates field to student class as per title furthermore need to integrate rates with add command edit command storage | 1 |
709,022 | 24,365,195,305 | IssuesEvent | 2022-10-03 14:40:53 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | reopened | overlapped messages | bug ui priority 1: high pixel-perfect-issues | # Bug Report
## Description
<img width="1010" alt="Screen Shot 2022-10-02 at 7 30 26 PM" src="https://user-images.githubusercontent.com/176720/193481135-cbffd304-fd76-4c95-a6d6-52c859b2802b.png">
| 1.0 | overlapped messages - # Bug Report
## Description
<img width="1010" alt="Screen Shot 2022-10-02 at 7 30 26 PM" src="https://user-images.githubusercontent.com/176720/193481135-cbffd304-fd76-4c95-a6d6-52c859b2802b.png">
| priority | overlapped messages bug report description img width alt screen shot at pm src | 1 |
376,515 | 11,148,043,871 | IssuesEvent | 2019-12-23 14:25:59 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Need to add a Filter in amp tree shaking for adding custom CSS. | NEXT UPDATE [Priority: HIGH] enhancement | Desc: Need to add a Filter in amp tree shaking for adding custom CSS so that we can add CSS for amp-bind classes. | 1.0 | Need to add a Filter in amp tree shaking for adding custom CSS. - Desc: Need to add a Filter in amp tree shaking for adding custom CSS so that we can add CSS for amp-bind classes. | priority | need to add a filter in amp tree shaking for adding custom css desc need to add a filter in amp tree shaking for adding custom css so that we can add css for amp bind classes | 1 |
414,385 | 12,102,756,506 | IssuesEvent | 2020-04-20 17:13:18 | iglance/iGlance | https://api.github.com/repos/iglance/iGlance | closed | Menubar items stop refreshing after sleeping | bug fixed with next update high priority | **Describe the bug🐛**
Netowork module displays 0KB/s up and down despite being online and browsing/watching videos/Zooming
**To Reproduce**
Not entirely sure - seems to have happened when i woke my computer from sleep or switched to VPN
**Expected behavior**
Shows traffic numbers
**Screenshots (Optional)**
<img width="96" alt="Screen Shot 2020-04-05 at 23 13 25" src="https://user-images.githubusercontent.com/23653752/78520292-0cb0cd00-7794-11ea-8725-360bfdd2f3c5.png">
**Desktop (please complete the following information):**
- MacOS Version: 10.15.4
- Version of the app: 2.0.4
**Log**
Couldn't log it, had logging off when it happened sadly, will try to replicate
| 1.0 | Menubar items stop refreshing after sleeping - **Describe the bug🐛**
Netowork module displays 0KB/s up and down despite being online and browsing/watching videos/Zooming
**To Reproduce**
Not entirely sure - seems to have happened when i woke my computer from sleep or switched to VPN
**Expected behavior**
Shows traffic numbers
**Screenshots (Optional)**
<img width="96" alt="Screen Shot 2020-04-05 at 23 13 25" src="https://user-images.githubusercontent.com/23653752/78520292-0cb0cd00-7794-11ea-8725-360bfdd2f3c5.png">
**Desktop (please complete the following information):**
- MacOS Version: 10.15.4
- Version of the app: 2.0.4
**Log**
Couldn't log it, had logging off when it happened sadly, will try to replicate
| priority | menubar items stop refreshing after sleeping describe the bug🐛 netowork module displays s up and down despite being online and browsing watching videos zooming to reproduce not entirely sure seems to have happened when i woke my computer from sleep or switched to vpn expected behavior shows traffic numbers screenshots optional img width alt screen shot at src desktop please complete the following information macos version version of the app log couldn t log it had logging off when it happened sadly will try to replicate | 1 |
749,740 | 26,177,492,052 | IssuesEvent | 2023-01-02 11:36:31 | wp-media/wp-rocket | https://api.github.com/repos/wp-media/wp-rocket | closed | Update banners in the plugin | type: enhancement priority: high module: user interface | **Is your feature request related to a problem? Please describe.**
Based on the upcoming change we'll need to update our banners.
1. Expired banners: https://docs.google.com/spreadsheets/d/1Zu1MbsVNkOWCsW0CmVRuY69I0t3vtTLG4ykhuOI3YVI/edit#gid=110497832
2. Expiring banners:
https://docs.google.com/spreadsheets/d/1Zu1MbsVNkOWCsW0CmVRuY69I0t3vtTLG4ykhuOI3YVI/edit#gid=675951386
3. Upgrade banners:
https://docs.google.com/spreadsheets/d/1Zu1MbsVNkOWCsW0CmVRuY69I0t3vtTLG4ykhuOI3YVI/edit#gid=379622301
| 1.0 | Update banners in the plugin - **Is your feature request related to a problem? Please describe.**
Based on the upcoming change we'll need to update our banners.
1. Expired banners: https://docs.google.com/spreadsheets/d/1Zu1MbsVNkOWCsW0CmVRuY69I0t3vtTLG4ykhuOI3YVI/edit#gid=110497832
2. Expiring banners:
https://docs.google.com/spreadsheets/d/1Zu1MbsVNkOWCsW0CmVRuY69I0t3vtTLG4ykhuOI3YVI/edit#gid=675951386
3. Upgrade banners:
https://docs.google.com/spreadsheets/d/1Zu1MbsVNkOWCsW0CmVRuY69I0t3vtTLG4ykhuOI3YVI/edit#gid=379622301
| priority | update banners in the plugin is your feature request related to a problem please describe based on the upcoming change we ll need to update our banners expired banners expiring banners upgrade banners | 1 |
500,204 | 14,492,861,955 | IssuesEvent | 2020-12-11 07:41:57 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Validation error in author bio alt attribute | NEXT UPDATE [Priority: HIGH] bug | Users are getting Validation error in author bio alt attribute after plugin update version 1.0.70
WordPress ref: https://wordpress.org/support/topic/amp-error-24/ | 1.0 | Validation error in author bio alt attribute - Users are getting Validation error in author bio alt attribute after plugin update version 1.0.70
WordPress ref: https://wordpress.org/support/topic/amp-error-24/ | priority | validation error in author bio alt attribute users are getting validation error in author bio alt attribute after plugin update version wordpress ref | 1 |
796,641 | 28,121,422,753 | IssuesEvent | 2023-03-31 14:28:57 | NCC-CNC/wheretowork | https://api.github.com/repos/NCC-CNC/wheretowork | opened | renv::restore() on newer versions of R fails to install certain packages | bug high priority | WTW was built on R version `4.1.2 (2021-11-01) -- "Bird Hippie"`
New users have updated versions of R and when they go to clone WTW and `renv::restore()` a bunch of packages fail to install.
So far the list of failed package installs are: `gert`, `httpuv` , `igraph`, `rgdal`, `sf`, `xml2`, `curl`
| 1.0 | renv::restore() on newer versions of R fails to install certain packages - WTW was built on R version `4.1.2 (2021-11-01) -- "Bird Hippie"`
New users have updated versions of R and when they go to clone WTW and `renv::restore()` a bunch of packages fail to install.
So far the list of failed package installs are: `gert`, `httpuv` , `igraph`, `rgdal`, `sf`, `xml2`, `curl`
| priority | renv restore on newer versions of r fails to install certain packages wtw was built on r version bird hippie new users have updated versions of r and when they go to clone wtw and renv restore a bunch of packages fail to install so far the list of failed package installs are gert httpuv igraph rgdal sf curl | 1 |
655,519 | 21,693,642,098 | IssuesEvent | 2022-05-09 17:46:15 | rich-iannone/pointblank | https://api.github.com/repos/rich-iannone/pointblank | closed | BigQuery support | Difficulty: [3] Advanced Effort: [3] High Type: ★ Enhancement Priority: [3] High | ### Discussed in https://github.com/rich-iannone/pointblank/discussions/404
Thanks for your comment. I'd be happy to help test this, and thanks for the link. I'll if I can get it running with one of the public big query datasets, and report back to you how I fare.
---
<div type='discussions-op-text'>
<sup>Originally posted by **good-marketing** April 25, 2022</sup>
Hi,
Thanks for creating pointblank, it is a next level R package, in terms of documentation, approach and execution. [ Hat tip].
I work in GCP environment, and was wondering if support for BigQuery (https://bigrquery.r-dbi.org/) is something you might consider.
Cheers,
Bob</div> | 1.0 | BigQuery support - ### Discussed in https://github.com/rich-iannone/pointblank/discussions/404
Thanks for your comment. I'd be happy to help test this, and thanks for the link. I'll if I can get it running with one of the public big query datasets, and report back to you how I fare.
---
<div type='discussions-op-text'>
<sup>Originally posted by **good-marketing** April 25, 2022</sup>
Hi,
Thanks for creating pointblank, it is a next level R package, in terms of documentation, approach and execution. [ Hat tip].
I work in GCP environment, and was wondering if support for BigQuery (https://bigrquery.r-dbi.org/) is something you might consider.
Cheers,
Bob</div> | priority | bigquery support discussed in thanks for your comment i d be happy to help test this and thanks for the link i ll if i can get it running with one of the public big query datasets and report back to you how i fare originally posted by good marketing april hi thanks for creating pointblank it is a next level r package in terms of documentation approach and execution i work in gcp environment and was wondering if support for bigquery is something you might consider cheers bob | 1 |
338,502 | 10,230,166,315 | IssuesEvent | 2019-08-17 18:59:57 | TerryCavanagh/diceydungeons.com | https://api.github.com/repos/TerryCavanagh/diceydungeons.com | closed | In Bonus Round, the Poison rules are only considered if there's enough Curse-based enemies | High Priority reported in launch v1.0 | 
From `remixrules.txt`. Should be a simple fix - change `if(cursecount >=2)` to `if(poisoncount >=2)`. | 1.0 | In Bonus Round, the Poison rules are only considered if there's enough Curse-based enemies - 
From `remixrules.txt`. Should be a simple fix - change `if(cursecount >=2)` to `if(poisoncount >=2)`. | priority | in bonus round the poison rules are only considered if there s enough curse based enemies from remixrules txt should be a simple fix change if cursecount to if poisoncount | 1 |
104,648 | 4,216,646,605 | IssuesEvent | 2016-06-30 10:00:07 | CloudOpting/cloudopting-manager | https://api.github.com/repos/CloudOpting/cloudopting-manager | closed | Platform independence | bug high priority | The software should be able to run on all platforms and have no dependency on OS or J2EE environment. | 1.0 | Platform independence - The software should be able to run on all platforms and have no dependency on OS or J2EE environment. | priority | platform independence the software should be able to run on all platforms and have no dependency on os or environment | 1 |
368,055 | 10,865,428,173 | IssuesEvent | 2019-11-14 18:58:47 | pytorch/xla | https://api.github.com/repos/pytorch/xla | closed | Remove extra overrides for non-leaf nodes. | enhancement high priority | Note that when removing extra overrides, we need to make sure their leaf part is added if not present. For example, `arange` should be removed but we should add `arange_out`.
For reference here's source of truth of all leaf nodes as of 10/21/2019:
https://gist.github.com/ailzhang/905aa41a5be8c5315ba0e3d8f0d0e326
- [x] (Ailing)Tensor bitwise_not(const Tensor& self); // bitwise_not(Tensor)->Tensor
- [x] (Ailing)Tensor& bitwise_not_(Tensor& self); // bitwise_not_(Tensor)->Tensor
- [x] (Ailing) Tensor _cast_Byte(const Tensor& self, bool non_blocking); // _cast_Byte(Tensor,bool)->Tensor
- [x] (Ailing) Tensor _cast_Char(const Tensor& self, bool non_blocking); // _cast_Char(Tensor,bool)->Tensor
- [x] (Ailing) Tensor _cast_Float(const Tensor& self, bool non_blocking); // _cast_Float(Tensor,bool)->Tensor
- [x] (Ailing) Tensor _cast_Int(const Tensor& self, bool non_blocking); // _cast_Int(Tensor,bool)->Tensor
- [x] (Ailing) Tensor _cast_Long(const Tensor& self, bool non_blocking); // _cast_Long(Tensor,bool)->Tensor
- [x] (Ailing) Tensor _cast_Short(const Tensor& self, bool non_blocking); // _cast_Short(Tensor,bool)->Tensor
- [x] (Ailing)Tensor _dim_arange(const Tensor& like, int64_t dim); // _dim_arange(Tensor,int64_t)->Tensor
- [x] (Ailing) Tensor argmax(const Tensor& self, c10::optional<int64_t> dim, bool keepdim); // argmax(Tensor,c10::optional<int64_t>,bool)->Tensor
- [x] (Ailing) Tensor argmin(const Tensor& self, c10::optional<int64_t> dim, bool keepdim); // argmin(Tensor,c10::optional<int64_t>,bool)->Tensor
- [x] (Ailing) Tensor argsort(const Tensor& self, int64_t dim, bool descending); // argsort(Tensor,int64_t,bool)->Tensor
- [x] (Ailing) Tensor avg_pool1d(const Tensor& self, IntArrayRef kernel_size, IntArrayRef stride, IntArrayRef padding, bool ceil_mode, bool count_include_pad); // avg_pool1d(Tensor,IntArrayRef,IntArrayRef,IntArrayRef,bool,bool)->Tensor
- [x] (Ailing) Tensor batch_norm(const Tensor& input, const Tensor& weight, const Tensor& bias, const Tensor& running_mean, const Tensor& running_var, bool training, double momentum, double eps, bool cudnn_enabled); // batch_norm(Tensor,Tensor,Tensor,Tensor,Tensor,bool,double,double,bool)->Tensor
- [x] (Ailing) Tensor bernoulli(const Tensor& self, double p, Generator* generator); // bernoulli(Tensor,double,Generator)->Tensor
- [x] (Ailing) Tensor bilinear(const Tensor& input1, const Tensor& input2, const Tensor& weight, const Tensor& bias); // bilinear(Tensor,Tensor,Tensor,Tensor)->Tensor
- [x] (Ailing) Tensor binary_cross_entropy_with_logits_backward( const Tensor& grad_output, const Tensor& self, const Tensor& target, const Tensor& weight, const Tensor& pos_weight, int64_t reduction); // binary_cross_entropy_with_logits_backward(Tensor,Tensor,Tensor,Tensor,Tensor,int64_t)->Tensor
- [x] (Ailing) std::vector<Tensor> broadcast_tensors(TensorList tensors); // broadcast_tensors(TensorList)->std::vector<Tensor>
- [x] (Ailing) Tensor celu(const Tensor& self, Scalar alpha); // celu(Tensor,Scalar)->Tensor
- [x] (Ailing) Tensor& celu_(Tensor& self, Scalar alpha); // celu_(Tensor,Scalar)->Tensor
- [x] (Ailing) Tensor chain_matmul(TensorList matrices); // chain_matmul(TensorList)->Tensor
- [x] (Ailing) Tensor contiguous(const Tensor& self, MemoryFormat memory_format); // contiguous(Tensor,MemoryFormat)->Tensor
- [ ] (Ailing) Tensor& copy_(Tensor& self, const Tensor& src, bool non_blocking); // copy_(Tensor,Tensor,bool)->Tensor
- [x] (Ailing) Tensor cosine_embedding_loss(const Tensor& input1, const Tensor& input2, const Tensor& target, double margin, int64_t reduction); // cosine_embedding_loss(Tensor,Tensor,Tensor,double,int64_t)->Tensor
- [x] (Ailing) Tensor cosine_similarity(const Tensor& x1, const Tensor& x2, int64_t dim, double eps); // cosine_similarity(Tensor,Tensor,int64_t,double)->Tensor
- [x] (Ailing) Tensor diagflat(const Tensor& self, int64_t offset); // diagflat(Tensor,int64_t)->Tensor
- [x] (Ailing) Tensor dropout(const Tensor& input, double p, bool train); // dropout(Tensor,double,bool)->Tensor
- [x] (Ailing) Tensor& dropout_(Tensor& self, double p, bool train); // dropout_(Tensor,double,bool)->Tensor
- [ ] Tensor einsum(std::string equation, TensorList tensors); // einsum(std::string,TensorList)->Tensor
- [x] Tensor empty_like(const Tensor& self); // empty_like(Tensor)->Tensor
- [x] Tensor expand_as(const Tensor& self, const Tensor& other); // expand_as(Tensor,Tensor)->Tensor
- [x] Tensor flatten(const Tensor& self, int64_t start_dim, int64_t end_dim); // flatten(Tensor,int64_t,int64_t)->Tensor
- [x] Tensor frobenius_norm(const Tensor& self); // frobenius_norm(Tensor)->Tensor
- [x] Tensor frobenius_norm(const Tensor& self, IntArrayRef dim, bool keepdim); // frobenius_norm(Tensor,IntArrayRef,bool)->Tensor
- [x] Tensor full_like(const Tensor& self, Scalar fill_value); // full_like(Tensor,Scalar)->Tensor
- [x] Tensor group_norm(const Tensor& input, int64_t num_groups, const Tensor& weight, const Tensor& bias, double eps, bool cudnn_enabled); // group_norm(Tensor,int64_t,Tensor,Tensor,double,bool)->Tensor
- [x] Tensor hinge_embedding_loss(const Tensor& self, const Tensor& target, double margin, int64_t reduction); // hinge_embedding_loss(Tensor,Tensor,double,int64_t)->Tensor
- [x] Tensor index_add(const Tensor& self, int64_t dim, const Tensor& index, const Tensor& source); // index_add(Tensor,int64_t,Tensor,Tensor)->Tensor
- [x] Tensor index_copy(const Tensor& self, int64_t dim, const Tensor& index, const Tensor& source); // index_copy(Tensor,int64_t,Tensor,Tensor)->Tensor
- [x] Tensor index_fill(const Tensor& self, int64_t dim, const Tensor& index, Scalar value); // index_fill(Tensor,int64_t,Tensor,Scalar)->Tensor
- [x] Tensor index_fill(const Tensor& self, int64_t dim, const Tensor& index, const Tensor& value); // index_fill(Tensor,int64_t,Tensor,Tensor)->Tensor
- [x] Tensor index_put(const Tensor& self, TensorList indices, const Tensor& values, bool accumulate); // index_put(Tensor,TensorList,Tensor,bool)->Tensor
- [x] Tensor instance_norm( const Tensor& input, const Tensor& weight, const Tensor& bias, const Tensor& running_mean, const Tensor& running_var, bool use_input_stats, double momentum, double eps, bool cudnn_enabled); // instance_norm(Tensor,Tensor,Tensor,Tensor,Tensor,bool,double,double,bool)->Tensor
- [x] bool is_floating_point(const Tensor& self); // is_floating_point(Tensor)->bool
- [x] bool is_signed(const Tensor& self); // is_signed(Tensor)->bool
- [x] Tensor layer_norm(const Tensor& input, IntArrayRef normalized_shape, const Tensor& weight, const Tensor& bias, double eps, bool cudnn_enable); // layer_norm(Tensor,IntArrayRef,Tensor,Tensor,double,bool)->Tensor
- [x] Tensor linear(const Tensor& input, const Tensor& weight, const Tensor& bias); // linear(Tensor,Tensor,Tensor)->Tensor
- [x] Tensor log_sigmoid(const Tensor& self); // log_sigmoid(Tensor)->Tensor
- [x] Tensor log_softmax(const Tensor& self, int64_t dim, c10::optional<ScalarType> dtype); // log_softmax(Tensor,int64_t,c10::optional<ScalarType>)->Tensor
- [x] Tensor margin_ranking_loss(const Tensor& input1, const Tensor& input2, const Tensor& target, double margin, int64_t reduction); // margin_ranking_loss(Tensor,Tensor,Tensor,double,int64_t)->Tensor
- [x] Tensor masked_fill(const Tensor& self, const Tensor& mask, Scalar value); // masked_fill(Tensor,Tensor,Scalar)->Tensor
- [x] Tensor masked_fill(const Tensor& self, const Tensor& mask, const Tensor& value); // masked_fill(Tensor,Tensor,Tensor)->Tensor
- [x] Tensor matmul(const Tensor& self, const Tensor& other); // matmul(Tensor,Tensor)->Tensor
- [x] Tensor max_pool1d(const Tensor& self, IntArrayRef kernel_size, IntArrayRef stride, IntArrayRef padding, IntArrayRef dilation, bool ceil_mode); // max_pool1d(Tensor,IntArrayRef,IntArrayRef,IntArrayRef,IntArrayRef,bool)->Tensor
- [x] Tensor max_pool2d(const Tensor& self, IntArrayRef kernel_size, IntArrayRef stride, IntArrayRef padding, IntArrayRef dilation, bool ceil_mode); // max_pool2d(Tensor,IntArrayRef,IntArrayRef,IntArrayRef,IntArrayRef,bool)->Tensor
- [x] Tensor max_pool3d(const Tensor& self, IntArrayRef kernel_size, IntArrayRef stride, IntArrayRef padding, IntArrayRef dilation, bool ceil_mode); // max_pool3d(Tensor,IntArrayRef,IntArrayRef,IntArrayRef,IntArrayRef,bool)->Tensor
- [x] std::vector<Tensor> meshgrid(TensorList tensors); // meshgrid(TensorList)->std::vector<Tensor>
- [x] Tensor narrow(const Tensor& self, int64_t dim, int64_t start, int64_t length); // narrow(Tensor,int64_t,int64_t,int64_t)->Tensor
- [x] Tensor nll_loss(const Tensor& self, const Tensor& target, const Tensor& weight, int64_t reduction, int64_t ignore_index); // nll_loss(Tensor,Tensor,Tensor,int64_t,int64_t)->Tensor
- [x] Tensor nuclear_norm(const Tensor& self, bool keepdim); // nuclear_norm(Tensor,bool)->Tensor
- [x] Tensor one_hot(const Tensor& self, int64_t num_classes); // one_hot(Tensor,int64_t)->Tensor
- [x] Tensor ones_like(const Tensor& self); // ones_like(Tensor)->Tensor
- [x] Tensor pairwise_distance(const Tensor& x1, const Tensor& x2, double p, double eps, bool keepdim); // pairwise_distance(Tensor,Tensor,double,double,bool)->Tensor
- [x] Tensor pixel_shuffle(const Tensor& self, int64_t upscale_factor); // pixel_shuffle(Tensor,int64_t)->Tensor
- [x] Tensor pinverse(const Tensor& self, double rcond); // pinverse(Tensor,double)->Tensor
- [x] Tensor reshape(const Tensor& self, IntArrayRef shape); // reshape(Tensor,IntArrayRef)->Tensor
- [x] Tensor scatter(const Tensor& self, int64_t dim, const Tensor& index, const Tensor& src); // scatter(Tensor,int64_t,Tensor,Tensor)->Tensor
- [x] Tensor scatter(const Tensor& self, int64_t dim, const Tensor& index, Scalar value); // scatter(Tensor,int64_t,Tensor,Scalar)->Tensor
- [x] Tensor scatter_add(const Tensor& self, int64_t dim, const Tensor& index, const Tensor& src); // scatter_add(Tensor,int64_t,Tensor,Tensor)->Tensor
- [x] Tensor selu(const Tensor& self); // selu(Tensor)->Tensor
- [x] Tensor& selu_(Tensor& self); // selu_(Tensor)->Tensor
- [x] int64_t size(const Tensor& self, int64_t dim); // size(Tensor,int64_t)->int64_t
- [x] Tensor softmax(const Tensor& self, int64_t dim, c10::optional<ScalarType> dtype); // softmax(Tensor,int64_t,c10::optional<ScalarType>)->Tensor
- [x] Tensor sum_to_size(const Tensor& self, IntArrayRef size); // sum_to_size(Tensor,IntArrayRef)->Tensor
- [x] Tensor tensordot(const Tensor& self, const Tensor& other, IntArrayRef dims_self, IntArrayRef dims_other); // tensordot(Tensor,Tensor,IntArrayRef,IntArrayRef)->Tensor
- [x] Tensor to(const Tensor& self, const TensorOptions& options, bool non_blocking, bool copy, c10::optional<MemoryFormat> memory_format); // to(Tensor,TensorOptions,bool,bool,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor to(const Tensor& self, Device device, ScalarType dtype, bool non_blocking, bool copy, c10::optional<MemoryFormat> memory_format); // to(Tensor,Device,ScalarType,bool,bool,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor to(const Tensor& self, ScalarType dtype, bool non_blocking, bool copy, c10::optional<MemoryFormat> memory_format); // to(Tensor,ScalarType,bool,bool,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor to(const Tensor& self, const Tensor& other, bool non_blocking, bool copy, c10::optional<MemoryFormat> memory_format); // to(Tensor,Tensor,bool,bool,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor triplet_margin_loss(const Tensor& anchor, const Tensor& positive, const Tensor& negative, double margin, double p, double eps, bool swap, int64_t reduction); // triplet_margin_loss(Tensor,Tensor,Tensor,double,double,double,bool,int64_t)->Tensor
- [x] Tensor view_as(const Tensor& self, const Tensor& other); // view_as(Tensor,Tensor)->Tensor
- [x] Tensor where(const Tensor& condition, const Tensor& self, const Tensor& other); // where(Tensor,Tensor,Tensor)->Tensor
- [x] Tensor zeros_like(const Tensor& self); // zeros_like(Tensor)->Tensor
And a few more (non-leaf factory functions)
- [x] Tensor arange(Scalar end, const TensorOptions& options); // arange(Scalar,TensorOptions)->Tensor
- [x] Tensor arange(Scalar start, Scalar end, const TensorOptions& options); // arange(Scalar,Scalar,TensorOptions)->Tensor
- [x] Tensor arange(Scalar start, Scalar end, Scalar step, const TensorOptions& options); // arange(Scalar,Scalar,Scalar,TensorOptions)->Tensor
- [x] Tensor bartlett_window(int64_t window_length, const TensorOptions& options); // bartlett_window(int64_t,TensorOptions)->Tensor
- [x] Tensor bartlett_window(int64_t window_length, bool periodic, const TensorOptions& options); // bartlett_window(int64_t,bool,TensorOptions)->Tensor
- [x] Tensor blackman_window(int64_t window_length, const TensorOptions& options); // blackman_window(int64_t,TensorOptions)->Tensor
- [x] Tensor blackman_window(int64_t window_length, bool periodic, const TensorOptions& options); // blackman_window(int64_t,bool,TensorOptions)->Tensor
- [x] Tensor empty_like(const Tensor& self, const TensorOptions& options, c10::optional<MemoryFormat> memory_format); // empty_like(Tensor,TensorOptions,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor eye(int64_t n, const TensorOptions& options); // eye(int64_t,TensorOptions)->Tensor
- [x] Tensor eye(int64_t n, int64_t m, const TensorOptions& options); // eye(int64_t,int64_t,TensorOptions)->Tensor
- [x] Tensor full(IntArrayRef size, Scalar fill_value, const TensorOptions& options); // full(IntArrayRef,Scalar,TensorOptions)->Tensor
- [x] Tensor full_like(const Tensor& self, Scalar fill_value, const TensorOptions& options, c10::optional<MemoryFormat> memory_format); // full_like(Tensor,Scalar,TensorOptions,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor hamming_window(int64_t window_length, const TensorOptions& options); // hamming_window(int64_t,TensorOptions)->Tensor
- [x] Tensor hamming_window(int64_t window_length, bool periodic, const TensorOptions& options); // hamming_window(int64_t,bool,TensorOptions)->Tensor
- [x] Tensor hamming_window(int64_t window_length, bool periodic, double alpha, const TensorOptions& options); // hamming_window(int64_t,bool,double,TensorOptions)->Tensor
- [x] Tensor hamming_window(int64_t window_length, bool periodic, double alpha, double beta, const TensorOptions& options); // hamming_window(int64_t,bool,double,double,TensorOptions)->Tensor
- [x] Tensor hann_window(int64_t window_length, const TensorOptions& options); // hann_window(int64_t,TensorOptions)->Tensor
- [x] Tensor hann_window(int64_t window_length, bool periodic, const TensorOptions& options); // hann_window(int64_t,bool,TensorOptions)->Tensor
- [x] Tensor ones(IntArrayRef size, const TensorOptions& options); // ones(IntArrayRef,TensorOptions)->Tensor
- [x] Tensor ones_like(const Tensor& self, const TensorOptions& options, c10::optional<MemoryFormat> memory_format); // ones_like(Tensor,TensorOptions,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor randperm(int64_t n, const TensorOptions& options); // randperm(int64_t,TensorOptions)->Tensor
- [x] Tensor randperm(int64_t n, Generator* generator, const TensorOptions& options); // randperm(int64_t,Generator,TensorOptions)->Tensor
- [x] Tensor zeros_like(const Tensor& self, c10::optional<MemoryFormat> memory_format); // zeros_like(Tensor,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor zeros_like(const Tensor& self, const TensorOptions& options, c10::optional<MemoryFormat> memory_format); // zeros_like(Tensor,TensorOptions,c10::optional<MemoryFormat>)->Tensor
| 1.0 | Remove extra overrides for non-leaf nodes. - Note that when removing extra overrides, we need to make sure their leaf part is added if not present. For example, `arange` should be removed but we should add `arange_out`.
For reference here's source of truth of all leaf nodes as of 10/21/2019:
https://gist.github.com/ailzhang/905aa41a5be8c5315ba0e3d8f0d0e326
- [x] (Ailing)Tensor bitwise_not(const Tensor& self); // bitwise_not(Tensor)->Tensor
- [x] (Ailing)Tensor& bitwise_not_(Tensor& self); // bitwise_not_(Tensor)->Tensor
- [x] (Ailing) Tensor _cast_Byte(const Tensor& self, bool non_blocking); // _cast_Byte(Tensor,bool)->Tensor
- [x] (Ailing) Tensor _cast_Char(const Tensor& self, bool non_blocking); // _cast_Char(Tensor,bool)->Tensor
- [x] (Ailing) Tensor _cast_Float(const Tensor& self, bool non_blocking); // _cast_Float(Tensor,bool)->Tensor
- [x] (Ailing) Tensor _cast_Int(const Tensor& self, bool non_blocking); // _cast_Int(Tensor,bool)->Tensor
- [x] (Ailing) Tensor _cast_Long(const Tensor& self, bool non_blocking); // _cast_Long(Tensor,bool)->Tensor
- [x] (Ailing) Tensor _cast_Short(const Tensor& self, bool non_blocking); // _cast_Short(Tensor,bool)->Tensor
- [x] (Ailing)Tensor _dim_arange(const Tensor& like, int64_t dim); // _dim_arange(Tensor,int64_t)->Tensor
- [x] (Ailing) Tensor argmax(const Tensor& self, c10::optional<int64_t> dim, bool keepdim); // argmax(Tensor,c10::optional<int64_t>,bool)->Tensor
- [x] (Ailing) Tensor argmin(const Tensor& self, c10::optional<int64_t> dim, bool keepdim); // argmin(Tensor,c10::optional<int64_t>,bool)->Tensor
- [x] (Ailing) Tensor argsort(const Tensor& self, int64_t dim, bool descending); // argsort(Tensor,int64_t,bool)->Tensor
- [x] (Ailing) Tensor avg_pool1d(const Tensor& self, IntArrayRef kernel_size, IntArrayRef stride, IntArrayRef padding, bool ceil_mode, bool count_include_pad); // avg_pool1d(Tensor,IntArrayRef,IntArrayRef,IntArrayRef,bool,bool)->Tensor
- [x] (Ailing) Tensor batch_norm(const Tensor& input, const Tensor& weight, const Tensor& bias, const Tensor& running_mean, const Tensor& running_var, bool training, double momentum, double eps, bool cudnn_enabled); // batch_norm(Tensor,Tensor,Tensor,Tensor,Tensor,bool,double,double,bool)->Tensor
- [x] (Ailing) Tensor bernoulli(const Tensor& self, double p, Generator* generator); // bernoulli(Tensor,double,Generator)->Tensor
- [x] (Ailing) Tensor bilinear(const Tensor& input1, const Tensor& input2, const Tensor& weight, const Tensor& bias); // bilinear(Tensor,Tensor,Tensor,Tensor)->Tensor
- [x] (Ailing) Tensor binary_cross_entropy_with_logits_backward( const Tensor& grad_output, const Tensor& self, const Tensor& target, const Tensor& weight, const Tensor& pos_weight, int64_t reduction); // binary_cross_entropy_with_logits_backward(Tensor,Tensor,Tensor,Tensor,Tensor,int64_t)->Tensor
- [x] (Ailing) std::vector<Tensor> broadcast_tensors(TensorList tensors); // broadcast_tensors(TensorList)->std::vector<Tensor>
- [x] (Ailing) Tensor celu(const Tensor& self, Scalar alpha); // celu(Tensor,Scalar)->Tensor
- [x] (Ailing) Tensor& celu_(Tensor& self, Scalar alpha); // celu_(Tensor,Scalar)->Tensor
- [x] (Ailing) Tensor chain_matmul(TensorList matrices); // chain_matmul(TensorList)->Tensor
- [x] (Ailing) Tensor contiguous(const Tensor& self, MemoryFormat memory_format); // contiguous(Tensor,MemoryFormat)->Tensor
- [ ] (Ailing) Tensor& copy_(Tensor& self, const Tensor& src, bool non_blocking); // copy_(Tensor,Tensor,bool)->Tensor
- [x] (Ailing) Tensor cosine_embedding_loss(const Tensor& input1, const Tensor& input2, const Tensor& target, double margin, int64_t reduction); // cosine_embedding_loss(Tensor,Tensor,Tensor,double,int64_t)->Tensor
- [x] (Ailing) Tensor cosine_similarity(const Tensor& x1, const Tensor& x2, int64_t dim, double eps); // cosine_similarity(Tensor,Tensor,int64_t,double)->Tensor
- [x] (Ailing) Tensor diagflat(const Tensor& self, int64_t offset); // diagflat(Tensor,int64_t)->Tensor
- [x] (Ailing) Tensor dropout(const Tensor& input, double p, bool train); // dropout(Tensor,double,bool)->Tensor
- [x] (Ailing) Tensor& dropout_(Tensor& self, double p, bool train); // dropout_(Tensor,double,bool)->Tensor
- [ ] Tensor einsum(std::string equation, TensorList tensors); // einsum(std::string,TensorList)->Tensor
- [x] Tensor empty_like(const Tensor& self); // empty_like(Tensor)->Tensor
- [x] Tensor expand_as(const Tensor& self, const Tensor& other); // expand_as(Tensor,Tensor)->Tensor
- [x] Tensor flatten(const Tensor& self, int64_t start_dim, int64_t end_dim); // flatten(Tensor,int64_t,int64_t)->Tensor
- [x] Tensor frobenius_norm(const Tensor& self); // frobenius_norm(Tensor)->Tensor
- [x] Tensor frobenius_norm(const Tensor& self, IntArrayRef dim, bool keepdim); // frobenius_norm(Tensor,IntArrayRef,bool)->Tensor
- [x] Tensor full_like(const Tensor& self, Scalar fill_value); // full_like(Tensor,Scalar)->Tensor
- [x] Tensor group_norm(const Tensor& input, int64_t num_groups, const Tensor& weight, const Tensor& bias, double eps, bool cudnn_enabled); // group_norm(Tensor,int64_t,Tensor,Tensor,double,bool)->Tensor
- [x] Tensor hinge_embedding_loss(const Tensor& self, const Tensor& target, double margin, int64_t reduction); // hinge_embedding_loss(Tensor,Tensor,double,int64_t)->Tensor
- [x] Tensor index_add(const Tensor& self, int64_t dim, const Tensor& index, const Tensor& source); // index_add(Tensor,int64_t,Tensor,Tensor)->Tensor
- [x] Tensor index_copy(const Tensor& self, int64_t dim, const Tensor& index, const Tensor& source); // index_copy(Tensor,int64_t,Tensor,Tensor)->Tensor
- [x] Tensor index_fill(const Tensor& self, int64_t dim, const Tensor& index, Scalar value); // index_fill(Tensor,int64_t,Tensor,Scalar)->Tensor
- [x] Tensor index_fill(const Tensor& self, int64_t dim, const Tensor& index, const Tensor& value); // index_fill(Tensor,int64_t,Tensor,Tensor)->Tensor
- [x] Tensor index_put(const Tensor& self, TensorList indices, const Tensor& values, bool accumulate); // index_put(Tensor,TensorList,Tensor,bool)->Tensor
- [x] Tensor instance_norm( const Tensor& input, const Tensor& weight, const Tensor& bias, const Tensor& running_mean, const Tensor& running_var, bool use_input_stats, double momentum, double eps, bool cudnn_enabled); // instance_norm(Tensor,Tensor,Tensor,Tensor,Tensor,bool,double,double,bool)->Tensor
- [x] bool is_floating_point(const Tensor& self); // is_floating_point(Tensor)->bool
- [x] bool is_signed(const Tensor& self); // is_signed(Tensor)->bool
- [x] Tensor layer_norm(const Tensor& input, IntArrayRef normalized_shape, const Tensor& weight, const Tensor& bias, double eps, bool cudnn_enable); // layer_norm(Tensor,IntArrayRef,Tensor,Tensor,double,bool)->Tensor
- [x] Tensor linear(const Tensor& input, const Tensor& weight, const Tensor& bias); // linear(Tensor,Tensor,Tensor)->Tensor
- [x] Tensor log_sigmoid(const Tensor& self); // log_sigmoid(Tensor)->Tensor
- [x] Tensor log_softmax(const Tensor& self, int64_t dim, c10::optional<ScalarType> dtype); // log_softmax(Tensor,int64_t,c10::optional<ScalarType>)->Tensor
- [x] Tensor margin_ranking_loss(const Tensor& input1, const Tensor& input2, const Tensor& target, double margin, int64_t reduction); // margin_ranking_loss(Tensor,Tensor,Tensor,double,int64_t)->Tensor
- [x] Tensor masked_fill(const Tensor& self, const Tensor& mask, Scalar value); // masked_fill(Tensor,Tensor,Scalar)->Tensor
- [x] Tensor masked_fill(const Tensor& self, const Tensor& mask, const Tensor& value); // masked_fill(Tensor,Tensor,Tensor)->Tensor
- [x] Tensor matmul(const Tensor& self, const Tensor& other); // matmul(Tensor,Tensor)->Tensor
- [x] Tensor max_pool1d(const Tensor& self, IntArrayRef kernel_size, IntArrayRef stride, IntArrayRef padding, IntArrayRef dilation, bool ceil_mode); // max_pool1d(Tensor,IntArrayRef,IntArrayRef,IntArrayRef,IntArrayRef,bool)->Tensor
- [x] Tensor max_pool2d(const Tensor& self, IntArrayRef kernel_size, IntArrayRef stride, IntArrayRef padding, IntArrayRef dilation, bool ceil_mode); // max_pool2d(Tensor,IntArrayRef,IntArrayRef,IntArrayRef,IntArrayRef,bool)->Tensor
- [x] Tensor max_pool3d(const Tensor& self, IntArrayRef kernel_size, IntArrayRef stride, IntArrayRef padding, IntArrayRef dilation, bool ceil_mode); // max_pool3d(Tensor,IntArrayRef,IntArrayRef,IntArrayRef,IntArrayRef,bool)->Tensor
- [x] std::vector<Tensor> meshgrid(TensorList tensors); // meshgrid(TensorList)->std::vector<Tensor>
- [x] Tensor narrow(const Tensor& self, int64_t dim, int64_t start, int64_t length); // narrow(Tensor,int64_t,int64_t,int64_t)->Tensor
- [x] Tensor nll_loss(const Tensor& self, const Tensor& target, const Tensor& weight, int64_t reduction, int64_t ignore_index); // nll_loss(Tensor,Tensor,Tensor,int64_t,int64_t)->Tensor
- [x] Tensor nuclear_norm(const Tensor& self, bool keepdim); // nuclear_norm(Tensor,bool)->Tensor
- [x] Tensor one_hot(const Tensor& self, int64_t num_classes); // one_hot(Tensor,int64_t)->Tensor
- [x] Tensor ones_like(const Tensor& self); // ones_like(Tensor)->Tensor
- [x] Tensor pairwise_distance(const Tensor& x1, const Tensor& x2, double p, double eps, bool keepdim); // pairwise_distance(Tensor,Tensor,double,double,bool)->Tensor
- [x] Tensor pixel_shuffle(const Tensor& self, int64_t upscale_factor); // pixel_shuffle(Tensor,int64_t)->Tensor
- [x] Tensor pinverse(const Tensor& self, double rcond); // pinverse(Tensor,double)->Tensor
- [x] Tensor reshape(const Tensor& self, IntArrayRef shape); // reshape(Tensor,IntArrayRef)->Tensor
- [x] Tensor scatter(const Tensor& self, int64_t dim, const Tensor& index, const Tensor& src); // scatter(Tensor,int64_t,Tensor,Tensor)->Tensor
- [x] Tensor scatter(const Tensor& self, int64_t dim, const Tensor& index, Scalar value); // scatter(Tensor,int64_t,Tensor,Scalar)->Tensor
- [x] Tensor scatter_add(const Tensor& self, int64_t dim, const Tensor& index, const Tensor& src); // scatter_add(Tensor,int64_t,Tensor,Tensor)->Tensor
- [x] Tensor selu(const Tensor& self); // selu(Tensor)->Tensor
- [x] Tensor& selu_(Tensor& self); // selu_(Tensor)->Tensor
- [x] int64_t size(const Tensor& self, int64_t dim); // size(Tensor,int64_t)->int64_t
- [x] Tensor softmax(const Tensor& self, int64_t dim, c10::optional<ScalarType> dtype); // softmax(Tensor,int64_t,c10::optional<ScalarType>)->Tensor
- [x] Tensor sum_to_size(const Tensor& self, IntArrayRef size); // sum_to_size(Tensor,IntArrayRef)->Tensor
- [x] Tensor tensordot(const Tensor& self, const Tensor& other, IntArrayRef dims_self, IntArrayRef dims_other); // tensordot(Tensor,Tensor,IntArrayRef,IntArrayRef)->Tensor
- [x] Tensor to(const Tensor& self, const TensorOptions& options, bool non_blocking, bool copy, c10::optional<MemoryFormat> memory_format); // to(Tensor,TensorOptions,bool,bool,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor to(const Tensor& self, Device device, ScalarType dtype, bool non_blocking, bool copy, c10::optional<MemoryFormat> memory_format); // to(Tensor,Device,ScalarType,bool,bool,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor to(const Tensor& self, ScalarType dtype, bool non_blocking, bool copy, c10::optional<MemoryFormat> memory_format); // to(Tensor,ScalarType,bool,bool,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor to(const Tensor& self, const Tensor& other, bool non_blocking, bool copy, c10::optional<MemoryFormat> memory_format); // to(Tensor,Tensor,bool,bool,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor triplet_margin_loss(const Tensor& anchor, const Tensor& positive, const Tensor& negative, double margin, double p, double eps, bool swap, int64_t reduction); // triplet_margin_loss(Tensor,Tensor,Tensor,double,double,double,bool,int64_t)->Tensor
- [x] Tensor view_as(const Tensor& self, const Tensor& other); // view_as(Tensor,Tensor)->Tensor
- [x] Tensor where(const Tensor& condition, const Tensor& self, const Tensor& other); // where(Tensor,Tensor,Tensor)->Tensor
- [x] Tensor zeros_like(const Tensor& self); // zeros_like(Tensor)->Tensor
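Many of the checked entries above are composite ops: they can be decomposed into leaf ops, which is why their extra overrides can be removed. As an illustration of what such a decomposition looks like, here is `celu` written out as a scalar Python function using the textbook CELU formula — an illustrative sketch, not the ATen kernel:

```python
import math

def celu(x, alpha=1.0):
    # Standard CELU: max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
    return max(0.0, x) + min(0.0, alpha * (math.exp(x / alpha) - 1.0))

print(celu(1.5))   # positive inputs pass through unchanged
print(celu(-2.0))  # negative inputs saturate toward -alpha
```

Because the whole op is expressible through leaf primitives like `max`, `min`, and `exp`, only the leaf forms need backend-specific overrides.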
And a few more (non-leaf factory functions)
- [x] Tensor arange(Scalar end, const TensorOptions& options); // arange(Scalar,TensorOptions)->Tensor
- [x] Tensor arange(Scalar start, Scalar end, const TensorOptions& options); // arange(Scalar,Scalar,TensorOptions)->Tensor
- [x] Tensor arange(Scalar start, Scalar end, Scalar step, const TensorOptions& options); // arange(Scalar,Scalar,Scalar,TensorOptions)->Tensor
- [x] Tensor bartlett_window(int64_t window_length, const TensorOptions& options); // bartlett_window(int64_t,TensorOptions)->Tensor
- [x] Tensor bartlett_window(int64_t window_length, bool periodic, const TensorOptions& options); // bartlett_window(int64_t,bool,TensorOptions)->Tensor
- [x] Tensor blackman_window(int64_t window_length, const TensorOptions& options); // blackman_window(int64_t,TensorOptions)->Tensor
- [x] Tensor blackman_window(int64_t window_length, bool periodic, const TensorOptions& options); // blackman_window(int64_t,bool,TensorOptions)->Tensor
- [x] Tensor empty_like(const Tensor& self, const TensorOptions& options, c10::optional<MemoryFormat> memory_format); // empty_like(Tensor,TensorOptions,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor eye(int64_t n, const TensorOptions& options); // eye(int64_t,TensorOptions)->Tensor
- [x] Tensor eye(int64_t n, int64_t m, const TensorOptions& options); // eye(int64_t,int64_t,TensorOptions)->Tensor
- [x] Tensor full(IntArrayRef size, Scalar fill_value, const TensorOptions& options); // full(IntArrayRef,Scalar,TensorOptions)->Tensor
- [x] Tensor full_like(const Tensor& self, Scalar fill_value, const TensorOptions& options, c10::optional<MemoryFormat> memory_format); // full_like(Tensor,Scalar,TensorOptions,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor hamming_window(int64_t window_length, const TensorOptions& options); // hamming_window(int64_t,TensorOptions)->Tensor
- [x] Tensor hamming_window(int64_t window_length, bool periodic, const TensorOptions& options); // hamming_window(int64_t,bool,TensorOptions)->Tensor
- [x] Tensor hamming_window(int64_t window_length, bool periodic, double alpha, const TensorOptions& options); // hamming_window(int64_t,bool,double,TensorOptions)->Tensor
- [x] Tensor hamming_window(int64_t window_length, bool periodic, double alpha, double beta, const TensorOptions& options); // hamming_window(int64_t,bool,double,double,TensorOptions)->Tensor
- [x] Tensor hann_window(int64_t window_length, const TensorOptions& options); // hann_window(int64_t,TensorOptions)->Tensor
- [x] Tensor hann_window(int64_t window_length, bool periodic, const TensorOptions& options); // hann_window(int64_t,bool,TensorOptions)->Tensor
- [x] Tensor ones(IntArrayRef size, const TensorOptions& options); // ones(IntArrayRef,TensorOptions)->Tensor
- [x] Tensor ones_like(const Tensor& self, const TensorOptions& options, c10::optional<MemoryFormat> memory_format); // ones_like(Tensor,TensorOptions,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor randperm(int64_t n, const TensorOptions& options); // randperm(int64_t,TensorOptions)->Tensor
- [x] Tensor randperm(int64_t n, Generator* generator, const TensorOptions& options); // randperm(int64_t,Generator,TensorOptions)->Tensor
- [x] Tensor zeros_like(const Tensor& self, c10::optional<MemoryFormat> memory_format); // zeros_like(Tensor,c10::optional<MemoryFormat>)->Tensor
- [x] Tensor zeros_like(const Tensor& self, const TensorOptions& options, c10::optional<MemoryFormat> memory_format); // zeros_like(Tensor,TensorOptions,c10::optional<MemoryFormat>)->Tensor
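The window factories above follow the classic DSP formulas. As a sketch, here is `hann_window(int64_t, bool)` in scalar Python, assuming the standard Hann definition and the usual `periodic` convention (compute a symmetric window one sample longer and drop the last sample) — illustrative only, not the ATen source:

```python
import math

def hann_window(window_length, periodic=True):
    # Symmetric Hann window: w[n] = 0.5 * (1 - cos(2*pi*n / (N - 1))).
    # periodic=True evaluates a symmetric window of length N+1 and drops
    # the last sample, the form typically used for spectral analysis.
    n_fft = window_length + 1 if periodic else window_length
    if n_fft <= 1:
        return [1.0] * window_length
    w = [0.5 * (1.0 - math.cos(2.0 * math.pi * n / (n_fft - 1)))
         for n in range(n_fft)]
    return w[:window_length]

print(hann_window(5, periodic=False))  # approximately [0.0, 0.5, 1.0, 0.5, 0.0]
```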
306,022 | 9,379,678,519 | IssuesEvent | 2019-04-04 15:24:09 | VeraPrinsen/isomorphisms | https://api.github.com/repos/VeraPrinsen/isomorphisms | closed | Improve algorithm | High Priority | This is an example of torus144 graphs 0 and 6.
<img width="1268" alt="Screenshot 2019-03-29 at 13 46 24" src="https://user-images.githubusercontent.com/33416829/55233516-2c302400-5229-11e9-8bbd-d47d54ea5d04.png">
The tottime is the time the algorithm spends in that piece of the code itself; the cumtime is the total time from the start to the end of the method call, including the time of other methods that are called within it.
That is why I think get_colors() is now a bottleneck, since roughly a third of the time is spent there. I also see 4 seconds for add_edge, which comes in via the copy() method.
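For reference, tottime and cumtime are the columns produced by Python's cProfile/pstats; a minimal self-contained demonstration (the function names here are made up for illustration):

```python
import cProfile
import io
import pstats

def helper():
    # work attributed to helper's own tottime
    return sum(i * i for i in range(50_000))

def caller():
    # caller's cumtime includes helper's time; its tottime does not
    return [helper() for _ in range(5)]

profiler = cProfile.Profile()
profiler.runcall(caller)

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats()
report = buf.getvalue()
print(report)
```

In the printed table, `caller` shows a large cumtime but small tottime, while `helper` accounts for most of the tottime — the same pattern used above to point at get_colors() as the bottleneck.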
782,783 | 27,506,923,186 | IssuesEvent | 2023-03-06 04:53:47 | xKDR/Survey.jl | https://api.github.com/repos/xKDR/Survey.jl | closed | Constructor to directly read replicate weights data | enhancement help wanted high priority logic heavy | Currently, you need a `SurveyDesign` to make a `ReplicateDesign`; however, you should be able to make it directly. For privacy reason, in many datasets, only the replicate weights are given and not the actual weights. | 1.0 | Constructor to directly read replicate weights data - Currently, you need a `SurveyDesign` to make a `ReplicateDesign`; however, you should be able to make it directly. For privacy reason, in many datasets, only the replicate weights are given and not the actual weights. | priority | constructor to directly read replicate weights data currently you need a surveydesign to make a replicatedesign however you should be able to make it directly for privacy reason in many datasets only the replicate weights are given and not the actual weights | 1 |
233,040 | 7,689,343,362 | IssuesEvent | 2018-05-17 12:27:56 | HGustavs/LenaSYS | https://api.github.com/repos/HGustavs/LenaSYS | opened | Barriers for login/reset password/security question | Group 1 (2018) highPriority | Currently the "maximum amount of tries" barrier for the 3 features (login, reset password and security question) is written in JavaScript. This is not secure and should probably be done in php instead.
* Login #5363
* Reset password #5364
* Security question #5365 | 1.0 | Barriers for login/reset password/security question - Currently the "maximum amount of tries" barrier for the 3 features (login, reset password and security question) is written in JavaScript. This is not secure and should probably be done in php instead.
* Login #5363
* Reset password #5364
* Security question #5365 | priority | barriers for login reset password security question currently the maximum amount of tries barrier for the features login reset password and security question is written in javascript this is not secure and should probably be done in php instead login reset password security question | 1 |
244,576 | 7,876,797,213 | IssuesEvent | 2018-06-26 03:16:42 | heptio/sonobuoy | https://api.github.com/repos/heptio/sonobuoy | closed | conformance test fail | p0 - Higher Priority | **Is this a BUG REPORT or FEATURE REQUEST?**:
bug.
**What happened**:
My env: Kubernetes v1.10.2, Flannel 0.10.
The cluster was installed with kubeadm; no other add-ons or apps.
command:
`curl -L https://raw.githubusercontent.com/cncf/k8s-conformance/master/sonobuoy-conformance.yaml | kubectl apply -f - `
When the test finished, the sonobuoy log showed "error running plugins: timed out waiting for plugins, shutting down HTTP server". I also found a pod named e2e-XXXX still running, and the tarball did not contain the plugin results.
After that, I followed https://scanner.heptio.com/ and it still failed.
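The forwarder and plugin logs below use logrus-style `key="value"` lines; a small sketch for pulling out the error-level entries (the regex and the sample lines are for illustration only):

```python
import re

# matches lines like: time="..." level=info msg="..."
LOG_LINE = re.compile(r'time="(?P<time>[^"]+)"\s+level=(?P<level>\w+)\s+msg="(?P<msg>[^"]*)"')

sample = [
    'time="2018-05-18T06:01:26Z" level=info msg="failed to send init message" error="non success response code: 502"',
    '2018/05/18 06:01:26 http: proxy error: dial tcp: i/o timeout',
    'time="2018-05-18T07:30:54Z" level=error msg="error running plugins: timed out waiting for plugins, shutting down HTTP server"',
]

def parse(lines):
    # keep only lines in the structured format, as (level, msg) tuples
    out = []
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            out.append((m.group("level"), m.group("msg")))
    return out

events = parse(sample)
errors = [msg for level, msg in events if level == "error"]
print(errors)
```

Note that the bare `http: proxy error: dial tcp: i/o timeout` lines come from Go's standard logger rather than logrus, so they do not match the structured pattern and need separate handling.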
Output of `kubectl -n heptio-sonobuoy logs -f sonobuoy -c forwarder`:
```
time="2018-05-18T06:00:56Z" level=info msg="forwarder information" Scanner ID=04b9547158ca910b5f91453e80df1ae1 Scanner URL="https://scanner.heptio.com"
2018/05/18 06:01:26 http: proxy error: dial tcp: i/o timeout
time="2018-05-18T06:01:26Z" level=info msg="failed to send init message" error="non success response code: 502"
time="2018-05-18T06:01:26Z" level=info msg="waiting for a done file to appear..." looking for=/tmp/sonobuoy/done
time="2018-05-18T07:30:57Z" level=info msg="done file detected"
2018/05/18 07:31:27 http: proxy error: dial tcp: i/o timeout
time="2018-05-18T07:31:27Z" level=info msg="failed to delete init file" error="non success response code: 502"
time="2018-05-18T07:31:27Z" level=info msg="processing the done file contents"
time="2018-05-18T07:31:27Z" level=info msg="processing result file" result file=/tmp/sonobuoy/201805180600_sonobuoy_a99b6b82-3ad5-4033-a01c-011cf20896ab.tar.gz
2018/05/18 07:31:57 http: proxy error: dial tcp: i/o timeout
time="2018-05-18T07:31:57Z" level=info msg="failed to POST data" endpoint="http://127.0.0.1:9898/ingest/" error="non success response code: 502" result file=/tmp/sonobuoy/201805180600_sonobuoy_a99b6b82-3ad5-4033-a01c-011cf20896ab.tar.gz
```
#kubectl -n heptio-sonobuoy logs -f sonobuoy -c kube-sonobuoy
Output of `kubectl -n heptio-sonobuoy logs -f sonobuoy -c kube-sonobuoy`:
```
time="2018-05-18T06:00:54Z" level=info msg="unknown template type" filename=..2018_05_18_06_00_43.406360815
time="2018-05-18T06:00:54Z" level=info msg="unknown template type" filename=..data
time="2018-05-18T06:00:54Z" level=info msg="Scanning plugins in /etc/sonobuoy/plugins.d (pwd: /)"
time="2018-05-18T06:00:54Z" level=info msg="Directory (/etc/sonobuoy/plugins.d) does not exist"
time="2018-05-18T06:00:54Z" level=info msg="Scanning plugins in ~/sonobuoy/plugins.d (pwd: /)"
time="2018-05-18T06:00:54Z" level=info msg="Directory (~/sonobuoy/plugins.d) does not exist"
time="2018-05-18T06:00:54Z" level=info msg="Loading plugin driver Job"
time="2018-05-18T06:00:54Z" level=info msg="Filtering namespaces based on the following regex:.*|heptio-sonobuoy"
time="2018-05-18T06:00:54Z" level=info msg="Namespace default Matched=true"
time="2018-05-18T06:00:54Z" level=info msg="Namespace e2e-tests-daemonsets-m52jl Matched=true"
time="2018-05-18T06:00:54Z" level=info msg="Namespace e2e-tests-prestop-mkpjt Matched=true"
time="2018-05-18T06:00:54Z" level=info msg="Namespace e2e-tests-proxy-wzqg6 Matched=true"
time="2018-05-18T06:00:54Z" level=info msg="Namespace e2e-tests-replication-controller-jwbck Matched=true"
time="2018-05-18T06:00:54Z" level=info msg="Namespace heptio-sonobuoy Matched=true"
time="2018-05-18T06:00:54Z" level=info msg="Namespace kube-public Matched=true"
time="2018-05-18T06:00:54Z" level=info msg="Namespace kube-system Matched=true"
time="2018-05-18T06:00:54Z" level=info msg="Starting server Expected Results: [{ e2e}]"
time="2018-05-18T06:00:54Z" level=info msg="Running (e2e) plugin"
time="2018-05-18T06:00:54Z" level=info msg="Listening for incoming results on 0.0.0.0:8080\n"
time="2018-05-18T07:30:54Z" level=error msg="error running plugins: timed out waiting for plugins, shutting down HTTP server"
time="2018-05-18T07:30:54Z" level=info msg="Running non-ns query"
time="2018-05-18T07:30:54Z" level=info msg="Collecting Node Configuration and Health..."
time="2018-05-18T07:30:54Z" level=info msg="Creating host results for ip-10-27-185-24.eu-central-1.compute.internal under /tmp/sonobuoy/a99b6b82-3ad5-4033-a01c-011cf20896ab/hosts/ip-10-27-185-24.eu-central-1.compute.internal\n"
time="2018-05-18T07:30:54Z" level=warning msg="Could not get configz endpoint for node ip-10-27-185-24.eu-central-1.compute.internal: the server could not find the requested resource"
time="2018-05-18T07:30:54Z" level=warning msg="Could not get healthz endpoint for node ip-10-27-185-24.eu-central-1.compute.internal: the server could not find the requested resource"
time="2018-05-18T07:30:54Z" level=info msg="Creating host results for ip-10-27-185-48.eu-central-1.compute.internal under /tmp/sonobuoy/a99b6b82-3ad5-4033-a01c-011cf20896ab/hosts/ip-10-27-185-48.eu-central-1.compute.internal\n"
time="2018-05-18T07:30:54Z" level=warning msg="Could not get configz endpoint for node ip-10-27-185-48.eu-central-1.compute.internal: the server could not find the requested resource"
time="2018-05-18T07:30:54Z" level=warning msg="Could not get healthz endpoint for node ip-10-27-185-48.eu-central-1.compute.internal: the server could not find the requested resource"
time="2018-05-18T07:30:54Z" level=info msg="Running ns query (default)"
time="2018-05-18T07:30:54Z" level=info msg="Running ns query (e2e-tests-daemonsets-m52jl)"
time="2018-05-18T07:30:54Z" level=info msg="Running ns query (e2e-tests-prestop-mkpjt)"
time="2018-05-18T07:30:54Z" level=info msg="Running ns query (e2e-tests-proxy-wzqg6)"
time="2018-05-18T07:30:54Z" level=info msg="Running ns query (e2e-tests-replication-controller-jwbck)"
time="2018-05-18T07:30:54Z" level=info msg="Running ns query (heptio-sonobuoy)"
time="2018-05-18T07:30:54Z" level=info msg="Collecting Pod Logs..."
time="2018-05-18T07:30:55Z" level=error msg="error querying PodPresets: the server could not find the requested resource (get podpresets.settings.k8s.io)"
time="2018-05-18T07:30:57Z" level=info msg="Running ns query (kube-public)"
time="2018-05-18T07:30:57Z" level=info msg="Running ns query (kube-system)"
time="2018-05-18T07:30:57Z" level=info msg="Results available at /tmp/sonobuoy/201805180600_sonobuoy_a99b6b82-3ad5-4033-a01c-011cf20896ab.tar.gz"
```
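Failure lines in logrus-style `key=value` output like the log above can be isolated with a small filter. A sketch (only the three sample lines are copied from the output above; everything else is illustrative):

```python
import re

# Sample lines in the logrus key=value format used by the sonobuoy log above.
LOG = (
    'time="2018-05-18T07:30:54Z" level=error msg="error running plugins: '
    'timed out waiting for plugins, shutting down HTTP server"\n'
    'time="2018-05-18T07:30:54Z" level=info msg="Running non-ns query"\n'
    'time="2018-05-18T07:30:55Z" level=error msg="error querying PodPresets: '
    'the server could not find the requested resource (get podpresets.settings.k8s.io)"\n'
)

# Capture the level=... token and the quoted msg="..." field of each line.
LINE_RE = re.compile(r'level=(\w+) msg="([^"]*)"')

def failures(text):
    """Return (level, message) pairs for every error or warning line."""
    return [(level, msg) for level, msg in LINE_RE.findall(text)
            if level in ("error", "warning")]

for level, msg in failures(LOG):
    print(f"{level}: {msg}")
```

Piping the full `kubectl logs` output through a filter like this makes the plugin timeout and the PodPresets error stand out immediately.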
**What you expected to happen**:
The test to pass.
**How to reproduce it (as minimally and precisely as possible)**:
kubernetes v1.10.2,flannel 0.10.
`curl -L https://raw.githubusercontent.com/cncf/k8s-conformance/master/sonobuoy-conformance.yaml | kubectl apply -f - `
**Anything else we need to know?**:
**Environment**:
- Sonobuoy tarball (which contains * below)
- Kubernetes version (use `kubectl version`): v1.10.2
- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): centos 7.4
- Kernel (e.g. `uname -a`):Linux ip-10-27-185-48.eu-central-1.compute.internal 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
507,987 | 14,686,298,728 | IssuesEvent | 2021-01-01 14:15:34 | robotframework/PythonLibCore | https://api.github.com/repos/robotframework/PythonLibCore | closed | robotframework-robotlibcore or robotframework-pythonlibcore | bug priority: high

Hi,
Is it robot or python libcore?
The install instructions here on GitHub say `pip install robotframework-robotlibcore`,
but on PyPI (pypi.org) there is only `pip install robotframework-pythonlibcore`.
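Part of the confusion is that the distribution name and the importable module name differ: the PyPI package `robotframework-pythonlibcore` installs a module called `robotlibcore`. A hedged sketch of checking which module names are importable locally:

```python
import importlib.util

def module_available(name):
    """True if a module with this name can be imported in the current
    environment, without actually importing it."""
    return importlib.util.find_spec(name) is not None

# The distribution is published as "robotframework-pythonlibcore",
# but the module you import in library code is "robotlibcore".
print("robotlibcore importable:", module_available("robotlibcore"))
```

Running this in an environment where the package was installed (under either distribution name) should report the module as importable.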
526,073 | 15,279,503,865 | IssuesEvent | 2021-02-23 04:09:25 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | `Document This` code action is suggested for irrelevant positions | Area/LanguageServer Priority/High SwanLakeDump Team/Tooling Type/Bug

**Description:**
Consider the attached capture, where the code action should not be suggested.
<img width="349" alt="Screenshot 2021-02-03 at 10 09 59" src="https://user-images.githubusercontent.com/1329674/106699360-b9ecaf80-6608-11eb-83fd-f0e2f2904a8b.png">
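A fix would presumably gate the code action on the cursor falling inside a documentable node's source range. A minimal sketch of such a check (the `(line, column)` range representation is illustrative, not the Ballerina language server's actual API):

```python
def should_offer_document_this(cursor, node_start, node_end):
    """Offer the 'Document This' action only when the cursor (line, column)
    lies within the documentable construct's source range."""
    return node_start <= cursor <= node_end

# Cursor inside a function spanning lines 3-10: action offered.
print(should_offer_document_this((5, 2), (3, 0), (10, 1)))   # True: inside the range
print(should_offer_document_this((12, 0), (3, 0), (10, 1)))  # False: outside the range
```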
319,776 | 9,754,632,618 | IssuesEvent | 2019-06-04 12:08:55 | dominikbraun/foodunit | https://api.github.com/repos/dominikbraun/foodunit | closed | Preise der Positionen im Warenkorb anzeigen | enhancement high priority

The price of each individual item should be displayed in the shopping cart to make the individual prices easier to see.
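The request amounts to rendering each cart position with its own line total. A minimal sketch (field names, currency, and formatting are illustrative, not FoodUnit's actual model):

```python
def cart_lines(items):
    """Format each cart position as 'qty x name: total', so the
    individual prices are visible next to every item."""
    return [f"{qty} x {name}: {qty * unit_price:.2f} EUR"
            for name, qty, unit_price in items]

for line in cart_lines([("Pizza Margherita", 2, 7.50), ("Cola", 1, 2.20)]):
    print(line)
# 2 x Pizza Margherita: 15.00 EUR
# 1 x Cola: 2.20 EUR
```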
752,208 | 26,276,680,825 | IssuesEvent | 2023-01-06 23:02:46 | BYU-ODH/yvideo-client | https://api.github.com/repos/BYU-ODH/yvideo-client | closed | Switch transcript translation to `LibreTranslate` instead of dictionary | enhancement High Priority

Instead of clicking on a `<span>` tag, I suggest that we allow the user to select any text in the transcript and `onmouseup`, send the selected text to the new `libretranslate.yvideodev.byu.edu` instance and display the response in the pane at the bottom of the screen.
For starters, we could just leave the span tags, and just start using `libretranslate` instead of the dictionary, and then worry about changing the UI later.
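The proposed `onmouseup` flow boils down to a single POST against the LibreTranslate `/translate` endpoint, which takes `q`, `source`, `target`, and `format` fields. A sketch of building that request (the instance URL is taken from the issue; API-key handling, if the instance requires one, is omitted):

```python
import json
from urllib import request

API = "https://libretranslate.yvideodev.byu.edu/translate"

def build_translate_request(text, source="auto", target="en"):
    """Build (but do not send) the POST request the selection handler
    would fire against the LibreTranslate instance."""
    payload = json.dumps({"q": text, "source": source,
                          "target": target, "format": "text"}).encode()
    return request.Request(API, data=payload,
                           headers={"Content-Type": "application/json"})

req = build_translate_request("bonjour tout le monde")
print(req.get_method(), req.full_url)
```

On success, LibreTranslate responds with a JSON body containing a `translatedText` field, which is what the bottom pane would display.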
357,151 | 10,602,957,218 | IssuesEvent | 2019-10-10 15:06:39 | rstudio/shiny | https://api.github.com/repos/rstudio/shiny | closed | plotOutput()'s double-click input value isn't set in Firefox | Priority: High Type: Bug :bug: Type: Regression

To reproduce: Try double-clicking in Firefox with 093-plot-interaction-basic. The input value tied to double-click events is not set.
This issue is a regression from shiny CRAN (v.1.3.2) and the problem was introduced by upgrading jQuery in https://github.com/rstudio/shiny/commit/9a1f7cba6877230dbeb758f32c490f986103cea6. It looks as though the `pending_e.offsetX/Y` values here no longer contain the correct information
https://github.com/rstudio/shiny/blob/02f7a4fdc9976b27e4a522c88c9d483421c46fec/srcjs/output_binding_image.js#L706-L711
Fortunately, I think we can fix the problem by using the more official `pageX`/`pageY` instead to determine if the second click is "too far away".
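The "too far away" test described above is just a distance check on page coordinates. Sketched outside of Shiny (the 2-pixel threshold is illustrative, not Shiny's actual value):

```python
import math

def is_double_click(first, second, max_dist=2.0):
    """Treat two clicks as one double-click only when the second landed
    within max_dist pixels of the first, using pageX/pageY-style coords."""
    dx, dy = second[0] - first[0], second[1] - first[1]
    return math.hypot(dx, dy) <= max_dist

print(is_double_click((120, 80), (121, 80)))   # True: 1 px apart
print(is_double_click((120, 80), (200, 80)))   # False: 80 px apart
```

Basing the check on page coordinates sidesteps the browser-dependent `offsetX`/`offsetY` behavior that broke under the jQuery upgrade.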
516,444 | 14,982,332,828 | IssuesEvent | 2021-01-28 15:52:28 | zowe/api-layer | https://api.github.com/repos/zowe/api-layer | closed | Upgrade JDK to the 11 in the wash.zowe.org for the Sonarcloud run | 20PI4 Priority: High backlog enhancement squad

**Is your feature request related to a problem? Please describe.**
We use SonarCloud for static analysis and test-coverage reporting of the code we work with. JDK 8 is already deprecated for SonarCloud analysis, and the moment when it is removed from support altogether is nearing, so we need to adapt our builds to keep using SonarCloud properly.
**Describe the solution you'd like**
Add JDK 11 to the image we build on. This probably entails creating a new Docker image.
**Describe alternatives you've considered**
Let SonarCloud analyze the code without us pushing the results - This solution isn't available for compiled languages.
Create a new, separate pipeline in Jenkins for the analysis, which will run on a different image. - Adds a lot of complexity and prolongs the run time, but it is a viable option if needed.
284,628 | 8,744,615,014 | IssuesEvent | 2018-12-12 22:52:46 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | Make temp props file used by MSBuild helper file persistent | complex: low priority: high stage: review type: bug type: feature

This came out of a Slack discussion. There is currently a temporary props file used to force MT/MD into projects during build when the MSBuild helper is used. It would be preferable if the file were generated in the build directory and left behind for packaging, both for debugging, and so that the build is reproducible without Conan.
Also related to this request:
https://github.com/conan-io/conan/issues/4073
- [x] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
624,751 | 19,706,251,391 | IssuesEvent | 2022-01-12 22:25:49 | Cotalker/documentation | https://api.github.com/repos/Cotalker/documentation | closed | Bug report: Sign in Report | Bug report Bug high priority

### Affected system
Cotalker Web Application
### Affected system (other)
_No response_
### Affected environment
Production
### Affected environment (other)
_No response_
### App version
17.6.1
### Details
I can't sign in with Google.
### Steps to reproduce
1. Open web.cotalker.com
2. Sign in with Google
### Expected result
says "Could not connect with google"
### Additional data
_No response_
235,012 | 7,733,849,740 | IssuesEvent | 2018-05-26 16:45:12 | Polymer/lit-html | https://api.github.com/repos/Polymer/lit-html | closed | Tag npm releases | Priority: High Status: Available Type: Maintenance

Hi 🙂
I got somewhat confused about the versioning of this library.
The npm version right now is 0.7.1, while the latest tag on GitHub is 0.6.0.
Is there any way we could ensure that released npm versions are tagged, so that we can figure out from the commits what changed? (A changelog would be even better, but since this is not a stable library, maybe it's not quite there yet.)
I'm under the impression Google even has some kind of bot that can do the version-release automation for you, isn't there?
Although this could as well just be some npm script.
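One way to keep npm versions and git tags in sync is a pre-publish check that refuses to publish when the `package.json` version has no matching tag. A sketch of the comparison, assuming a `vX.Y.Z` tag format:

```python
def untagged_release(package_version, git_tags):
    """Return the version string if no matching vX.Y.Z tag exists,
    else None. Feed it package.json's version and `git tag` output."""
    return None if f"v{package_version}" in git_tags else package_version

print(untagged_release("0.7.1", ["v0.5.0", "v0.6.0"]))  # 0.7.1 -> still needs a tag
print(untagged_release("0.6.0", ["v0.5.0", "v0.6.0"]))  # None -> already tagged
```

Wired into an npm `prepublishOnly` script, a check like this would make an untagged 0.7.1 impossible to publish by accident.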
20,385 | 2,622,846,062 | IssuesEvent | 2015-03-04 08:02:39 | max99x/pagemon-chrome-ext | https://api.github.com/repos/max99x/pagemon-chrome-ext | closed | Check interval is difficult to control with slider in 3.0 | auto-migrated Priority-High Type-Enhancement

```
What steps will reproduce the problem?
I wanted to set the check interval to 1 day. It seems to be possible to move
the slider only with the mouse (not with the keyboard) and it was quite
difficult to do so.
What is the expected output? What do you see instead?
Either allow moving the slider with the keyboard (maybe it is possible but I
don't know how) or allow optional entry of interval in minutes in a numerical
box as before 3.0.
What version of the Chrome are you using? On what operating system?
6.0.453.1 Windows XP Pro SP3
Please provide any additional information below.
```
Original issue reported on code.google.com by `gkaemp...@gmail.com` on 4 Jul 2010 at 11:01 | 1.0 | Check interval is difficult to control with slider in 3.0 - ```
What steps will reproduce the problem?
I wanted to set the check interval to 1 day. It seems to be possible to move
the slider only with the mouse (not with the keyboard) and it was quite
difficult to do so.
What is the expected output? What do you see instead?
Either allow moving the slider with the keyboard (maybe it is possible but I
don't know how) or allow optional entry of interval in minutes in a numerical
box as before 3.0.
What version of the Chrome are you using? On what operating system?
6.0.453.1 Windows XP Pro SP3
Please provide any additional information below.
```
Original issue reported on code.google.com by `gkaemp...@gmail.com` on 4 Jul 2010 at 11:01 | priority | check interval is difficult to control with slider in what steps will reproduce the problem i wanted to set the check interval to day it seems to be possible to move the slider only with the mouse not with the keyboard and it was quite difficult to do so what is the expected output what do you see instead either allow moving the slider with the keyboard maybe it is possible but i don t know how or allow optional entry of interval in minutes in a numerical box as before what version of the chrome are you using on what operating system windows xp pro please provide any additional information below original issue reported on code google com by gkaemp gmail com on jul at | 1 |
699,844 | 24,034,234,547 | IssuesEvent | 2022-09-15 17:34:56 | owncloud/ocis | https://api.github.com/repos/owncloud/ocis | closed | Investigate Propfind deep benchmark | OCIS-Fastlane Topic:Performance Status:Stale Priority:p2-high | ## Description
We have a slight performance drop (~25%) between ocis 1.13 and ocis 1.14.
See #2671
Investigation is needed. | 1.0 | Investigate Propfind deep benchmark - ## Description
We have a slight performance drop (~25%) between ocis 1.13 and ocis 1.14.
See #2671
Investigation is needed. | priority | investigate propfind deep benchmark description we have a slight performance drop between ocis and ocis see investigation is needed | 1 |
174,743 | 6,542,818,974 | IssuesEvent | 2017-09-02 13:20:18 | kh-live/khlive | https://api.github.com/repos/kh-live/khlive | opened | Find which NTP servers are allowed on Telkom mobile | bug Department (General) priority (high) | Telkom Mobile blocks default NTP servers. The clock then can't synchronise. And it prevents the scheduler from working. Alternatively use the API to resynch the slave server's clock when the master detects a problem. | 1.0 | Find which NTP servers are allowed on Telkom mobile - Telkom Mobile blocks default NTP servers. The clock then can't synchronise. And it prevents the scheduler from working. Alternatively use the API to resynch the slave server's clock when the master detects a problem. | priority | find which ntp servers are allowed on telkom mobile telkom mobile blocks default ntp servers the clock then can t synchronise and it prevents the scheduler from working alternatively use the api to resynch the slave server s clock when the master detects a problem | 1 |
625,571 | 19,753,389,616 | IssuesEvent | 2022-01-15 10:07:51 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | InputNG: darktable crashes with X Touch Mini device on start if arbitrary shortcuts are defined | priority: high bug: pending | latest changes in InputNG introduced the following issue:
darktable crashes on start if shortcuts are defined (even with no MIDI shortcuts) and the X-Touch Mini is connected:
```
(darktable:32033): Gtk-CRITICAL **: 14:22:06.924: gtk_window_add_accel_group: assertion 'GTK_IS_WINDOW (window)' failed
[midi_open_devices] opened midi device 'X-TOUCH MINI' via 'CoreMIDI' as midi0
libdarktable.dylib was compiled with optimization - stepping may behave oddly; variables may not be available.
Process 32033 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x40)
frame #0: 0x000000010096f47a libdarktable.dylib`_process_shortcut [inlined] _shortcut_match(f=0x00007ff7bfefc4f0, fb_log=0x00007ff7bfefc528) at accelerators.c:2734:14 [opt]
2731
2732 static gboolean _shortcut_match(dt_shortcut_t *f, gchar **fb_log)
2733 {
-> 2734 f->views = darktable.view_manager->current_view->view(darktable.view_manager->current_view);
2735 gpointer v = GINT_TO_POINTER(f->views);
2736
2737 GSequenceIter *existing = g_sequence_search(darktable.control->shortcuts, f, _shortcut_compare_func, v);
Target 0: (darktable) stopped.
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x40)
* frame #0: 0x000000010096f47a libdarktable.dylib`_process_shortcut [inlined] _shortcut_match(f=0x00007ff7bfefc4f0, fb_log=0x00007ff7bfefc528) at accelerators.c:2734:14 [opt]
frame #1: 0x000000010096f472 libdarktable.dylib`_process_shortcut(move_size=NaN) at accelerators.c:2961:6 [opt]
frame #2: 0x000000010096f3d6 libdarktable.dylib`dt_shortcut_move(id=<unavailable>, time=0, move=1, size=NaN) at accelerators.c:3100:20 [opt]
frame #3: 0x000000011812ae4f libmidi.so`update_with_move(midi=0x0000600003923b50, timestamp=0, controller=1, move=<unavailable>) at midi.c:251:24 [opt]
frame #4: 0x000000011812b8ed libmidi.so`_timeout_midi_update(user_data=<unavailable>) at midi.c:516:46 [opt]
frame #5: 0x0000000100518509 libglib-2.0.0.dylib`g_timeout_dispatch + 20
frame #6: 0x000000010051b58d libglib-2.0.0.dylib`g_main_context_dispatch + 257
frame #7: 0x00007ff8097f6cb7 CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 23
frame #8: 0x00007ff8097f6b54 CoreFoundation`__CFRunLoopDoObservers + 543
frame #9: 0x00007ff8097f6312 CoreFoundation`__CFRunLoopRun + 1652
frame #10: 0x00007ff8097f55dd CoreFoundation`CFRunLoopRunSpecific + 563
frame #11: 0x00000001191c7038 libSDL2-2.0.0.dylib`PLATFORM_hid_enumerate + 81
frame #12: 0x00000001191c8262 libSDL2-2.0.0.dylib`SDL_hid_enumerate_REAL + 44
frame #13: 0x00000001192719d5 libSDL2-2.0.0.dylib`HIDAPI_UpdateDeviceList + 72
frame #14: 0x0000000119271f2e libSDL2-2.0.0.dylib`HIDAPI_JoystickDetect + 42
frame #15: 0x000000011927195f libSDL2-2.0.0.dylib`HIDAPI_JoystickInit + 101
frame #16: 0x00000001191cbd31 libSDL2-2.0.0.dylib`SDL_JoystickInit + 93
frame #17: 0x00000001191a6dd8 libSDL2-2.0.0.dylib`SDL_InitSubSystem_REAL + 265
frame #18: 0x000000011900f50b libgamepad.so`gamepad_open_devices(self=0x00000001140553f0) at gamepad.c:251:6 [opt]
frame #19: 0x00000001009b2e13 libdarktable.dylib`dt_lib_init_module(m=0x00000001140553f0) at lib.c:757:5 [opt]
frame #20: 0x0000000100876eac libdarktable.dylib`dt_module_load_modules(subdir=<unavailable>, module_size=472, load_module_so=(libdarktable.dylib`dt_lib_load_module at lib.c:580), init_module=<unavailable>, sort_modules=(libdarktable.dylib`dt_lib_sort_plugins at lib.c:559)) at module.c:59:21 [opt]
frame #21: 0x00000001009b279c libdarktable.dylib`dt_lib_init(lib=<unavailable>) at lib.c:1045:28 [opt]
frame #22: 0x00000001007fc876 libdarktable.dylib`dt_init(argc=<unavailable>, argv=<unavailable>, init_gui=<unavailable>, load_data=1, L=0x0000000000000000) at darktable.c:1121:5 [opt]
frame #23: 0x000000010000ff3a darktable`main(argc=<unavailable>, argv=<unavailable>) at main.c:92:6 [opt]
frame #24: 0x00000001000214fe dyld`start + 462
```
**To Reproduce**
_Please provide detailed steps to reproduce the behaviour, for example:_
1. delete all shortcutsrc* files in config directory, connect XTouch mini
2. run darktable --> everything is fine
3. close darktable, run again --> crash occurs
4. disconnect Midi device
5. run darktable --> everything is fine
**Which commit introduced the error**
bisecting in 3.8.x branch gave
```
95d625dda15b39cf8c4f0a22c1724055247dd8b0 is the first bad commit
commit 95d625dda15b39cf8c4f0a22c1724055247dd8b0
Author: phweyland <philippe.weyland@laposte.net>
Date: Fri Dec 31 13:50:22 2021 -0300
tags presets: append preset's tags instead of replacing.
fixes #10741
src/libs/tagging.c | 12 +-----------
1 file changed, 1 insertion(+), 11 deletions(-)
```
bisecting in master gave:
```
576828122e8bf88bc24069d6a70cf42bdc423703 is the first bad commit
commit 576828122e8bf88bc24069d6a70cf42bdc423703
Author: Diederik ter Rahe <dterrahe@yahoo.com>
Date: Mon Dec 20 01:11:37 2021 +0100
make notebook pages selected image[s] lib shortcuttable
src/libs/image.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
```
**Platform**
* darktable version :darktable-3.8.0+50~g9716a5a5f.dmg
* OS : OSx
| 1.0 | InputNG: darktable crashs with X Touch Mini device on start if arbitrary shortcuts are defined - latest changes in InputNG introduced following issue:
darktable crashes on start if shortcuts are defined (even with no MIDI shortcuts) and the X-Touch Mini is connected:
```
(darktable:32033): Gtk-CRITICAL **: 14:22:06.924: gtk_window_add_accel_group: assertion 'GTK_IS_WINDOW (window)' failed
[midi_open_devices] opened midi device 'X-TOUCH MINI' via 'CoreMIDI' as midi0
libdarktable.dylib was compiled with optimization - stepping may behave oddly; variables may not be available.
Process 32033 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x40)
frame #0: 0x000000010096f47a libdarktable.dylib`_process_shortcut [inlined] _shortcut_match(f=0x00007ff7bfefc4f0, fb_log=0x00007ff7bfefc528) at accelerators.c:2734:14 [opt]
2731
2732 static gboolean _shortcut_match(dt_shortcut_t *f, gchar **fb_log)
2733 {
-> 2734 f->views = darktable.view_manager->current_view->view(darktable.view_manager->current_view);
2735 gpointer v = GINT_TO_POINTER(f->views);
2736
2737 GSequenceIter *existing = g_sequence_search(darktable.control->shortcuts, f, _shortcut_compare_func, v);
Target 0: (darktable) stopped.
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x40)
* frame #0: 0x000000010096f47a libdarktable.dylib`_process_shortcut [inlined] _shortcut_match(f=0x00007ff7bfefc4f0, fb_log=0x00007ff7bfefc528) at accelerators.c:2734:14 [opt]
frame #1: 0x000000010096f472 libdarktable.dylib`_process_shortcut(move_size=NaN) at accelerators.c:2961:6 [opt]
frame #2: 0x000000010096f3d6 libdarktable.dylib`dt_shortcut_move(id=<unavailable>, time=0, move=1, size=NaN) at accelerators.c:3100:20 [opt]
frame #3: 0x000000011812ae4f libmidi.so`update_with_move(midi=0x0000600003923b50, timestamp=0, controller=1, move=<unavailable>) at midi.c:251:24 [opt]
frame #4: 0x000000011812b8ed libmidi.so`_timeout_midi_update(user_data=<unavailable>) at midi.c:516:46 [opt]
frame #5: 0x0000000100518509 libglib-2.0.0.dylib`g_timeout_dispatch + 20
frame #6: 0x000000010051b58d libglib-2.0.0.dylib`g_main_context_dispatch + 257
frame #7: 0x00007ff8097f6cb7 CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 23
frame #8: 0x00007ff8097f6b54 CoreFoundation`__CFRunLoopDoObservers + 543
frame #9: 0x00007ff8097f6312 CoreFoundation`__CFRunLoopRun + 1652
frame #10: 0x00007ff8097f55dd CoreFoundation`CFRunLoopRunSpecific + 563
frame #11: 0x00000001191c7038 libSDL2-2.0.0.dylib`PLATFORM_hid_enumerate + 81
frame #12: 0x00000001191c8262 libSDL2-2.0.0.dylib`SDL_hid_enumerate_REAL + 44
frame #13: 0x00000001192719d5 libSDL2-2.0.0.dylib`HIDAPI_UpdateDeviceList + 72
frame #14: 0x0000000119271f2e libSDL2-2.0.0.dylib`HIDAPI_JoystickDetect + 42
frame #15: 0x000000011927195f libSDL2-2.0.0.dylib`HIDAPI_JoystickInit + 101
frame #16: 0x00000001191cbd31 libSDL2-2.0.0.dylib`SDL_JoystickInit + 93
frame #17: 0x00000001191a6dd8 libSDL2-2.0.0.dylib`SDL_InitSubSystem_REAL + 265
frame #18: 0x000000011900f50b libgamepad.so`gamepad_open_devices(self=0x00000001140553f0) at gamepad.c:251:6 [opt]
frame #19: 0x00000001009b2e13 libdarktable.dylib`dt_lib_init_module(m=0x00000001140553f0) at lib.c:757:5 [opt]
frame #20: 0x0000000100876eac libdarktable.dylib`dt_module_load_modules(subdir=<unavailable>, module_size=472, load_module_so=(libdarktable.dylib`dt_lib_load_module at lib.c:580), init_module=<unavailable>, sort_modules=(libdarktable.dylib`dt_lib_sort_plugins at lib.c:559)) at module.c:59:21 [opt]
frame #21: 0x00000001009b279c libdarktable.dylib`dt_lib_init(lib=<unavailable>) at lib.c:1045:28 [opt]
frame #22: 0x00000001007fc876 libdarktable.dylib`dt_init(argc=<unavailable>, argv=<unavailable>, init_gui=<unavailable>, load_data=1, L=0x0000000000000000) at darktable.c:1121:5 [opt]
frame #23: 0x000000010000ff3a darktable`main(argc=<unavailable>, argv=<unavailable>) at main.c:92:6 [opt]
frame #24: 0x00000001000214fe dyld`start + 462
```
**To Reproduce**
_Please provide detailed steps to reproduce the behaviour, for example:_
1. delete all shortcutsrc* files in config directory, connect XTouch mini
2. run darktable --> everything is fine
3. close darktable, run again --> crash occurs
4. disconnect Midi device
5. run darktable --> everything is fine
**Which commit introduced the error**
bisecting in 3.8.x branch gave
```
95d625dda15b39cf8c4f0a22c1724055247dd8b0 is the first bad commit
commit 95d625dda15b39cf8c4f0a22c1724055247dd8b0
Author: phweyland <philippe.weyland@laposte.net>
Date: Fri Dec 31 13:50:22 2021 -0300
tags presets: append preset's tags instead of replacing.
fixes #10741
src/libs/tagging.c | 12 +-----------
1 file changed, 1 insertion(+), 11 deletions(-)
```
bisecting in master gave:
```
576828122e8bf88bc24069d6a70cf42bdc423703 is the first bad commit
commit 576828122e8bf88bc24069d6a70cf42bdc423703
Author: Diederik ter Rahe <dterrahe@yahoo.com>
Date: Mon Dec 20 01:11:37 2021 +0100
make notebook pages selected image[s] lib shortcuttable
src/libs/image.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
```
**Platform**
* darktable version :darktable-3.8.0+50~g9716a5a5f.dmg
* OS : OSx
| priority | inputng darktable crashs with x touch mini device on start if arbitrary shortcuts are defined latest changes in inputng introduced following issue darktable crashes on start if shortcuts are defined even no midi shortcuts and the xtocuh mini is connected darktable gtk critical gtk window add accel group assertion gtk is window window failed opened midi device x touch mini via coremidi as libdarktable dylib was compiled with optimization stepping may behave oddly variables may not be available process stopped thread queue com apple main thread stop reason exc bad access code address frame libdarktable dylib process shortcut shortcut match f fb log at accelerators c static gboolean shortcut match dt shortcut t f gchar fb log f views darktable view manager current view view darktable view manager current view gpointer v gint to pointer f views gsequenceiter existing g sequence search darktable control shortcuts f shortcut compare func v target darktable stopped lldb bt thread queue com apple main thread stop reason exc bad access code address frame libdarktable dylib process shortcut shortcut match f fb log at accelerators c frame libdarktable dylib process shortcut move size nan at accelerators c frame libdarktable dylib dt shortcut move id time move size nan at accelerators c frame libmidi so update with move midi timestamp controller move at midi c frame libmidi so timeout midi update user data at midi c frame libglib dylib g timeout dispatch frame libglib dylib g main context dispatch frame corefoundation cfrunloop is calling out to an observer callback function frame corefoundation cfrunloopdoobservers frame corefoundation cfrunlooprun frame corefoundation cfrunlooprunspecific frame dylib platform hid enumerate frame dylib sdl hid enumerate real frame dylib hidapi updatedevicelist frame dylib hidapi joystickdetect frame dylib hidapi joystickinit frame dylib sdl joystickinit frame dylib sdl initsubsystem real frame libgamepad so gamepad open devices 
self at gamepad c frame libdarktable dylib dt lib init module m at lib c frame libdarktable dylib dt module load modules subdir module size load module so libdarktable dylib dt lib load module at lib c init module sort modules libdarktable dylib dt lib sort plugins at lib c at module c frame libdarktable dylib dt lib init lib at lib c frame libdarktable dylib dt init argc argv init gui load data l at darktable c frame darktable main argc argv at main c frame dyld start to reproduce please provide detailed steps to reproduce the behaviour for example delete all shortcutsrc files in config directory connect xtouch mini run darktable everything is fine close darktable run again crash occurs disconnect midi device run darktable everything is fine which commit introduced the error bisecting in x branch gave is the first bad commit commit author phweyland date fri dec tags presets append preset s tags instead of replacing fixes src libs tagging c file changed insertion deletions bisecting in master gave is the first bad commit commit author diederik ter rahe date mon dec make notebook pages selected image lib shortcuttable src libs image c file changed insertions deletion platform darktable version darktable dmg os osx | 1 |
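The backtrace above shows `_shortcut_match` dereferencing `darktable.view_manager->current_view` while `dt_init()` is still loading modules, before any view exists, because a MIDI timeout fires early. The shape of the missing guard can be modeled in Python (the class and method names are invented for illustration; the actual fix belongs in the C code in accelerators.c):

```python
class ViewManager:
    """Stand-in for darktable's view manager during startup."""

    def __init__(self):
        # current_view stays unset until GUI initialization completes;
        # this models the NULL pointer the crash dereferences.
        self.current_view = None


def shortcut_match(view_manager, shortcut):
    """Match a shortcut against the active view, ignoring early events."""
    # Guard: a MIDI/controller timeout can fire while modules are still
    # loading, so refuse to match until a view is actually active.
    if view_manager.current_view is None:
        return False
    shortcut["views"] = view_manager.current_view.views()
    return True


print(shortcut_match(ViewManager(), {}))  # prints: False (no crash)
```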
307,058 | 9,414,155,175 | IssuesEvent | 2019-04-10 09:29:37 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Vechicles must save parameters. | Semi-High Priority Usability suggestion | Losing authorization and owner after pickup\place produce too many uncomfortable situation. e.g. party in mine, and stucked cart, someone in deed picked up cart - all other lose access, etc...
Please also preserve the text field, name, and attached deed.
We already have "mint" it save their parameters after pickup\place, can this mechanics to be expanded to vechicles? | 1.0 | Vechicles must save parameters. - Losing authorization and owner after pickup\place produce too many uncomfortable situation. e.g. party in mine, and stucked cart, someone in deed picked up cart - all other lose access, etc...
Please also preserve the text field, name, and attached deed.
We already have "mint" it save their parameters after pickup\place, can this mechanics to be expanded to vechicles? | priority | vechicles must save parameters losing authorization and owner after pickup place produce too many uncomfortable situation e g party in mine and stucked cart someone in deed picked up cart all other lose access etc also do not miss text field name and attached deed please we already have mint it save their parameters after pickup place can this mechanics to be expanded to vechicles | 1 |
77,559 | 3,506,960,592 | IssuesEvent | 2016-01-08 10:27:03 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Remove sidebar overlay div | bug in progress Priority: High | The mobile layout has an div that intercepts events for the map. remove it. | 1.0 | Remove sidebar overlay div - The mobile layout has an div that intercepts events for the map. remove it. | priority | remove sidebar overlay div the mobile layout has an div that intercepts events for the map remove it | 1 |
369,036 | 10,887,559,212 | IssuesEvent | 2019-11-18 14:45:50 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Wrong amphtml when static page is selected in reading settings along with ?amp endpoint | Urgent [Priority: HIGH] bug | Wrong amphtml when static page is selected in reading settings along ?amp endpoint
https://monosnap.com/file/6hvaPtPU6yLaNNEmaMQ04DbAgXxws4 | 1.0 | Wrong amphtml when static page is selected in reading settings along with ?amp endpoint - Wrong amphtml when static page is selected in reading settings along ?amp endpoint
https://monosnap.com/file/6hvaPtPU6yLaNNEmaMQ04DbAgXxws4 | priority | wrong amphtml when static page is selected in reading settings along with amp endpoint wrong amphtml when static page is selected in reading settings along amp endpoint | 1 |
557,739 | 16,517,553,411 | IssuesEvent | 2021-05-26 11:22:34 | alifarukyucel/django-web-app | https://api.github.com/repos/alifarukyucel/django-web-app | opened | Create, Update and Edit Posts | backend enhancement priority:high | Add forms for users to create, update and edit posts. Add class-based views. | 1.0 | Create, Update and Edit Posts - Add forms for users to create, update and edit posts. Add class-based views. | priority | create update and edit posts add forms for users to create update and edit posts add class based views | 1 |
177,493 | 6,584,450,776 | IssuesEvent | 2017-09-13 10:14:39 | arquillian/smart-testing | https://api.github.com/repos/arquillian/smart-testing | opened | Should produce reports when debug mode enabled | Component: Maven Priority: High Type: Feature | ##### Issue Overview
When we enable debug mode we should also produce Smart Testing [execution reports](http://arquillian.org/smart-testing/#_reports) during the execution.
| 1.0 | Should produce reports when debug mode enabled - ##### Issue Overview
When we enable debug mode we should also produce Smart Testing [execution reports](http://arquillian.org/smart-testing/#_reports) during the execution.
| priority | should produce reports when debug mode enabled issue overview when we enable debug mode we should also produce smart testing during the execution | 1 |
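The requested behaviour, debug mode implying report generation, amounts to a one-line configuration rule. A rough sketch in Python; the key names `debug` and `report` are invented for illustration and do not correspond to the actual Smart Testing settings:

```python
def effective_config(user_config: dict) -> dict:
    """Derive the effective settings: enabling debug also enables reports."""
    config = dict(user_config)             # don't mutate the caller's dict
    if config.get("debug"):
        config.setdefault("report", True)  # debug implies reports
    return config


print(effective_config({"debug": True}))                   # prints: {'debug': True, 'report': True}
print(effective_config({"debug": True, "report": False}))  # an explicit choice wins
```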
289,172 | 8,855,518,624 | IssuesEvent | 2019-01-09 06:51:07 | visit-dav/issues-test | https://api.github.com/repos/visit-dav/issues-test | closed | No parallel hardware acceleration with existing X servers | bug crash likelihood high priority reviewed severity high wrong results | It is not possible to get parallel hardware acceleration with existing X servers. The bug is in the file engine/main/XDisplay.C in function XDisplay::Initialize:[...] //updated: enabling new logic if(!this>launch || (this>xserver = xinit(this>display, user_args)) 1) //if(this>launch && (this>xserver = xinit(this>display, user_args)) 1) { return false; }[...] The "new logic" causes the function to fail if no X server should be launched or if the X server fails to launch. The "old logic" caused the function to fail if a X server should be launched and it fails. So this function always fails when existing X servers should be used. This causes that no attempt is made to connect to the display server, see function Engine::SetupDisplay in engine/main/Engine.C. As a result, the parallel engine fails or falls back to software rendering. My tests with the "old logic" were successful, so I propose to simply revoke the "new logic".
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2672
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: No parallel hardware acceleration with existing X servers
Assigned to: Mark Miller
Category: -
Target version: 2.12.0
Author: John Plate
Start: 08/30/2016
Due date:
% Done: 0%
Estimated time:
Created: 08/30/2016 11:45 am
Updated: 10/25/2016 01:44 pm
Likelihood: 5 - Always
Severity: 4 - Crash / Wrong Results
Found in version: 2.10.3
Impact:
Expected Use:
OS: Linux
Support Group: Any
Description:
It is not possible to get parallel hardware acceleration with existing X servers. The bug is in the file engine/main/XDisplay.C in function XDisplay::Initialize: [...] //updated: enabling new logic if(!this->launch || (this->xserver = xinit(this->display, user_args)) == -1) //if(this->launch && (this->xserver = xinit(this->display, user_args)) == -1) { return false; } [...] The "new logic" causes the function to fail if no X server should be launched or if the X server fails to launch. The "old logic" caused the function to fail if an X server should be launched and it fails. So this function always fails when existing X servers should be used. Because of this, no attempt is made to connect to the display server; see function Engine::SetupDisplay in engine/main/Engine.C. As a result, the parallel engine fails or falls back to software rendering. My tests with the "old logic" were successful, so I propose to simply revoke the "new logic".
Comments:
During further investigation I found that the function XDisplay::Connect only tests the display connection and closes it immediately rather than using it subsequently for opening windows. The function name is misleading; "CheckConnection" would be more intuitive. The question remains why the "old logic" works for me. Currently I assume the reason is that XDisplay::Connect sets the DISPLAY environment variable, which might be used to create the rendering window later. I will continue to investigate ... Further investigations confirmed that the DISPLAY environment variable is used to create the vtkRenderWindow for offscreen rendering. So I still propose to use the "old logic" again. After emailing Hari, and Hari explaining that the issue he was seeing may have been related to other activities later on during initialization, I reverted this change.
| 1.0 | No parallel hardware acceleration with existing X servers - It is not possible to get parallel hardware acceleration with existing X servers. The bug is in the file engine/main/XDisplay.C in function XDisplay::Initialize:[...] //updated: enabling new logic if(!this>launch || (this>xserver = xinit(this>display, user_args)) 1) //if(this>launch && (this>xserver = xinit(this>display, user_args)) 1) { return false; }[...] The "new logic" causes the function to fail if no X server should be launched or if the X server fails to launch. The "old logic" caused the function to fail if a X server should be launched and it fails. So this function always fails when existing X servers should be used. This causes that no attempt is made to connect to the display server, see function Engine::SetupDisplay in engine/main/Engine.C. As a result, the parallel engine fails or falls back to software rendering. My tests with the "old logic" were successful, so I propose to simply revoke the "new logic".
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2672
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: No parallel hardware acceleration with existing X servers
Assigned to: Mark Miller
Category: -
Target version: 2.12.0
Author: John Plate
Start: 08/30/2016
Due date:
% Done: 0%
Estimated time:
Created: 08/30/2016 11:45 am
Updated: 10/25/2016 01:44 pm
Likelihood: 5 - Always
Severity: 4 - Crash / Wrong Results
Found in version: 2.10.3
Impact:
Expected Use:
OS: Linux
Support Group: Any
Description:
It is not possible to get parallel hardware acceleration with existing X servers. The bug is in the file engine/main/XDisplay.C in function XDisplay::Initialize: [...] //updated: enabling new logic if(!this->launch || (this->xserver = xinit(this->display, user_args)) == -1) //if(this->launch && (this->xserver = xinit(this->display, user_args)) == -1) { return false; } [...] The "new logic" causes the function to fail if no X server should be launched or if the X server fails to launch. The "old logic" caused the function to fail if an X server should be launched and it fails. So this function always fails when existing X servers should be used. Because of this, no attempt is made to connect to the display server; see function Engine::SetupDisplay in engine/main/Engine.C. As a result, the parallel engine fails or falls back to software rendering. My tests with the "old logic" were successful, so I propose to simply revoke the "new logic".
Comments:
During further investigation I found that the function XDisplay::Connect only tests the display connection and closes it immediately rather than using it subsequently for opening windows. The function name is misleading; "CheckConnection" would be more intuitive. The question remains why the "old logic" works for me. Currently I assume the reason is that XDisplay::Connect sets the DISPLAY environment variable, which might be used to create the rendering window later. I will continue to investigate ... Further investigations confirmed that the DISPLAY environment variable is used to create the vtkRenderWindow for offscreen rendering. So I still propose to use the "old logic" again. After emailing Hari, and Hari explaining that the issue he was seeing may have been related to other activities later on during initialization, I reverted this change.
| priority | no parallel hardware acceleration with existing x servers it is not possible to get parallel hardware acceleration with existing x servers the bug is in the file engine main xdisplay c in function xdisplay initialize updated enabling new logic if this launch this xserver xinit this display user args if this launch this xserver xinit this display user args return false the new logic causes the function to fail if no x server should be launched or if the x server fails to launch the old logic caused the function to fail if a x server should be launched and it fails so this function always fails when existing x servers should be used this causes that no attempt is made to connect to the display server see function engine setupdisplay in engine main engine c as a result the parallel engine fails or falls back to software rendering my tests with the old logic were successful so i propose to simply revoke the new logic redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority high subject no parallel hardware acceleration with existing x servers assigned to mark miller category target version author john plate start due date done estimated time created am updated pm likelihood always severity crash wrong results found in version impact expected use os linux support group any description it is not possible to get parallel hardware acceleration with existing x servers the bug is in the file engine main xdisplay c in function xdisplay initialize updated enabling new logic if this launch this xserver xinit this display user args if this launch this xserver xinit this display user args return false the new logic causes the function to fail if no x server should be launched or if the x server fails to launch the old logic caused the function to fail if a x server should be 
launched and it fails so this function always fails when existing x servers should be used this causes that no attempt is made to connect to the display server see function engine setupdisplay in engine main engine c as a result the parallel engine fails or falls back to software rendering my tests with the old logic were successful so i propose to simply revoke the new logic comments during further investigation i found that the function xdisplay connect only tests the display connection and closes it immediately rather that using it subsequently for opening windows the function name is misleading checkconnection would be more intuitive the question remains why the old logic works for me currently i assume the reason is that xdisplay connect sets the display environment variable which might be used to create the rendering window later i will continue to investigate further investigations confirmed that the display environment variable is used to create the vtkrenderwindow for offscreen rendering so i still propose to use the old logic again after emailing hari and hari explaining that the issue he was seeing may have been related to other activities later on during initialization i reverted this change | 1 |
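The logical difference between the two conditions can be checked mechanically. Below is a small Python model of when each condition makes Initialize fail; the xinit() failure case is folded into a single boolean for this sketch:

```python
def new_logic_fails(launch: bool, xinit_fails: bool) -> bool:
    # if (!this->launch || xinit(...) fails) return false;
    # Short-circuit: xinit() is only reached when launch is true.
    return (not launch) or xinit_fails


def old_logic_fails(launch: bool, xinit_fails: bool) -> bool:
    # if (this->launch && xinit(...) fails) return false;
    return launch and xinit_fails


# The case from the report: an existing X server, so nothing to launch.
print(new_logic_fails(launch=False, xinit_fails=False))  # prints: True  (Initialize fails)
print(old_logic_fails(launch=False, xinit_fails=False))  # prints: False (Initialize proceeds)
```

The two conditions agree whenever a launch is attempted, and differ only in the existing-server case, which is exactly the scenario the reporter describes.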
728,959 | 25,102,592,847 | IssuesEvent | 2022-11-08 14:36:05 | jfr609/architecture-refactoring-helper | https://api.github.com/repos/jfr609/architecture-refactoring-helper | closed | Add and edit scenarios | priority: high feature: new | Add the possibility to create scenarios in a project and determine the importance and difficulty. For this purpose, the Sketch can be used for the interface. These should be editable. | 1.0 | Add and edit scenarios - Add the possibility to create scenarios in a project and determine the importance and difficulty. For this purpose, the Sketch can be used for the interface. These should be editable. | priority | add and edit scenarios add the possibility to create scenarios in a project and determine the importance and difficulty for this purpose the sketch can be used for the interface these should be editable | 1 |
823,664 | 31,029,114,779 | IssuesEvent | 2023-08-10 11:13:04 | dodona-edu/dodona | https://api.github.com/repos/dodona-edu/dodona | closed | Enable additional host names | feature high priority | This issue tracks the steps that need to be taken to switch from dodona.ugent.be to dodona.be. This will be a multi step process.
## Step 1: Enable naos.dodona.be
**Code changes**
- [x] Disable the redirect from naos.dodona.be to naos.ugent.be
- [x] Enable additional host for naos in rails #3292
- [x] Set a relative asset host to host the assets from the same domain as the main request #3293
- [x] Provide different SAML metadata depending on the hostname of the request #3293
- [x] Use correct host name for OIDC requests #3293
- [x] Use correct host name for LTI requests #3293
- [x] Enable additional host for dodona in rails #3293
- [x] Enable /media requests on alternative host names #3293
**Things to check**
- [x] UJS seems to use the default host for remote links
- [x] Check LTI (ufora and ilearn)
- [x] Check UGent sign in
- [x] Check O365 sign in
- [x] Check GSuite sign in
- [x] Check Smartschool sign in
- [x] Check Elixir sign in
- [x] Check Vlaanderen sign in
- [x] Check Surf sign in
- [x] Line 42 of the routes file
**Administrative changes**
- [x] Enable Cloudflare proxy for naos
- [x] Additional redirect URLs were added to Google sign in
- [x] Additional redirect URLs were added to azure
- [x] Register additional SAML service providers in Belnet for the extra hostnames
## Step 2: Enable dodona.be
**Code changes**
- [x] Disable the redirect from dodona.be to dodona.ugent.be
**Things to check**
- [x] UJS seems to use the default host for remote links
- [ ] Check LTI (ufora and ilearn)
- [x] Check UGent sign in
- [x] Check O365 sign in
- [x] Check GSuite sign in
- [x] Check Smartschool sign in
- [x] Check Elixir sign in
- [ ] Check Vlaanderen sign in
- [ ] Check Surf sign in
- [ ] Line 42 of the routes file
- [ ] Check if github webhooks are still working
**Administrative changes**
- [ ] Contact the IDPs we use to add the new hostnames/issuers to the whitelist (can be done at a later time)
## Step 3: Redirect all traffic to the new hostname
**Code changes**
- [x] Redirect dodona.ugent.be to dodona.be
## Step 4: Cleanup
**Code changes**
- [x] Re enable the asset host: https://github.com/dodona-edu/dodona/pull/3293#discussion_r797404955
- [ ] Find-all on the repo and replace dodona.ugent.be by dodona.be where needed (e.g. the sponsor link)
- [x] In the documentation repo replace dodona.ugent.be by dodona.be
I will use this occasion to update our (contact) information about the IDPs we work with in https://github.com/dodona-edu/wiki/blob/main/technical/authentication.md | 1.0 | Enable additional host names - This issue tracks the steps that need to be taken to switch from dodona.ugent.be to dodona.be. This will be a multi step process.
## Step 1: Enable naos.dodona.be
**Code changes**
- [x] Disable the redirect from naos.dodona.be to naos.ugent.be
- [x] Enable additional host for naos in rails #3292
- [x] Set a relative asset host to host the assets from the same domain as the main request #3293
- [x] Provide different SAML metadata depending on the hostname of the request #3293
- [x] Use correct host name for OIDC requests #3293
- [x] Use correct host name for LTI requests #3293
- [x] Enable additional host for dodona in rails #3293
- [x] Enable /media requests on alternative host names #3293
**Things to check**
- [x] UJS seems to use the default host for remote links
- [x] Check LTI (ufora and ilearn)
- [x] Check UGent sign in
- [x] Check O365 sign in
- [x] Check GSuite sign in
- [x] Check Smartschool sign in
- [x] Check Elixir sign in
- [x] Check Vlaanderen sign in
- [x] Check Surf sign in
- [x] Line 42 of the routes file
**Administrative changes**
- [x] Enable Cloudflare proxy for naos
- [x] Additional redirect URLs were added to Google sign in
- [x] Additional redirect URLs were added to azure
- [x] Register additional SAML service providers in Belnet for the extra hostnames
## Step 2: Enable dodona.be
**Code changes**
- [x] Disable the redirect from dodona.be to dodona.ugent.be
**Things to check**
- [x] UJS seems to use the default host for remote links
- [ ] Check LTI (ufora and ilearn)
- [x] Check UGent sign in
- [x] Check O365 sign in
- [x] Check GSuite sign in
- [x] Check Smartschool sign in
- [x] Check Elixir sign in
- [ ] Check Vlaanderen sign in
- [ ] Check Surf sign in
- [ ] Line 42 of the routes file
- [ ] Check if github webhooks are still working
**Administrative changes**
- [ ] Contact the IDPs we use to add the new hostnames/issuers to the whitelist (can be done at a later time)
## Step 3: Redirect all traffic to the new hostname
**Code changes**
- [x] Redirect dodona.ugent.be to dodona.be
## Step 4: Cleanup
**Code changes**
- [x] Re enable the asset host: https://github.com/dodona-edu/dodona/pull/3293#discussion_r797404955
- [ ] Find-all on the repo and replace dodona.ugent.be by dodona.be where needed (e.g. the sponsor link)
- [x] In the documentation repo replace dodona.ugent.be by dodona.be
I will use this occasion to update our (contact) information about the IDPs we work with in https://github.com/dodona-edu/wiki/blob/main/technical/authentication.md | priority | enable additional host names this issue tracks the steps that need to be taken to switch from dodona ugent be to dodona be this will be a multi step process step enable naos dodona be code changes disable the redirect from naos dodona be to naos ugent be enable additional host for naos in rails set a relative asset host to host the assets from the same domain as the main request provide different saml metadata depending on the hostname of the request use correct host name for oidc requests use correct host name for lti requests enable additional host for dodona in rails enable media requests on alternative host names things to check ujs seems to use the default host for remote links check lti ufora and ilearn check ugent sign in check sign in check gsuite sign in check smartschool sign in check elixir sign in check vlaanderen sign in check surf sign in line of the routes file administrative changes enable cloudflare proxy for naos additional redirect urls were added to google sign in additional redirect urls were added to azure register additional saml service providers in belnet for the extra hostnames step enable dodona be code changes disable the redirect from dodona be to dodona ugent be things to check ujs seems to use the default host for remote links check lti ufora and ilearn check ugent sign in check sign in check gsuite sign in check smartschool sign in check elixir sign in check vlaanderen sign in check surf sign in line of the routes file check if github webhooks are still working administrative changes contact the idps we use to add the new hostnames issuers to the whitelist can be done at a later time step redirect all traffic to the new hostname code changes redirect dodona ugent be to dodona be step cleanup code changes re enable the asset host find all on the repo and 
replace dodona ugent be by dodona be where needed e g the sponsor link in the documentation repo replace dodona ugent be by dodona be i will use this occasion to update our contact information about the idps we work with in | 1 |
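The Step 3 redirect in the checklist above amounts to a small host lookup. A minimal sketch with hypothetical names (not the actual Rails implementation): traffic arriving on the legacy hostname is sent to the new one, preserving the request path.

```python
# Legacy hostname -> canonical hostname, per Step 3 of the issue above.
LEGACY_HOSTS = {'dodona.ugent.be': 'dodona.be'}

def canonical_redirect(host, path):
    """Return the redirect target URL, or None if the host is already canonical."""
    if host in LEGACY_HOSTS:
        return 'https://' + LEGACY_HOSTS[host] + path
    return None
```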
307,405 | 9,416,701,734 | IssuesEvent | 2019-04-10 15:10:38 | IBM/gWhisper | https://api.github.com/repos/IBM/gWhisper | closed | Empty messages in request do not work | High Priority bug | **Describe the bug**
RPCs with empty request message cannot be called.
**To Reproduce**
against test server execute:
`./gwhisper 127.0.0.1 examples.NestedTypeRpcs getTime`
**Expected behavior**
Expected rpc to succeed, however I observed
```
Parse failed. Parsed until: '127.0.0.1 examples.NestedTypeRpcs getTime'
Possible Candidates:
'127.0.0.1 examples.NestedTypeRpcs getTime '
```
| 1.0 | Empty messages in request do not work - **Describe the bug**
RPCs with empty request message cannot be called.
**To Reproduce**
against test server execute:
`./gwhisper 127.0.0.1 examples.NestedTypeRpcs getTime`
**Expected behavior**
Expected rpc to succeed, however I observed
```
Parse failed. Parsed until: '127.0.0.1 examples.NestedTypeRpcs getTime'
Possible Candidates:
'127.0.0.1 examples.NestedTypeRpcs getTime '
```
| priority | empty messages in request do not work describe the bug rpcs with empty request message cannot be called to reproduce against test server execute gwhisper examples nestedtyperpcs gettime expected behavior expected rpc to succeed however i observed parse failed parsed until examples nestedtyperpcs gettime possible candidates examples nestedtyperpcs gettime | 1 |
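Judging from the error output, the grammar appears to demand at least one request-field token. A toy sketch of that failure mode (hypothetical, not gWhisper's actual parser):

```python
# Toy parser showing why a grammar that requires at least one field token
# rejects an RPC whose request message is empty.
def parse_call(tokens, allow_empty_request=False):
    if len(tokens) < 3:
        raise ValueError('expected: <address> <service> <method> [fields...]')
    fields = tokens[3:]
    if not fields and not allow_empty_request:
        raise ValueError('Parse failed')   # analogous to the output above
    return {'address': tokens[0], 'service': tokens[1],
            'method': tokens[2], 'fields': fields}

call = ['127.0.0.1', 'examples.NestedTypeRpcs', 'getTime']
# Accepting an empty field list is what makes the reported call parse:
parse_call(call, allow_empty_request=True)
```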
502,280 | 14,543,605,779 | IssuesEvent | 2020-12-15 17:04:46 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Full screen widgets | Accepted C040-2020-COMUNE_GE-SUPPORT Priority: High enhancement | ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
It is required to add a control to expand widgets to full screen (both in the viewer and in dashboards). The widget must cover the entire viewer space when expanded. The expand tool can be included beside the other ones (see image below).

**What kind of improvement you want to add?** (check one with "x", remove the others)
- [x] Minor changes to existing features
- [ ] Code style update (formatting, local variables)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] CI related changes
- [ ] Other... Please describe:
## Other useful information
| 1.0 | Full screen widgets - ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
It is required to add a control to expand widgets to full screen (both in the viewer and in dashboards). The widget must cover the entire viewer space when expanded. The expand tool can be included beside the other ones (see image below).

**What kind of improvement you want to add?** (check one with "x", remove the others)
- [x] Minor changes to existing features
- [ ] Code style update (formatting, local variables)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] CI related changes
- [ ] Other... Please describe:
## Other useful information
| priority | full screen widgets description it is required to add a control to be able to expand widgets to full screen in both in viewer and dashboard the widget must cover the entire viewer space when expanded the expand tool can be included beside the other ones like see image below what kind of improvement you want to add check one with x remove the others minor changes to existing features code style update formatting local variables refactoring no functional changes no api changes build related changes ci related changes other please describe other useful information | 1 |
93,990 | 3,917,617,459 | IssuesEvent | 2016-04-21 09:02:53 | brian-team/brian2 | https://api.github.com/repos/brian-team/brian2 | closed | Repeated `Network.restore` calls fail | bug easy high priority in progress | The following code will fail at the second restore:
```Python
In [1]: from brian2 import *
In [2]: G = NeuronGroup(1, 'v: 1')
In [3]: store()
In [4]: restore()
In [5]: restore()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-5-ad7bb0bcaaad> in <module>()
----> 1 restore()
/home/marcel/programming/brian2/brian2/core/magic.pyc in restore(name, filename)
411 Network.restore
412 '''
--> 413 magic_network.restore(name=name, filename=filename, level=1)
414
415
/home/marcel/programming/brian2/brian2/core/magic.pyc in restore(self, name, filename, level)
244 '''
245 self._update_magic_objects(level=level+1)
--> 246 super(MagicNetwork, self).restore(name=name, filename=filename)
247 self.objects[:] = []
248
/home/marcel/programming/brian2/brian2/core/base.pyc in device_override_decorated_function(*args, **kwds)
276 return getattr(curdev, name)(*args, **kwds)
277 else:
--> 278 return func(*args, **kwds)
279
280 device_override_decorated_function.__doc__ = func.__doc__
/home/marcel/programming/brian2/brian2/core/network.pyc in restore(self, name, filename)
431 with open(filename, 'rb') as f:
432 state = pickle.load(f)[name]
--> 433 self.t_ = state.pop('0_t')
434 clocks = set([obj.clock for obj in self.objects])
435 restored_objects = set()
KeyError: '0_t'
```
The problem seems to be the `state.pop` which pops the `0_t` value from the stored state (instead of e.g. popping it from a copy). | 1.0 | Repeated `Network.restore` calls fail - The following code will fail at the second restore:
```Python
In [1]: from brian2 import *
In [2]: G = NeuronGroup(1, 'v: 1')
In [3]: store()
In [4]: restore()
In [5]: restore()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-5-ad7bb0bcaaad> in <module>()
----> 1 restore()
/home/marcel/programming/brian2/brian2/core/magic.pyc in restore(name, filename)
411 Network.restore
412 '''
--> 413 magic_network.restore(name=name, filename=filename, level=1)
414
415
/home/marcel/programming/brian2/brian2/core/magic.pyc in restore(self, name, filename, level)
244 '''
245 self._update_magic_objects(level=level+1)
--> 246 super(MagicNetwork, self).restore(name=name, filename=filename)
247 self.objects[:] = []
248
/home/marcel/programming/brian2/brian2/core/base.pyc in device_override_decorated_function(*args, **kwds)
276 return getattr(curdev, name)(*args, **kwds)
277 else:
--> 278 return func(*args, **kwds)
279
280 device_override_decorated_function.__doc__ = func.__doc__
/home/marcel/programming/brian2/brian2/core/network.pyc in restore(self, name, filename)
431 with open(filename, 'rb') as f:
432 state = pickle.load(f)[name]
--> 433 self.t_ = state.pop('0_t')
434 clocks = set([obj.clock for obj in self.objects])
435 restored_objects = set()
KeyError: '0_t'
```
The problem seems to be the `state.pop` which pops the `0_t` value from the stored state (instead of e.g. popping it from a copy). | priority | repeated network restore calls fail the following code will fail at the second restore python in from import in g neurongroup v in store in restore in restore keyerror traceback most recent call last in restore home marcel programming core magic pyc in restore name filename network restore magic network restore name name filename filename level home marcel programming core magic pyc in restore self name filename level self update magic objects level level super magicnetwork self restore name name filename filename self objects home marcel programming core base pyc in device override decorated function args kwds return getattr curdev name args kwds else return func args kwds device override decorated function doc func doc home marcel programming core network pyc in restore self name filename with open filename rb as f state pickle load f self t state pop t clocks set restored objects set keyerror t the problem seems to be the state pop which pops the t value from the stored state instead of e g popping it from a copy | 1 |
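The `state.pop` bug described in this record can be sketched in a few lines (hypothetical dictionaries, not Brian2's actual classes): restoring pops `'0_t'` straight out of the stored snapshot, so the key is gone on the second call, while popping from a copy leaves the snapshot intact.

```python
# Stored snapshot, as produced by a store() call.
stored_states = {'default': {'0_t': 0.0, 'v': [0.0]}}

def restore_buggy(name='default'):
    state = stored_states[name]        # aliases the stored dict
    return state.pop('0_t')            # mutates the snapshot!

def restore_fixed(name='default'):
    state = dict(stored_states[name])  # shallow copy; snapshot untouched
    return state.pop('0_t')
```

Calling `restore_fixed()` repeatedly succeeds, whereas a second `restore_buggy()` raises the `KeyError: '0_t'` seen in the traceback.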
103,686 | 4,183,656,556 | IssuesEvent | 2016-06-23 01:33:45 | DistrictDataLabs/tribe | https://api.github.com/repos/DistrictDataLabs/tribe | closed | Better error handling | priority: high type: bug | Make sure that Tribe only fails silently if it can't parse an email (rather than crashing completely). This usually involves encoding errors. | 1.0 | Better error handling - Make sure that Tribe only fails silently if it can't parse an email (rather than crashing completely). This usually involves encoding errors. | priority | better error handling make sure that tribe only fails silently if it can t parse an email rather than crashing completely this usually involves encoding errors | 1
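One way to get the behavior requested above is to catch per-message encoding errors, skip (and count) the bad message, and keep processing the rest instead of crashing the whole run. Names here are illustrative, not Tribe's actual API:

```python
def parse_emails(raw_messages):
    """Decode a batch of raw messages, skipping any that fail to decode."""
    parsed, skipped = [], 0
    for raw in raw_messages:
        try:
            parsed.append(raw.decode('utf-8'))
        except UnicodeDecodeError:
            skipped += 1   # fail silently for this message only
    return parsed, skipped
```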
61,282 | 3,143,597,059 | IssuesEvent | 2015-09-14 08:13:57 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | EventList operator= incomplete | Component: Framework Misc: Bugfix Misc: Core Priority: High | The assignment operator `EventList::operator=` seems to be incomplete: It does not copy the members `m_specNo` and `refDx` that are part of the base class `ISpectrum`.
The copy constructor of EventList is based on the assignment operator, so it suffers from the same problem.
I do not know of any existing issues caused by this, but on the other hand `EventList` is in use everywhere... | 1.0 | EventList operator= incomplete - The assignment operator `EventList::operator=` seems to be incomplete: It does not copy the members `m_specNo` and `refDx` that are part of the base class `ISpectrum`.
The copy constructor of EventList is based on the assignment operator, so it suffers from the same problem.
I do not know of any existing issues caused by this, but on the other hand `EventList` is in use everywhere... | priority | eventlist operator incomplete the assignment operator eventlist operator seems to be incomplete it does not copy the members m specno and refdx that are part of the base class ispectrum the copy constructor of eventlist is based on the assignment operator so it suffers from the same problem i do not know of any existing issues caused by this but on the other hand eventlist is in use everywhere | 1 |
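The class of bug described in this record can be shown in a hypothetical Python analogue: an assignment helper that copies the subclass's own data but forgets the base-class members (`m_specNo` and `refDx` in the report).

```python
class ISpectrum:
    def __init__(self):
        self.m_specNo = 0
        self.refDx = None

class EventList(ISpectrum):
    def __init__(self, events=()):
        super().__init__()
        self.events = list(events)

    def assign_incomplete(self, other):
        self.events = list(other.events)   # base-class state is lost

    def assign_complete(self, other):
        self.events = list(other.events)
        self.m_specNo = other.m_specNo     # copy the ISpectrum members too
        self.refDx = other.refDx
```

The fix on the C++ side is analogous: have `EventList::operator=` (and the copy constructor built on it) also copy the `ISpectrum` members.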
261,183 | 8,227,633,760 | IssuesEvent | 2018-09-07 00:09:03 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Problem with players who did not complete tutorial in existing world when updated to 7.7 | Medium-High Priority | Getting complaints from people that some players are being started over on worlds they were already on after the update of an existing world (not a new world); although most players were not sent back to the start and kept all their items, inventory, houses, land, etc., some started over. It seems like it's related to tutorial completion.
I started a world in 7.6.3, created a player, and did part of the tutorial, about 6 steps into it. Then I updated to 7.7 and was able to load in fine; however, I did end up starting over at the player creation screen. I can see that would definitely be an issue for someone who might have played for some time but not completed all parts of the tutorial: when the world updates, they have to start over.
A couple of server owners have reported having to remove users that still had a few steps of the tutorial to go, because the world would not load at all, with no dump file, and the console gave no error. (After they heard about the tutorial issue discussed in support, several players on their server said they had not yet finished the tutorial.) They said they deleted the profiles of anyone they knew of who was still in the tutorial, and said most were close to the end of it. Since these crashes were not producing a crash report, I am not sure if it is the same issue or not.
I am not able to repro that a player with an incomplete tutorial will block a game from loading. I tried several times, even taking a character right to the last two steps of the tutorial, but the only thing that happened was that I was set back to the character creation screen in the updated world. On all the worlds I tried this on, the player just started over. It is possible that this was coincidental with some other problem with the player, not the tutorial; we will have to wait and see. I did not have time to test all the other possibilities that were brought up: they did take tools out of the tent, I had not; some had built houses, I had not; and so on. | 1.0 | Problem with players who did not complete tutorial in existing world when updated to 7.7 - Getting complaints from people that some players are being started over on worlds they were already on after the update of an existing world (not a new world); although most players were not sent back to the start and kept all their items, inventory, houses, land, etc., some started over. It seems like it's related to tutorial completion.
I started a world in 7.6.3, created a player, and did part of the tutorial, about 6 steps into it. Then I updated to 7.7 and was able to load in fine; however, I did end up starting over at the player creation screen. I can see that would definitely be an issue for someone who might have played for some time but not completed all parts of the tutorial: when the world updates, they have to start over.
A couple of server owners have reported having to remove users that still had a few steps of the tutorial to go, because the world would not load at all, with no dump file, and the console gave no error. (After they heard about the tutorial issue discussed in support, several players on their server said they had not yet finished the tutorial.) They said they deleted the profiles of anyone they knew of who was still in the tutorial, and said most were close to the end of it. Since these crashes were not producing a crash report, I am not sure if it is the same issue or not.
I am not able to repro that a player with a player with an incomplete tutorial will block a game from loading. I tried several times, even taking a character right to the last two steps in the tutorial but the only thing that happened was that I was set back to the character creation screen in the updated world. All the worlds I tried this on the player just started over. Its possible that this was coincidental with some other problem with the player not the tutorial have to wait and see. I did not have time to test all the other possibilities that brought up, as they did take tools out of the tent, I had not, some had build houses, I had not and so on. | priority | problem with players who did not complete tutorial in existing world when updated to getting complaints from people that some are getting started over on worlds they were on after world updates to an existing world not a new world although most players were not back at start and had all their items inventory houses land etc started over it seems like its related to tutorial completion i started a world in and created a player and did part of the tutorial about steps into it then i updated to and i was able to load in fine however i did end up starting over at the player creation screen i can see that would definitely be an issue for someone that might have played for some time but not completed all parts the tutorial and when the world updates they have to start over a couple of server owners have reported having to remove users that still had a few steps in tutorial to go but the world would not load at all no dump file and the console gave no error after they heard about the tutorial it talked about in support and several players on their severer said they had not yet finished the tutorial they said they deleted the profiles of anyone still in the tutorial that the knew of and said most were close to the end of the tutorial since these crash were not making a crash report not sure if its the same 
issue or not i am not able to repro that a player with a player with an incomplete tutorial will block a game from loading i tried several times even taking a character right to the last two steps in the tutorial but the only thing that happened was that i was set back to the character creation screen in the updated world all the worlds i tried this on the player just started over its possible that this was coincidental with some other problem with the player not the tutorial have to wait and see i did not have time to test all the other possibilities that brought up as they did take tools out of the tent i had not some had build houses i had not and so on | 1 |
153,083 | 5,874,457,390 | IssuesEvent | 2017-05-15 16:05:15 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Conversions Suggestion: (POWER) - Add Missing Units | Category: Highest Impact Tasks Difficulty: Low Improvement Priority: High Skill: JavaScript Suggestion Topic: Conversions | ## Problem
<!-- Describe the bug or suggestion in detail -->
We recently migrated the conversions API into an interactive UI. However, some units were ditched for various reasons and the legacy code didn't necessarily support all of the units that were desired from users. This is a good opportunity to take the lead in the search engine conversions game.

## Solution
<!-- Describe the steps, or provide a link to an example search -->
1. Add some front-end tests to [spec/conversions_spec.js](https://github.com/duckduckgo/zeroclickinfo-goodies/blob/master/spec/conversions_spec.js). This will test the custom units
2. Add some back-end tests to [t/Conversions.t](https://github.com/duckduckgo/zeroclickinfo-goodies/blob/master/t/Conversions.t). This will test the triggering.
3. Update the triggering in [triggers.yml](https://github.com/duckduckgo/zeroclickinfo-goodies/blob/master/share/goodie/conversions/triggers.yml)
4. Add the custom unit to [Conversions.js (Line 45)](https://github.com/duckduckgo/zeroclickinfo-goodies/blob/master/share/goodie/conversions/conversions.js#L45) and add it to the Units Object in [Conversions.js (Line 166)](https://github.com/duckduckgo/zeroclickinfo-goodies/blob/master/share/goodie/conversions/conversions.js#L166)
### UNITS
- [x] Kilowatt
- [x] Megawatt
- [x] Gigawatt
- [x] Terawatt
- [x] Petawatt
- [x] Exawatt
## People to notify
<!-- Please @mention any relevant people/organizations here:-->
## Get Started
- [x] 1) Claim this issue by commenting below
- [x] 2) Review our [Contributing Guide](https://github.com/duckduckgo/zeroclickinfo-goodies/blob/master/CONTRIBUTING.md)
- [x] 3) [Set up your development environment](https://docs.duckduckhack.com/welcome/setup-dev-environment.html), and fork this repository
- [x] 4) Create a Pull Request
## Resources
- Join [DuckDuckHack Slack](https://quackslack.herokuapp.com/) to ask questions
- Join the [DuckDuckHack Forum](https://forum.duckduckhack.com/) to discuss project planning and Instant Answer metrics
- Read the [DuckDuckHack Documentation](https://docs.duckduckhack.com/) for technical help
<!-- DO NOT REMOVE -->
---
<!-- The Instant Answer ID can be found by clicking the `?` icon beside the Instant Answer result on DuckDuckGo.com -->
Instant Answer Page: https://duck.co/ia/view/conversions
<!-- FILL THIS IN: ^^^^ --> | 1.0 | Conversions Suggestion: (POWER) - Add Missing Units - ## Problem
<!-- Describe the bug or suggestion in detail -->
We recently migrated the conversions API into an interactive UI. However, some units were ditched for various reasons and the legacy code didn't necessarily support all of the units that were desired from users. This is a good opportunity to take the lead in the search engine conversions game.

## Solution
<!-- Describe the steps, or provide a link to an example search -->
1. Add some front-end tests to [spec/conversions_spec.js](https://github.com/duckduckgo/zeroclickinfo-goodies/blob/master/spec/conversions_spec.js). This will test the custom units
2. Add some back-end tests to [t/Conversions.t](https://github.com/duckduckgo/zeroclickinfo-goodies/blob/master/t/Conversions.t). This will test the triggering.
3. Update the triggering in [triggers.yml](https://github.com/duckduckgo/zeroclickinfo-goodies/blob/master/share/goodie/conversions/triggers.yml)
4. Add the custom unit to [Conversions.js (Line 45)](https://github.com/duckduckgo/zeroclickinfo-goodies/blob/master/share/goodie/conversions/conversions.js#L45) and add it to the Units Object in [Conversions.js (Line 166)](https://github.com/duckduckgo/zeroclickinfo-goodies/blob/master/share/goodie/conversions/conversions.js#L166)
### UNITS
- [x] Kilowatt
- [x] Megawatt
- [x] Gigawatt
- [x] Terawatt
- [x] Petawatt
- [x] Exawatt
## People to notify
<!-- Please @mention any relevant people/organizations here:-->
## Get Started
- [x] 1) Claim this issue by commenting below
- [x] 2) Review our [Contributing Guide](https://github.com/duckduckgo/zeroclickinfo-goodies/blob/master/CONTRIBUTING.md)
- [x] 3) [Set up your development environment](https://docs.duckduckhack.com/welcome/setup-dev-environment.html), and fork this repository
- [x] 4) Create a Pull Request
## Resources
- Join [DuckDuckHack Slack](https://quackslack.herokuapp.com/) to ask questions
- Join the [DuckDuckHack Forum](https://forum.duckduckhack.com/) to discuss project planning and Instant Answer metrics
- Read the [DuckDuckHack Documentation](https://docs.duckduckhack.com/) for technical help
<!-- DO NOT REMOVE -->
---
<!-- The Instant Answer ID can be found by clicking the `?` icon beside the Instant Answer result on DuckDuckGo.com -->
Instant Answer Page: https://duck.co/ia/view/conversions
<!-- FILL THIS IN: ^^^^ --> | priority | conversions suggestion power add missing units problem we recently migrated the conversions api into an interactive ui however some units were ditched for various reasons and the legacy code didn t necessarily support all of the units that were desired from users this is a good opportunity to take the lead in the search engine conversions game solution add some front end tests to this will test the custom units add some back end tests to this will test the triggering update the triggering in add the custom unit to and add it to the units object in units kilowatt megawatt gigawatt terawatt petawatt exawatt people to notify get started claim this issue by commenting below review our and fork this repository create a pull request resources join to ask questions join the to discuss project planning and instant answer metrics read the for technical help instant answer page | 1 |
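The power units in the checklist above are fixed powers of ten relative to the watt, so each conversion is just a ratio of two multipliers. A sketch only; the goodie itself keeps its units in `conversions.js`:

```python
WATT_MULTIPLIERS = {
    'watt': 1.0,
    'kilowatt': 1e3,
    'megawatt': 1e6,
    'gigawatt': 1e9,
    'terawatt': 1e12,
    'petawatt': 1e15,
    'exawatt': 1e18,
}

def convert_power(value, from_unit, to_unit):
    # Convert via the base unit: value -> watts -> target unit.
    return value * WATT_MULTIPLIERS[from_unit] / WATT_MULTIPLIERS[to_unit]
```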
82,407 | 3,606,356,228 | IssuesEvent | 2016-02-04 10:53:32 | cfpb/cfgov-refresh | https://api.github.com/repos/cfpb/cfgov-refresh | reopened | Information for servicemembers - Featured Content Molecule - multi link issues | display-bug FEWD priority: high | The two links in the FCM on this page should visually look the same. The first link seems to be lighter weight and also does not get the full mobile link style on smaller screens. Both links should look the same at all screen sizes.
|Large screen|Small screen|
---|---
|<img width="832" alt="screen shot 2016-01-27 at 2 35 19 pm" src="https://cloud.githubusercontent.com/assets/6873734/12625909/e21aa148-c503-11e5-88c2-dc1cb64386fd.png">|<img width="385" alt="screen shot 2016-01-27 at 2 35 41 pm" src="https://cloud.githubusercontent.com/assets/6873734/12625912/e42889c8-c503-11e5-8b6d-23c378ec74ef.png">|
| 1.0 | Information for servicemembers - Featured Content Molecule - multi link issues - The two links in the FCM on this page should visually look the same. The first link seems to be lighter weight and also does not get the full mobile link style on smaller screens. Both links should look the same at all screen sizes.
|Large screen|Small screen|
---|---
|<img width="832" alt="screen shot 2016-01-27 at 2 35 19 pm" src="https://cloud.githubusercontent.com/assets/6873734/12625909/e21aa148-c503-11e5-88c2-dc1cb64386fd.png">|<img width="385" alt="screen shot 2016-01-27 at 2 35 41 pm" src="https://cloud.githubusercontent.com/assets/6873734/12625912/e42889c8-c503-11e5-8b6d-23c378ec74ef.png">|
| priority | information for servicemembers featured content molecule multi link issues the two links in the fcm on this page should visually look the same the first link seems to be lighter weight and also does not get the full mobile link style on smaller screens both links should look the same at all screen sizes large screen small screen img width alt screen shot at pm src width alt screen shot at pm src | 1 |
389,725 | 11,516,369,773 | IssuesEvent | 2020-02-14 04:47:46 | Mounceph99/soen341_project | https://api.github.com/repos/Mounceph99/soen341_project | closed | Create a LEAVE A COMMENT box component for leaving comments on posts | 5 Points high priority user story (sub) | Below every picture displayed on the feed, there is a box that displays the message “Leave a comment”. If the user clicks on this box, they will be prompted to type a message. The user can then click the “Comment” button to add their message to the list of comments or the “Cancel” button to stop writing a message. Once a comment is left on a post, it will appear below the picture and will be viewable by everyone who looks at that picture. #14 | 1.0 | Create a LEAVE A COMMENT box component for leaving comments on posts - Below every picture displayed on the feed, there is a box that displays the message “Leave a comment”. If the user clicks on this box, they will be prompted to type a message. The user can then click the “Comment” button to add their message to the list of comments or the “Cancel” button to stop writing a message. Once a comment is left on a post, it will appear below the picture and will be viewable by everyone who looks at that picture. #14 | priority | create a leave a comment box component for leaving comments on posts below every picture displayed on the feed there is a box that displays the message “leave a comment” if the user clicks on this box they will be prompted to type a message the user can then click the “comment” button to add their message to the list of comments or the “cancel” button to stop writing a message once a comment is left on a post it will appear below the picture and will be viewable by everyone who looks at that picture | 1 |
312,312 | 9,545,676,696 | IssuesEvent | 2019-05-01 17:45:41 | zephyrproject-rtos/meta-zephyr-sdk | https://api.github.com/repos/zephyrproject-rtos/meta-zephyr-sdk | closed | Use prebuilt vendor toolchains instead of building from scratch | area: Toolchains enhancement priority: high | **_Reported by Anas Nashif:_**
# Rationale
In the Zephyr SDK we build all toolchains from source and have to play catch-up with features and hardware support provided by vendor supported toolchains that are available for multiple platforms already.
Users usually skip the Zephyr SDK altogether and use the pre-built binaries from ARM, ISSM or ARC introducing an overhead on the level of support we have for toolchains.
# Proposal
Wherever possible and when compatibility with Zephyr is tested and verified, use pre-built toolchains and integrate them in the single binary Zephyr SDK. This also allows us to provide SDK drops (with limited architecture support initially) on Mac and Windows.
Toolchains to be integrated and test:
- ARM: https://launchpad.net/gcc-arm-embedded
- X86 IAMCU: https://software.intel.com/en-us/articles/issm-toolchain-only-download
- ARC: https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain
# Impact
High
(Imported from Jira SDK-24) | 1.0 | Use prebuilt vendor toolchains instead of building from scratch - **_Reported by Anas Nashif:_**
# Rationale
In the Zephyr SDK we build all toolchains from source and have to play catch-up with features and hardware support provided by vendor supported toolchains that are available for multiple platforms already.
Users usually skip the Zephyr SDK altogether and use the pre-built binaries from ARM, ISSM or ARC introducing an overhead on the level of support we have for toolchains.
# Proposal
Wherever possible and when compatibility with Zephyr is tested and verified, use pre-built toolchains and integrate them in the single binary Zephyr SDK. This also allows us to provide SDK drops (with limited architecture support initially) on Mac and Windows.
Toolchains to be integrated and test:
- ARM: https://launchpad.net/gcc-arm-embedded
- X86 IAMCU: https://software.intel.com/en-us/articles/issm-toolchain-only-download
- ARC: https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain
# Impact
High
(Imported from Jira SDK-24) | priority | use prebuilt vendor toolchains instead of building from scratch reported by anas nashif rationale in the zephyr sdk we build all toolchains from source and have to play catch up with features and hardware support provided by vendor supported toolchains that are available for multiple platforms already users usually skip the zephyr sdk altogether and use the pre built binaries from arm issm or arc introducing an overhead on the level of support we have for toolchains proposal wherever possible and when compatibility with zephyr is tested and verified use pre built toolchains and integrate them in the single binary zephyr sdk this also allows us to provide sdk drops with limited architecture support initially on mac and windows toolchains to be integrated and test arm iamcu arc impact high imported from jira sdk | 1 |
74,393 | 3,439,305,525 | IssuesEvent | 2015-12-14 08:51:07 | Hamcha/maud | https://api.github.com/repos/Hamcha/maud | closed | SSL Proxy | enhancement high priority partially done | Proxy SSL per risorse esterne (immagini / webm)
TODO:
- [x] Download e retrieve di risorse arbitrarie
- [x] Integrazione via formatter
- [x] Rimozione di contenuti vecchi (via cronjob)
EXTRA:
- [x] Pulizia se si supera una soglia massima di spazio
- [x] Pre-scaling delle immagini nel thread (per evitare reflow continui durante il caricamento)
- [ ] Info sulle immagini (dimensione) in Light mode
- [x] Client-side opt-in (opt-out?) via cookie
- [ ] Limite immagini per evitare DOS | 1.0 | SSL Proxy - Proxy SSL per risorse esterne (immagini / webm)
TODO:
- [x] Download e retrieve di risorse arbitrarie
- [x] Integrazione via formatter
- [x] Rimozione di contenuti vecchi (via cronjob)
EXTRA:
- [x] Pulizia se si supera una soglia massima di spazio
- [x] Pre-scaling delle immagini nel thread (per evitare reflow continui durante il caricamento)
- [ ] Info sulle immagini (dimensione) in Light mode
- [x] Client-side opt-in (opt-out?) via cookie
- [ ] Limite immagini per evitare DOS | priority | ssl proxy proxy ssl per risorse esterne immagini webm todo download e retrieve di risorse arbitrarie integrazione via formatter rimozione di contenuti vecchi via cronjob extra pulizia se si supera una soglia massima di spazio pre scaling delle immagini nel thread per evitare reflow continui durante il caricamento info sulle immagini dimensione in light mode client side opt in opt out via cookie limite immagini per evitare dos | 1 |
359,897 | 10,682,312,074 | IssuesEvent | 2019-10-22 04:47:53 | wso2/kubernetes-open-banking | https://api.github.com/repos/wso2/kubernetes-open-banking | reopened | Create Kubernetes artifacts for WSO2 Open Banking API Manager Service | Priority/High Type/New Feature | **Description:**
Create Kubernetes artifacts for WSO2 Open Banking API Manager service. For this purpose, Kubernetes Service [1] definition(s) need to be defined.
[1]: [Kubernetes Service](https://kubernetes.io/docs/concepts/services-networking/service/) | 1.0 | Create Kubernetes artifacts for WSO2 Open Banking API Manager Service - **Description:**
Create Kubernetes artifacts for WSO2 Open Banking API Manager service. For this purpose, Kubernetes Service [1] definition(s) need to be defined.
[1]: [Kubernetes Service](https://kubernetes.io/docs/concepts/services-networking/service/) | priority | create kubernetes artifacts for open banking api manager service description create kubernetes artifacts for open banking api manager service for this purpose kubernetes service definition s need to be defined | 1 |
203,516 | 7,065,260,927 | IssuesEvent | 2018-01-06 17:51:35 | domcermak/progtest | https://api.github.com/repos/domcermak/progtest | closed | Tester | high priority improvement | - [ ] more flag support (--help, -cpp, -c, ... )
- [ ] flag order independence
- [ ] probably created in C or C++
- [ ] on github code and also compiled version | 1.0 | Tester - - [ ] more flag support (--help, -cpp, -c, ... )
- [ ] flag order independence
- [ ] probably created in C or C++
- [ ] on github code and also compiled version | priority | tester more flag support help cpp c flag order independence probably created in c or c on github code and also compiled version | 1
281,758 | 8,698,961,103 | IssuesEvent | 2018-12-05 01:48:27 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | message view: Admin should always be able to edit topic from actions menu. | area: message-editing bug help wanted priority: high | Logged in as Iago:

The first entry should be `View source / Edit topic`, and it should allow editing the topic when you click on it. (When `Allow message editing` is something other than `Never`.)
Looks like this regardless of whether `Users can edit the topic of any message` is checked.
| 1.0 | message view: Admin should always be able to edit topic from actions menu. - Logged in as Iago:

The first entry should be `View source / Edit topic`, and it should allow editing the topic when you click on it. (When `Allow message editing` is something other than `Never`.)
Looks like this regardless of whether `Users can edit the topic of any message` is checked.
| priority | message view admin should always be able to edit topic from actions menu logged in as iago the first entry should be view source edit topic and it should allow editing the topic when you click on it when allow message editing is something other than never looks like this regardless of whether users can edit the topic of any message is checked | 1 |
44,932 | 2,918,815,026 | IssuesEvent | 2015-06-24 10:32:19 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | opened | Separate functions to load Embedded IDF and load Embedded Parameters | Core Framework High Priority NeXus | At present, one function ExperimentInfo::loadInstrumentInfoNexus loads both the Embedded Nexus file and the Embedded Parameters. The code for these two tasks needs to be put into two separate functions called within this function to enable issue #12255 can be dealt with. | 1.0 | Separate functions to load Embedded IDF and load Embedded Parameters - At present, one function ExperimentInfo::loadInstrumentInfoNexus loads both the Embedded Nexus file and the Embedded Parameters. The code for these two tasks needs to be put into two separate functions called within this function to enable issue #12255 can be dealt with. | priority | separate functions to load embedded idf and load embedded parameters at present one function experimentinfo loadinstrumentinfonexus loads both the embedded nexus file and the embedded parameters the code for these two tasks needs to be put into two separate functions called within this function to enable issue can be dealt with | 1 |
454,187 | 13,096,229,735 | IssuesEvent | 2020-08-03 15:20:56 | juntofoundation/junto-mobile | https://api.github.com/repos/juntofoundation/junto-mobile | closed | When you're creating an expression and you click the lotus, there needs to be a warning that everything you've typed/created will be deleted (you may not know that that's what's happening when you just click on the lotus)) | High Priority bug | (and eventually we would have an option to save this expression as a draft) | 1.0 | When you're creating an expression and you click the lotus, there needs to be a warning that everything you've typed/created will be deleted (you may not know that that's what's happening when you just click on the lotus)) - (and eventually we would have an option to save this expression as a draft) | priority | when you re creating an expression and you click the lotus there needs to be a warning that everything you ve typed created will be deleted you may not know that that s what s happening when you just click on the lotus and eventually we would have an option to save this expression as a draft | 1 |
390,949 | 11,566,119,511 | IssuesEvent | 2020-02-20 11:51:35 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | Buy participation tokens > My REP balance deducts after about a minute but my participation tokens don't show up here | Bug Needed for V2 launch Priority: High | **Read to end of ticket**
Here it should show 11

it is updating in the total PT tokens purchased but not showing that I own them

After pushing time forward a week it's now working correctly | 1.0 | Buy participation tokens > My REP balance deducts after about a minute but my participation tokens don't show up here - **Read to end of ticket**
Here it should show 11

it is updating in the total PT tokens purchased but not showing that I own them

After pushing time forward a week it's now working correctly | priority | buy participation tokens my rep balance deducts after about a minute but my participation tokens don t show up here read to end of ticket here it should show it is updating in the total pt tokens purchased but not showing that i own them after pushing time forward a week it s now working correctly | 1 |
420,444 | 12,238,181,585 | IssuesEvent | 2020-05-04 19:20:51 | balena-io/balena-supervisor | https://api.github.com/repos/balena-io/balena-supervisor | reopened | Supervisor prints a new Buffer deprecation warning with node10 | High priority Needs more investigation type/bug | This comes from the following: https://github.com/sidorares/dbus-native/issues/271#issuecomment-580262777
The fix it seems would be to move away from dbus-native. A possible replacement is https://github.com/Shouqun/node-dbus which does not rely on `put`. | 1.0 | Supervisor prints a new Buffer deprecation warning with node10 - This comes from the following: https://github.com/sidorares/dbus-native/issues/271#issuecomment-580262777
The fix it seems would be to move away from dbus-native. A possible replacement is https://github.com/Shouqun/node-dbus which does not rely on `put`. | priority | supervisor prints a new buffer deprecation warning with this comes from the following the fix it seems would be to move away from dbus native a possible replacement is which does not rely on put | 1 |
202,999 | 7,057,102,738 | IssuesEvent | 2018-01-04 15:19:47 | twosigma/beakerx | https://api.github.com/repos/twosigma/beakerx | closed | [IPKernelApp] ERROR | No such comm target registered: beaker.autotranslation | Bug Notebook Extension Priority High | ```
[IPKernelApp] ERROR | No such comm target registered: beaker.autotranslation
```
i get this when i open doc/groovy/EasyForm.ipynb | 1.0 | [IPKernelApp] ERROR | No such comm target registered: beaker.autotranslation - ```
[IPKernelApp] ERROR | No such comm target registered: beaker.autotranslation
```
i get this when i open doc/groovy/EasyForm.ipynb | priority | error no such comm target registered beaker autotranslation error no such comm target registered beaker autotranslation i get this when i open doc groovy easyform ipynb | 1 |
653,362 | 21,580,354,025 | IssuesEvent | 2022-05-02 18:01:51 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] Select-All and type in a XB RTE that is filled with content to max chars does not work as expected | bug priority: high CI validate | ### Bug Report
#### Crafter CMS Version
4.0.0-SNAPSHOT
#### Date of Build
4/19/2022
#### Describe the bug
Select-All and type in a XB RTE that is filled with content to max chars does not work as expected
Selecting All content in a RTE field that is filled with its max char count fires snack events on every key click.
There are two issues here:
1. One snack is sufficient
2. More importantly typing should cause the selected content to be replaced with the typed characters
#### To Reproduce
Steps to reproduce the behavior:
1. Create an RTE in XB with a max char count and do not set a default (the system will fill the RTE for you)
3. In XB click into the RTE and Select-All on the content
4. Type
5. See the error
#### Logs
N/A
#### Screenshots
N/A
| 1.0 | [studio-ui] Select-All and type in a XB RTE that is filled with content to max chars does not work as expected - ### Bug Report
#### Crafter CMS Version
4.0.0-SNAPSHOT
#### Date of Build
4/19/2022
#### Describe the bug
Select-All and type in a XB RTE that is filled with content to max chars does not work as expected
Selecting All content in a RTE field that is filled with its max char count fires snack events on every key click.
There are two issues here:
1. One snack is sufficient
2. More importantly typing should cause the selected content to be replaced with the typed characters
#### To Reproduce
Steps to reproduce the behavior:
1. Create an RTE in XB with a max char count and do not set a default (the system will fill the RTE for you)
3. In XB click into the RTE and Select-All on the content
4. Type
5. See the error
#### Logs
N/A
#### Screenshots
N/A
| priority | select all and type in a xb rte that is filled with content to max chars does not work as expected bug report crafter cms version snapshot date of build describe the bug select all and type in a xb rte that is filled with content to max chars does not work as expected selecting all content in a rte field that is filled with its max char count fires snack events on every key click there are two issues here one snack is sufficient more importantly typing should cause the selected content to be replaced with the typed characters to reproduce steps to reproduce the behavior create an rte in xb with a max char count and do not set a default the system will fill the rte for you in xb click into the rte and select all on the content type see the error logs n a screenshots n a | 1
272,573 | 8,515,036,056 | IssuesEvent | 2018-10-31 20:20:46 | NicholasNagy/Parent_Teacher_Web_App | https://api.github.com/repos/NicholasNagy/Parent_Teacher_Web_App | closed | Capability to add friends and go to friend's walls | High Priority | **User Story:**
As a user, I want to be able to post to other people's walls.
**Description:**
Add functionality to add friends using emails. Add functionality to see all of one's friends, with links, so that you can access friends' walls when you click on them.
Prerequisite: Issue #71
Points: 5 | 1.0 | Capability to add friends and go to friend's walls - **User Story:**
As a user, I want to be able to post to other people's walls.
**Description:**
Add functionality to add friends using emails. Add functionality to see all of one's friends, with links, so that you can access friends' walls when you click on them.
Prerequisite: Issue #71
Points: 5 | priority | capability to add friends and go to friend s walls user story as a user i want to be able to post to other people s walls description add functionality to add friends using emails add functionality to see all of friends with links so that you can access friends walls when you click on it prerequisite issue points | 1 |
505,627 | 14,643,149,285 | IssuesEvent | 2020-12-25 14:51:53 | webpack/webpack-cli | https://api.github.com/repos/webpack/webpack-cli | closed | [Feature]: Implement api for reporters and implement `stylish` reporter | Feature Priority: High Semver: minor ⚙️ enhancement | **Is your feature request related to a problem? Please describe.**
Implement api for `reporter` to allow people to choose webpack output.
**Describe the solution you'd like**
Just add option `--reporter [name]`/`--reporter path/to/reporter.js`.
Also have built-in reporters, example `standard` (maybe best name) and `stylish`.
**Describe alternatives you've considered**
Now we have only https://github.com/webpack-contrib/webpack-stylish as a plugin. It is a bad solution; reporters should be part of `webpack-cli`.
**Additional context**
Link on https://github.com/webpack-contrib/webpack-stylish
In the near future I will archive the `webpack-stylish` and `webpack-command` repos to avoid misleading users. | 1.0 | [Feature]: Implement api for reporters and implement `stylish` reporter - **Is your feature request related to a problem? Please describe.**
Implement api for `reporter` to allow people to choose webpack output.
**Describe the solution you'd like**
Just add option `--reporter [name]`/`--reporter path/to/reporter.js`.
Also have built-in reporters, example `standard` (maybe best name) and `stylish`.
**Describe alternatives you've considered**
Now we have only https://github.com/webpack-contrib/webpack-stylish as a plugin. It is a bad solution; reporters should be part of `webpack-cli`.
**Additional context**
Link on https://github.com/webpack-contrib/webpack-stylish
In near future i will archived `webpack-stylish` and `webpack-command` repos to avoid misleading. | priority | implement api for reporters and implement stylish reporter is your feature request related to a problem please describe implement api for reporter to allow people choose webpack output describe the solution you d like just add option reporter reporter path to reporter js also have built in reporters example standard maybe best name and stylish describe alternatives you ve considered now we have only as plugin it is bad solution reporters should be part of webpack cli additional context link on in near future i will archived webpack stylish and webpack command repos to avoid misleading | 1 |
127,055 | 5,012,656,351 | IssuesEvent | 2016-12-13 12:05:35 | BinPar/PPD | https://api.github.com/repos/BinPar/PPD | closed | IMPRESIÓN PDF GENERADO: AUMENTAR EL TAMAÑO DE LETRA CUANDO SE IMPRIME | Priority: High v1.4 | Es una petición ya realizada y ahora hecha desde otras filiales.
Solicitan que el PDF que se imprime tenga un tipo de letra más grande ya que resulta de difícil lectura.
@CristianBinpar | 1.0 | IMPRESIÓN PDF GENERADO: AUMENTAR EL TAMAÑO DE LETRA CUANDO SE IMPRIME - Es una petición ya realizada y ahora hecha desde otras filiales.
Solicitan que el PDF que se imprime tenga un tipo de letra más grande ya que resulta de difícil lectura.
@CristianBinpar | priority | impresión pdf generado aumentar el tamaño de letra cuando se imprime es una petición ya realizada y ahora hecha desde otras filiales solicitan que el pdf que se imprime tenga un tipo de letra más grande ya que resulta de difícil lectura cristianbinpar | 1 |
483,547 | 13,926,141,635 | IssuesEvent | 2020-10-21 17:51:26 | AY2021S1-CS2113T-F14-3/tp | https://api.github.com/repos/AY2021S1-CS2113T-F14-3/tp | closed | Addition of bus stop search frequency to enhance user experience | priority.High type.Task | Display suggestions based on the most frequent location the user searches for (using inputs given in /route and /bus feature)
Reset search history command given | 1.0 | Addition of bus stop search frequency to enhance user experience - Display suggestions based on the most frequent location the user searches for (using inputs given in /route and /bus feature)
Reset search history command given | priority | addition of bus stop search frequency to enhance user experience display suggestions based on the most frequent location the user searches for using inputs given in route and bus feature reset search history command given | 1 |
233,744 | 7,704,140,217 | IssuesEvent | 2018-05-21 11:03:22 | cybercongress/cyber-search | https://api.github.com/repos/cybercongress/cyber-search | opened | Refactor contract summary to support chain {customName} logic | Priority: High Status: Available Type: Enhancement | Refactor contract summary to support chain {customName} logic | 1.0 | Refactor contract summary to support chain {customName} logic - Refactor contract summary to support chain {customName} logic | priority | refactor contract summary to support chain customname logic refactor contract summary to support chain customname logic | 1 |
782,797 | 27,507,243,285 | IssuesEvent | 2023-03-06 05:15:38 | AY2223S2-CS2103T-T12-2/tp | https://api.github.com/repos/AY2223S2-CS2103T-T12-2/tp | opened | Remove the medical records of a patient | type.Story priority.High | As a staff member, I want to remove the medical records of a patient so that I can correct the mistakes in medical cases, allergies and medical conditions. | 1.0 | Remove the medical records of a patient - As a staff member, I want to remove the medical records of a patient so that I can correct the mistakes in medical cases, allergies and medical conditions. | priority | remove the medical records of a patient as a staff member i want to remove the medical records of a patient so that i can correct the mistakes in medical cases allergies and medical conditions | 1 |
266,992 | 8,378,095,466 | IssuesEvent | 2018-10-06 10:13:12 | CS2103-AY1819S1-F11-4/main | https://api.github.com/repos/CS2103-AY1819S1-F11-4/main | closed | timetable | feature.Timetable priority.High severity.Medium | to check if timetable input is valid
to do timetable ui for viewing timetable directly with person
| 1.0 | timetable - to check if timetable input is valid
to do timetable ui for viewing timetable directly with person
| priority | timetable to check if timetable input is valid to do timetable ui for viewing timetable directly with person | 1 |
34,756 | 2,787,281,716 | IssuesEvent | 2015-05-08 03:43:52 | punongbayan-araullo/tickets | https://api.github.com/repos/punongbayan-araullo/tickets | opened | Cannot find the file (Otsuka 2013 Audit - voyager) | priority - high status - accepted system - archives type - bug | Cannot find the file (Otsuka 2013 Audit - voyager) | 1.0 | Cannot find the file (Otsuka 2013 Audit - voyager) - Cannot find the file (Otsuka 2013 Audit - voyager) | priority | cannot find the file otsuka audit voyager cannot find the file otsuka audit voyager | 1 |
487,811 | 14,059,943,353 | IssuesEvent | 2020-11-03 04:37:50 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | [IS 5.10] Slowness in Filtering Users with Email Claim Using SCIM 2.0 API | Complexity/Low Component/Kernel Priority/Highest Severity/Critical bug consumer-exp | **Describe the issue:**
There is a slowness observed when filtering users with the email claim using the SCIM API. This has been tested with a user base which has 200K plus users.
Please find the observations with different indexes:
INDEX | User Creation | User Search
-- | -- | --
UM_USER_ATTRIBUTE(UM_USER_ID, UM_ATTR_NAME, UM_ATTR_VALUE, UM_TENANT_ID); | 6 sec | 9 sec
UM_USER_ATTRIBUTE(UM_USER_ID, UM_ATTR_NAME, UM_ATTR_VALUE, UM_PROFILE_ID, UM_TENANT_ID); | 1 sec | 11 sec
UM_USER_ATTRIBUTE(UM_ATTR_NAME, UM_ATTR_VALUE); | 370ms | 127 ms
**Environment information:**
- Product Version: IS 5.10.0
- Database: MySql
- Userstore: JDBC
---
| 1.0 | [IS 5.10] Slowness in Filtering Users with Email Claim Using SCIM 2.0 API - **Describe the issue:**
There is a slowness observed when filtering users with the email claim using the SCIM API. This has been tested with a user base which has 200K plus users.
Please find the observations with different indexes:
INDEX | User Creation | User Search
-- | -- | --
UM_USER_ATTRIBUTE(UM_USER_ID, UM_ATTR_NAME, UM_ATTR_VALUE, UM_TENANT_ID); | 6 sec | 9 sec
UM_USER_ATTRIBUTE(UM_USER_ID, UM_ATTR_NAME, UM_ATTR_VALUE, UM_PROFILE_ID, UM_TENANT_ID); | 1 sec | 11 sec
UM_USER_ATTRIBUTE(UM_ATTR_NAME, UM_ATTR_VALUE); | 370ms | 127 ms
**Environment information:**
- Product Version: IS 5.10.0
- Database: MySql
- Userstore: JDBC
---
| priority | slowness in filtering users with email claim using scim api describe the issue there is a slowness observed when filtering users with email claim using scim api this has been tested with a user base which has plus users please find the observations with different indexes index user creation user search um user attribute um user id um attr name um attr value um tenant id sec sec um user attribute um user id um attr name um attr value um profile id um tenant id sec sec um user attribute um attr name um attr value ms environment information product version is database mysql userstore jdbc | 1 |
706,500 | 24,273,907,152 | IssuesEvent | 2022-09-28 12:32:28 | AY2223S1-CS2103T-T12-2/tp | https://api.github.com/repos/AY2223S1-CS2103T-T12-2/tp | closed | Add skeletal Project Portfolio Page (PPP) | type.Task priority.High | ## Tasks
- Headings are enough. Can write `to be added soon` as placeholders for content
- PDF conversion not required
- Refer to [this](https://nus-cs2103-ay2223s1.github.io/website/schedule/week7/project.html#4-add-a-skeletal-ppp) for details
### Team checklist
- [x] Jason #12
- [x] Shenyi #20
- [x] Santosh #22
- [x] Yun Ru #23
- [x] Pep #13 | 1.0 | Add skeletal Project Portfolio Page (PPP) - ## Tasks
- Headings are enough. Can write `to be added soon` as placeholders for content
- PDF conversion not required
- Refer to [this](https://nus-cs2103-ay2223s1.github.io/website/schedule/week7/project.html#4-add-a-skeletal-ppp) for details
### Team checklist
- [x] Jason #12
- [x] Shenyi #20
- [x] Santosh #22
- [x] Yun Ru #23
- [x] Pep #13 | priority | add skeletal project portfolio page ppp tasks headings are enough can write to be added soon as placeholders for content pdf conversion not required refer to for details team checklist jason shenyi santosh yun ru pep | 1 |
182,663 | 6,672,343,305 | IssuesEvent | 2017-10-04 11:12:55 | canonical-websites/tutorials.ubuntu.com | https://api.github.com/repos/canonical-websites/tutorials.ubuntu.com | closed | Sort by Least difficult first does not seem to work | Priority: High Status: Triaged Type: Bug | "How to verify your Ubuntu download!"
Comes up first when it's got a 3 star rating
| 1.0 | Sort by Least difficult first does not seem to work - "How to verify your Ubuntu download!"
Comes up first when it's got a 3 star rating
| priority | sort by least difficult first does not seem to work how to verify your ubuntu download comes up first when it s got a star rating | 1 |
560,548 | 16,599,438,686 | IssuesEvent | 2021-06-01 17:15:55 | pw-software-engineering/n-team | https://api.github.com/repos/pw-software-engineering/n-team | closed | [Server] Move connection strings | Priority: high | - [x] Move connection strings for tests to appsettings.json
- [x] Add error property for each 4** response
Time spent: 1h
| 1.0 | [Server] Move connection strings - - [x] Move connection strings for tests to appsettings.json
- [x] Add error property for each 4** response
Time spent: 1h
| priority | move connection strings move connection strings for tests to appsettings json add error property for each response time spent | 1 |
229,999 | 7,603,134,345 | IssuesEvent | 2018-04-29 11:12:22 | TCA-Team/TumCampusApp | https://api.github.com/repos/TCA-Team/TumCampusApp | opened | Make sure FCM token is uploaded | High Priority :fire_engine: Server :house_with_garden: | Right now uploading the RSA public key & FCM token relies on an activated token. This unfortunately leads to the limitation, that the app does not alert (Alarmierung) people who have not activated the app.
The behavior should be changed, so that the rsa public key and FCM token is uploaded on first start, regardless if a token has been activated. | 1.0 | Make sure FCM token is uploaded - Right now uploading the RSA public key & FCM token relies on an activated token. This unfortunately leads to the limitation, that the app does not alert (Alarmierung) people who have not activated the app.
The behavior should be changed, so that the rsa public key and FCM token is uploaded on first start, regardless if a token has been activated. | priority | make sure fcm token is uploaded right now uploading the rsa public key fcm token relies on an activated token this unfortunately leads to the limitation that the app does not alert alarmierung people who have not activated the app the behavior should be changed so that the rsa public key and fcm token is uploaded on first start regardless if a token has been activated | 1 |
247,434 | 7,918,646,968 | IssuesEvent | 2018-07-04 13:58:22 | dojot/dojot | https://api.github.com/repos/dojot/dojot | opened | Boolean attribute: no validation in published data | Priority:High Team:Backend Team:Frontend Type:Bug | The Boolean attribute
- accepts and displays the values "true", true, "false", 1
- accepts but does not display values 0, false

- but also accepts "sim", "qq coisa" ...


Expected result:

| 1.0 | Boolean attribute: no validation in published data - The Boolean attribute
- accepts and displays the values "true", true, "false", 1
- accepts but does not display values 0, false

- but also accepts "sim", "qq coisa" ...


Expected result:

| priority | boolean attribute no validation in published data the boolean attribute accepts and displays the values true true false accepts but does not display values false but also accept sim qq coisa expected result | 1 |
314,334 | 9,595,590,765 | IssuesEvent | 2019-05-09 16:24:45 | CosminNechifor/IKHNAIE | https://api.github.com/repos/CosminNechifor/IKHNAIE | closed | Implement the component related functions taking into account the state management | High Priority | Parent issue: #11
## State management
The **state** field is gonna be one of the most important fields of a component, because it allows and restricts which actors have access to the components.

## Functions that have to be implemented:
- [x] updateComponentName
- [x] updateComponentExpiration
- [x] updateComponentPrice
- [x] updateOtherInformation
- [x] addToOtherComponent
- [x] removeComponentFromOtherComponent
- [x] flagAsBroken (explained in: https://github.com/CosminNechifor/IKHNAIE/issues/14#issuecomment-483783517)
- [x] flagAsExpired (explained in: https://github.com/CosminNechifor/IKHNAIE/issues/14#issuecomment-483847953)
- [x] repair (explained here: https://github.com/CosminNechifor/IKHNAIE/issues/14#issuecomment-483783517)
## Events the ``ComponentContract`` will emit while performing the functions:
------
- **ComponentCreated** → emitted when we create a component
- **ComponentUpdated** → emitted when we change the value of one field. Mostly when the methods above are being called.
- **ComponentSubmitedForSale** → emitted when a component is submitted for sale.
- **ComponentWasBought** → emitted when someone else bought the component (**This is market related so it wont be added to the ``Component`` contract**)
- **ComponentRemovedFromMarket** → emitted when a component was removed by the owner from the market (**Market related so it wont be added to the ``Component`` contract**)
- **OwnershipTransferred** → emitted when a component is bought.
| 1.0 | Implement the component related functions taking into account the state management - Parent issue: #11
## State management
The **state** field is gonna be one of the most important fields of a component, because it allows and restricts which actors have access to the components.

## Functions that have to be implemented:
- [x] updateComponentName
- [x] updateComponentExpiration
- [x] updateComponentPrice
- [x] updateOtherInformation
- [x] addToOtherComponent
- [x] removeComponentFromOtherComponent
- [x] flagAsBroken (explained in: https://github.com/CosminNechifor/IKHNAIE/issues/14#issuecomment-483783517)
- [x] flagAsExpired (explained in: https://github.com/CosminNechifor/IKHNAIE/issues/14#issuecomment-483847953)
- [x] repair (explained here: https://github.com/CosminNechifor/IKHNAIE/issues/14#issuecomment-483783517)
## Events the ``ComponentContract`` will emit while performing the functions:
------
- **ComponentCreated** → emitted when we create a component
- **ComponentUpdated** → emitted when we change the value of one field. Mostly when the methods above are being called.
- **ComponentSubmitedForSale** → emitted when a component is submitted for sale.
- **ComponentWasBought** → emitted when someone else bought the component (**This is market related so it wont be added to the ``Component`` contract**)
- **ComponentRemovedFromMarket** → emitted when a component was removed by the owner from the market (**Market related so it wont be added to the ``Component`` contract**)
- **OwnershipTransferred** → emitted when a component is bought.
| priority | implement the component related functions taking into account the state management parent issue state management the state field is gonna be one of the must important fields of a component because it allows and restricts which actors have access to the components functions that have to be implemented updatecomponentname updatecomponentexpiration updatecomponentprice updateotherinformation addtoothercomponent removecomponentfromothercomponent flagasbroken explained in flagasexpired explained in repair explained here events the componentcontract will emit while performing the functions componentcreated rarr emitted when we create a component componentupdated rarr emitted when we change the value of one field musty when the methods above are being called componentsubmitedforsale rarr emitted when a component is submitted for sale componentwasbought rarr emitted when someone else bought the component this is market related so it wont be added to the component contract componentremovedfrommarket rarr emitted when a component was removed by the owner from the market market related so it wont be added to the component contract ownershiptransferred rarr emitted when a component is bought | 1 |
237,956 | 7,768,614,004 | IssuesEvent | 2018-06-03 19:56:04 | azerothcore/azerothcore-wotlk | https://api.github.com/repos/azerothcore/azerothcore-wotlk | closed | Stuck at "Retrieving character list" for 5 minutes | Priority - High |
**Description**:
Irregularly, but often enough, when we launch AzerothCore we are stuck at "retrieving character list" just after login on our account. This concerns all the people logging in (players).
**Current behaviour**:
We are stuck during 2 minutes to 5 minutes (maybe more, sometimes we are stuck for more and it times out - this is not verified). Once we are able to choose our characters and enter the game, it never happens again (see **Remarks** below).
**Expected behaviour**:
This should never happen
**Remarks**:
- This has been noticed by MANY people in the discord, by me personally, by AyaseCore too.
- @AyaseCore told me this didn't occur on SunwellCore. This needs to be verified (same for mangos, cmangos, trinitycore etc)
- It happened to me only when I didn't start my server for months. Or switched to another unused for a while database (like an older characters db). **So I think this is related to some MySQL cache or something like that** - need opinions on this.
**Steps to reproduce the problem**:
I don't know, it seems like people here have a similar issue (not sure if it's the same) https://github.com/azerothcore/azerothcore-wotlk/issues/711#issuecomment-362535892
**Branch(es)**: 0.x / 1.x / master (Specify the branch(es) affected by this issue)
<!-- NEVER WRITE "LATEST", ALWAYS PUT THE ACTUAL VALUE INSTEAD -->
<!-- NEVER WRITE "LATEST", ALWAYS PUT THE ACTUAL VALUE INSTEAD -->
<!-- NEVER WRITE "LATEST", ALWAYS PUT THE ACTUAL VALUE INSTEAD -->
**AC hash/commit**: <!-- IF YOU DO NOT FILL THIS OUT, WE WILL CLOSE YOUR ISSUE! -->
**Operating system**:
On windows or on linux, same thing
PS: I started writing this issue and noticed the other issue afterwards so I let this one to refresh the problem https://github.com/azerothcore/azerothcore-wotlk/issues/711#issuecomment-362535892
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/58465793-stuck-at-retrieving-character-list-for-5-minutes?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github).
</bountysource-plugin> | 1.0 | Stuck at "Retrieving character list" for 5 minutes -
**Description**:
Irregularly, but often enough, when we launch AzerothCore we are stuck at "retrieving character list" just after login on our account. This concerns all the people logging in (players).
**Current behaviour**:
We are stuck during 2 minutes to 5 minutes (maybe more, sometimes we are stuck for more and it times out - this is not verified). Once we are able to choose our characters and enter the game, it never happens again (see **Remarks** below).
**Expected behaviour**:
This should never happen
**Remarks**:
- This has been noticed by MANY people in the discord, by me personally, by AyaseCore too.
- @AyaseCore told me this didn't occur on SunwellCore. This needs to be verified (same for mangos, cmangos, trinitycore etc)
- It happened to me only when I didn't start my server for months. Or switched to another unused for a while database (like an older characters db). **So I think this is related to some MySQL cache or something like that** - need opinions on this.
**Steps to reproduce the problem**:
I don't know, it seems like people here have a similar issue (not sure if it's the same) https://github.com/azerothcore/azerothcore-wotlk/issues/711#issuecomment-362535892
**Branch(es)**: 0.x / 1.x / master (Specify the branch(es) affected by this issue)
<!-- NEVER WRITE "LATEST", ALWAYS PUT THE ACTUAL VALUE INSTEAD -->
<!-- NEVER WRITE "LATEST", ALWAYS PUT THE ACTUAL VALUE INSTEAD -->
<!-- NEVER WRITE "LATEST", ALWAYS PUT THE ACTUAL VALUE INSTEAD -->
**AC hash/commit**: <!-- IF YOU DO NOT FILL THIS OUT, WE WILL CLOSE YOUR ISSUE! -->
**Operating system**:
On windows or on linux, same thing
PS: I started writing this issue and noticed the other issue afterwards so I let this one to refresh the problem https://github.com/azerothcore/azerothcore-wotlk/issues/711#issuecomment-362535892
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/58465793-stuck-at-retrieving-character-list-for-5-minutes?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github).
</bountysource-plugin> | priority | stuck at retrieving character list for minutes description irregularly but often enough when we launch azerothcore we are stuck at retrieving character list just after login on our account this concerns all the people logging in players current behaviour we are stuck during minutes to minutes maybe more sometimes we are stuck for more and it times out this is not verified once we are able to choose our characters and enter the game it never happens again see remarks below expected behaviour this should never happen remarks this has been noticed by many people in the discord by me personally by ayasecore too ayasecore told me this didn t occur on sunwellcore this need to be verified same for mangos cmangos trinitycore etc it happened to me only when i didn t start my server for months or switched to another unused for a while database like an older characters db so i think this is related to some mysql cache or something like that need opinions on this steps to reproduce the problem i don t know it seems like people here have a similar issue not sure if it s the same branch es x x master specify the branch es affected by this issue ac hash commit operating system on windows or on linux same thing ps i started writing this issue and noticed the other issue afterwards so i let this one to refresh the problem want to back this issue we accept bounties via | 1 |
224,530 | 7,471,628,732 | IssuesEvent | 2018-04-03 09:52:18 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Thread requirements in RFC2460 | area: Networking enhancement priority: high | **_Reported by Gajinder Vij:_**
The Thread Protocol imposes the following requirements on RFC2460:
RFC2460.3: IPv6 Header Format
RFC2460.4: IPv6 Extension Headers
RFC2460.4.1: Extension Header Order
RFC2460.4.2: Options
RFC2460.4.3: Hop-by-Hop Options Header
RFC2460.4.6: Destination Options Header
RFC2460.4.7: No Next Header
RFC2460.5: Packet Size Issues
Not Required
- Path MTU Discovery is not required.
- Only Border Routers are required to receive packets as large as 1500 bytes.
- IPv6-to-IPv4 tunneling and fragmentation is not required.
RFC2460.8: Upper-Layer Protocol Issues
RFC2460.8.1: Upper-Layer Checksums
RFC2460.8.2: Maximum Packet Lifetime
RFC2460.8.3: Maximum Upper-Layer Payload Size
RFC2460.Appendix B: Formatting Guidelines for Options
(Imported from Jira ZEP-835) | 1.0 | Thread requirements in RFC2460 - **_Reported by Gajinder Vij:_**
The Thread Protocol imposes the following requirements on RFC2460:
RFC2460.3: IPv6 Header Format
RFC2460.4: IPv6 Extension Headers
RFC2460.4.1: Extension Header Order
RFC2460.4.2: Options
RFC2460.4.3: Hop-by-Hop Options Header
RFC2460.4.6: Destination Options Header
RFC2460.4.7: No Next Header
RFC2460.5: Packet Size Issues
Not Required
- Path MTU Discovery is not required.
- Only Border Routers are required to receive packets as large as 1500 bytes.
- IPv6-to-IPv4 tunneling and fragmentation is not required.
RFC2460.8: Upper-Layer Protocol Issues
RFC2460.8.1: Upper-Layer Checksums
RFC2460.8.2: Maximum Packet Lifetime
RFC2460.8.3: Maximum Upper-Layer Payload Size
RFC2460.Appendix B: Formatting Guidelines for Options
(Imported from Jira ZEP-835) | priority | thread requirements in reported by gajinder vij the thread protocol imposes the following requirements on header format extension headers extension header order options hop by hop options header destination options header no next header packet size issues not required path mtu discovery is not required only border routers are required to receive packets as large as bytes to tunneling and fragmentation is not required upper layer protocol issues upper layer checksums maximum packet lifetime maximum upper layer payload size appendix b formatting guidelines for options imported from jira zep | 1
375,598 | 11,114,320,407 | IssuesEvent | 2019-12-18 08:23:11 | wso2/identity-apps | https://api.github.com/repos/wso2/identity-apps | closed | Profile claims are not synced with Personal info | Priority/High Type/Bug | Affected version: is_5 10 m9, -Identity -app version - 0.1.152-SNAPSHOT
**Description**
Default user Profile (management console) claims are not synced with Personal info-> user profile claims.
**
management console, by using wso2 http://wso2.org/claims, but in user profile get claims through scim dialect, so this may be the cause of this issue.
| 1.0 | Profile claims are not synced with Personal info - Affected version: is_5 10 m9, -Identity -app version - 0.1.152-SNAPSHOT
**Description**
Default user Profile (management console) claims are not synced with Personal info-> user profile claims.
**
management console, by using wso2 http://wso2.org/claims, but in user profile get claims through scim dialect, so this may be the cause of this issue.
| priority | profile claims are not synced with personal info affected version is identity app version snapshot description default user profile management console claims are not synced with personal info user profile claims management console by using but in user profile get claims through scim dialect so this may be the cause of this issue | 1 |
617,569 | 19,394,423,578 | IssuesEvent | 2021-12-18 03:42:32 | devsapp/fc | https://api.github.com/repos/devsapp/fc | closed | Local-debug features lack an error fallback when generating function container resource limits | high priority | ### Problem description
When using the local-debug features, the genContainerResourcesLimitConfig method in the fc-common component is called to generate the local container resource limit configuration (e.g. memory and CPU limits). However, when genContainerResourcesLimitConfig fails, it blocks the whole process; the expected behavior is to emit a warning telling the user that generating the limits failed, rather than blocking the whole run. | 1.0 | Local-debug features lack an error fallback when generating function container resource limits - ### Problem description
When using the local-debug features, the genContainerResourcesLimitConfig method in the fc-common component is called to generate the local container resource limit configuration (e.g. memory and CPU limits). However, when genContainerResourcesLimitConfig fails, it blocks the whole process; the expected behavior is to emit a warning telling the user that generating the limits failed, rather than blocking the whole run. | priority | local-debug features lack an error fallback when generating function container resource limits problem description when using the local-debug features the gencontainerresourceslimitconfig method in the fc-common component is called to generate the local container resource limit configuration e g memory and cpu limits however when gencontainerresourceslimitconfig fails it blocks the whole process the expected behavior is to emit a warning telling the user that generating the limits failed rather than blocking the whole run | 1
124,486 | 4,923,414,573 | IssuesEvent | 2016-11-25 10:15:56 | robotology/ycm | https://api.github.com/repos/robotology/ycm | closed | master branch and tag v0.2.1 / v0.2.0 of ycm not compiling | Component: 3rd Party Platform: Linux Platform: macOS Priority: High Severity: Major Status: in progress Type: Bug | Tested on macOS / Ubuntu .
~~~
Scanning dependencies of target 3rdparty-ovito
[ 65%] Downloading file cmake/FindQCustomPlot.cmake from OVITO (The Open Visualization Tool) git repository (ref 8689fcb1fdd2e8dc748e76d54d3b77a3f87d384c)
-- Cannot download file https://sourceforge.net/p/ovito/git/ci/8689fcb1fdd2e8dc748e76d54d3b77a3f87d384c/tree/cmake/FindQCustomPlot.cmake?format=raw
Network problem or not existing file.
CMake Error at /Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/ycm_download_cmake_FindQCustomPlot_cmake_real.cmake:9 (file):
file DOWNLOAD HASH mismatch
for file: [/Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/downloads/cmake/FindQCustomPlot.cmake]
expected hash: [a59dd4d955a5e775270a4f2656a039ae490e03ed]
actual hash: [da39a3ee5e6b4b0d3255bfef95601890afd80709]
status: [22;"HTTP response code said error"].
Retrying.
-- Cannot download file https://sourceforge.net/p/ovito/git/ci/8689fcb1fdd2e8dc748e76d54d3b77a3f87d384c/tree/cmake/FindQCustomPlot.cmake?format=raw
Network problem or not existing file.
CMake Error at /Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/ycm_download_cmake_FindQCustomPlot_cmake_real.cmake:9 (file):
file DOWNLOAD HASH mismatch
for file: [/Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/downloads/cmake/FindQCustomPlot.cmake]
expected hash: [a59dd4d955a5e775270a4f2656a039ae490e03ed]
actual hash: [da39a3ee5e6b4b0d3255bfef95601890afd80709]
status: [22;"HTTP response code said error"].
Retrying.
CMake Error at /Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/ycm_download_cmake_FindQCustomPlot_cmake.cmake:30 (message):
Cannot download file
https://sourceforge.net/p/ovito/git/ci/8689fcb1fdd2e8dc748e76d54d3b77a3f87d384c/tree/cmake/FindQCustomPlot.cmake?format=raw
Network problem or not existing file.
CMake Error at /Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/ycm_download_cmake_FindQCustomPlot_cmake_real.cmake:9 (file):
file DOWNLOAD HASH mismatch
for file: [/Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/downloads/cmake/FindQCustomPlot.cmake]
expected hash: [a59dd4d955a5e775270a4f2656a039ae490e03ed]
actual hash: [da39a3ee5e6b4b0d3255bfef95601890afd80709]
status: [22;"HTTP response code said error"]
make[2]: *** [3rdparty/ovito/cmake/FindQCustomPlot.cmake] Error 1
make[1]: *** [3rdparty/CMakeFiles/3rdparty-ovito.dir/all] Error 2
~~~
To anyone having the same problem: use the offline version of ycm available at https://github.com/robotology/ycm/releases . | 1.0 | master branch and tag v0.2.1 / v0.2.0 of ycm not compiling - Tested on macOS / Ubuntu .
~~~
Scanning dependencies of target 3rdparty-ovito
[ 65%] Downloading file cmake/FindQCustomPlot.cmake from OVITO (The Open Visualization Tool) git repository (ref 8689fcb1fdd2e8dc748e76d54d3b77a3f87d384c)
-- Cannot download file https://sourceforge.net/p/ovito/git/ci/8689fcb1fdd2e8dc748e76d54d3b77a3f87d384c/tree/cmake/FindQCustomPlot.cmake?format=raw
Network problem or not existing file.
CMake Error at /Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/ycm_download_cmake_FindQCustomPlot_cmake_real.cmake:9 (file):
file DOWNLOAD HASH mismatch
for file: [/Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/downloads/cmake/FindQCustomPlot.cmake]
expected hash: [a59dd4d955a5e775270a4f2656a039ae490e03ed]
actual hash: [da39a3ee5e6b4b0d3255bfef95601890afd80709]
status: [22;"HTTP response code said error"].
Retrying.
-- Cannot download file https://sourceforge.net/p/ovito/git/ci/8689fcb1fdd2e8dc748e76d54d3b77a3f87d384c/tree/cmake/FindQCustomPlot.cmake?format=raw
Network problem or not existing file.
CMake Error at /Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/ycm_download_cmake_FindQCustomPlot_cmake_real.cmake:9 (file):
file DOWNLOAD HASH mismatch
for file: [/Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/downloads/cmake/FindQCustomPlot.cmake]
expected hash: [a59dd4d955a5e775270a4f2656a039ae490e03ed]
actual hash: [da39a3ee5e6b4b0d3255bfef95601890afd80709]
status: [22;"HTTP response code said error"].
Retrying.
CMake Error at /Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/ycm_download_cmake_FindQCustomPlot_cmake.cmake:30 (message):
Cannot download file
https://sourceforge.net/p/ovito/git/ci/8689fcb1fdd2e8dc748e76d54d3b77a3f87d384c/tree/cmake/FindQCustomPlot.cmake?format=raw
Network problem or not existing file.
CMake Error at /Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/ycm_download_cmake_FindQCustomPlot_cmake_real.cmake:9 (file):
file DOWNLOAD HASH mismatch
for file: [/Users/traversaro/src/ycm/build/3rdparty/CMakeFiles/3rdparty-ovito.dir/downloads/cmake/FindQCustomPlot.cmake]
expected hash: [a59dd4d955a5e775270a4f2656a039ae490e03ed]
actual hash: [da39a3ee5e6b4b0d3255bfef95601890afd80709]
status: [22;"HTTP response code said error"]
make[2]: *** [3rdparty/ovito/cmake/FindQCustomPlot.cmake] Error 1
make[1]: *** [3rdparty/CMakeFiles/3rdparty-ovito.dir/all] Error 2
~~~
To anyone having the same problem: use the offline version of ycm available at https://github.com/robotology/ycm/releases . | priority | master branch and tag of ycm not compiling tested on macos ubuntu scanning dependencies of target ovito downloading file cmake findqcustomplot cmake from ovito the open visualization tool git repository ref cannot download file network problem or not existing file cmake error at users traversaro src ycm build cmakefiles ovito dir ycm download cmake findqcustomplot cmake real cmake file file download hash mismatch for file expected hash actual hash status retrying cannot download file network problem or not existing file cmake error at users traversaro src ycm build cmakefiles ovito dir ycm download cmake findqcustomplot cmake real cmake file file download hash mismatch for file expected hash actual hash status retrying cmake error at users traversaro src ycm build cmakefiles ovito dir ycm download cmake findqcustomplot cmake cmake message cannot download file network problem or not existing file cmake error at users traversaro src ycm build cmakefiles ovito dir ycm download cmake findqcustomplot cmake real cmake file file download hash mismatch for file expected hash actual hash status make error make error to anyone having the same problem use the offline version of ycm available at | 1 |
261,325 | 8,229,737,808 | IssuesEvent | 2018-09-07 10:21:09 | buttercup/buttercup-browser-extension | https://api.github.com/repos/buttercup/buttercup-browser-extension | closed | Background refresh of Archive | Priority: High Status: Blocked Type: Enhancement | It seems the Browser extension for Chrome does not automatically refresh the Archive. If I change something through the Desktop App or Mobile it does never appear in the Browsers suggestions.
I have to manually lock the Archive and reopen it for new entries to appear. | 1.0 | Background refresh of Archive - It seems the Browser extension for Chrome does not automatically refresh the Archive. If I change something through the Desktop App or Mobile it does never appear in the Browsers suggestions.
I have to manually lock the Archive and reopen it for new entries to appear. | priority | background refresh of archive it seems the browser extension for chrome does not automatically refresh the archive if i change something through the desktop app or mobile it does never appear in the browsers suggestions i have to manually lock the archive and reopen it for new entries to appear | 1 |
86,551 | 3,727,021,152 | IssuesEvent | 2016-03-06 00:49:49 | sequelpro/sequelpro | https://api.github.com/repos/sequelpro/sequelpro | opened | Block use of LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN environment variable | Priority-High Security | On launch, we should check if `LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN` is set and throw an `NSException`.
See https://github.com/sequelpro/sequelpro/issues/2233 for more details.
Perhaps this environment variable can be used to access passwords stored in keychain.
| 1.0 | Block use of LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN environment variable - On launch, we should check if `LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN` is set and throw an `NSException`.
See https://github.com/sequelpro/sequelpro/issues/2233 for more details.
Perhaps this environment variable can be used to access passwords stored in keychain.
| priority | block use of libmysql enable cleartext plugin environment variable on launch we should check if libmysql enable cleartext plugin is set and throw an nsexception see for more details perhaps this environment variable can be used to access passwords stored in keychain | 1 |
443,953 | 12,804,121,401 | IssuesEvent | 2020-07-03 03:20:17 | zulip/zulip-terminal | https://api.github.com/repos/zulip/zulip-terminal | closed | key error in `helper.py` on start | bug high priority | On starting `zulip-term` I get the following error
```
ERROR:root:225542
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/zulipterminal/cli/run.py", line 296, in main
Controller(zuliprc_path,
File "/usr/lib/python3.8/site-packages/zulipterminal/core.py", line 43, in __init__
self.model = Model(self)
File "/usr/lib/python3.8/site-packages/zulipterminal/model.py", line 115, in __init__
self.unread_counts = classify_unread_counts(self)
File "/usr/lib/python3.8/site-packages/zulipterminal/helper.py", line 414, in classify_unread_counts
if [model.stream_dict[stream_id]['name'],
KeyError: 225542
```
As you can see, this is with Python 3.8. This is on manjaro with zulip-term 0.5.1 installed via `pip`. | 1.0 | key error in `helper.py` on start - On starting `zulip-term` I get the following error
```
ERROR:root:225542
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/zulipterminal/cli/run.py", line 296, in main
Controller(zuliprc_path,
File "/usr/lib/python3.8/site-packages/zulipterminal/core.py", line 43, in __init__
self.model = Model(self)
File "/usr/lib/python3.8/site-packages/zulipterminal/model.py", line 115, in __init__
self.unread_counts = classify_unread_counts(self)
File "/usr/lib/python3.8/site-packages/zulipterminal/helper.py", line 414, in classify_unread_counts
if [model.stream_dict[stream_id]['name'],
KeyError: 225542
```
As you can see, this is with Python 3.8. This is on manjaro with zulip-term 0.5.1 installed via `pip`. | priority | key error in helper py on start on starting zulip term i get the following error error root traceback most recent call last file usr lib site packages zulipterminal cli run py line in main controller zuliprc path file usr lib site packages zulipterminal core py line in init self model model self file usr lib site packages zulipterminal model py line in init self unread counts classify unread counts self file usr lib site packages zulipterminal helper py line in classify unread counts if keyerror as you can see this is with python this is on manjaro with zulip term installed via pip | 1 |
62,207 | 3,176,210,270 | IssuesEvent | 2015-09-24 07:30:16 | aodn/compliance-checker | https://api.github.com/repos/aodn/compliance-checker | closed | IMOS quality_control_convention for quality_control_set = 4 not recognized | bug priority_HIGH | File containing the quality_control_set 4 "WOCE quality control procedure (Multidisciplinary Underway Network – CO 2 measurements)" fail to pass the check although the string is written in the exact same manner.
Note that the space in 'CO2' should be removed. | 1.0 | IMOS quality_control_convention for quality_control_set = 4 not recognized - File containing the quality_control_set 4 "WOCE quality control procedure (Multidisciplinary Underway Network – CO 2 measurements)" fail to pass the check although the string is written in the exact same manner.
Note that the space in 'CO2' should be removed. | priority | imos quality control convention for quality control set not recognized file containing the quality control set woce quality control procedure multidisciplinary underway network – co measurements fail to pass the check although the string is written in the exact same manner note that the space in should be removed | 1 |
285,955 | 8,781,471,400 | IssuesEvent | 2018-12-19 20:35:42 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | Trading Page: Module Tabs | Feature Priority: High | When #718 is done:
Design Prototype: https://www.figma.com/file/bZh7tOiLm6Ls9Hl4RL6qMC5X/Augur-Redesign?node-id=342%3A34455
This is referring to the tabs within individual modules, like Price History, Candlestick, and Market Depth shown here:

(Details in the screenshot above may differ from the design. Use the design as final guidance.)
Add this to the trading page in a page-wide, reusable way.
- [ ] Using the chart module referenced above, add tab styles, including hover and selected state.
- [x] Add the functionality to switch tabs and corresponding content. Switching tabs should of course be localized enough that switching tabs in one module doesn't switch it in another. | 1.0 | Trading Page: Module Tabs - When #718 is done:
Design Prototype: https://www.figma.com/file/bZh7tOiLm6Ls9Hl4RL6qMC5X/Augur-Redesign?node-id=342%3A34455
This is referring to the tabs within individual modules, like Price History, Candlestick, and Market Depth shown here:

(Details in the screenshot above may differ from the design. Use the design as final guidance.)
Add this to the trading page in a page-wide, reusable way.
- [ ] Using the chart module referenced above, add tab styles, including hover and selected state.
- [x] Add the functionality to switch tabs and corresponding content. Switching tabs should of course be localized enough that switching tabs in one module doesn't switch it in another. | priority | trading page module tabs when is done design prototype this is referring to the tabs within individual modules like price history candlestick and market depth shown here details in the screenshot above may differ from the design use the design as final guidance add this to the trading page in a page wide reusable way using the chart module referenced above add tab styles including hover and selected state add the functionality to switch tabs and corresponding content switching tabs should of course be localized enough that switching tabs in one module doesn t switch it in another | 1 |
661,068 | 22,039,835,557 | IssuesEvent | 2022-05-29 07:04:38 | opencrvs/opencrvs-core | https://api.github.com/repos/opencrvs/opencrvs-core | closed | All users does not work offline | 👹Bug Priority: high | **Bug description:**
Field agent does not work offline after you refresh it.
**Steps to reproduce:**
1. Log in as a Field Agent
2. Turn off your mobile data to go offline
3. Refresh the page.
**Actual Result:**
You see a white screen and then the app is unusable
**Expected Result:**
Should work even after refresh
**Screenshot:**

**Tested on:**
https://login.farajaland-qa.opencrvs.org/
**Device:** mobile | 1.0 | All users does not work offline - **Bug description:**
Field agent does not work offline after you refresh it.
**Steps to reproduce:**
1. Log in as a Field Agent
2. Turn off your mobile data to go offline
3. Refresh the page.
**Actual Result:**
You see a white screen and then the app is unusable
**Expected Result:**
Should work even after refresh
**Screenshot:**

**Tested on:**
https://login.farajaland-qa.opencrvs.org/
**Device:** mobile | priority | all users does not work offline bug description field agent does not work offline after you refresh it steps to reproduce log in as an field agent turn off your mobile data to go offline refresh the page actual result you see a white screen and then the app is unusable expected result should work even after refresh screenshot tested on device mobile | 1 |
478,414 | 13,779,149,208 | IssuesEvent | 2020-10-08 13:25:02 | abpframework/abp | https://api.github.com/repos/abpframework/abp | closed | How to override ABP API Controller (AccountController) | effort-2 in-progress priority:high problem | I am trying to override the `SendPasswordResetCodeAsync` and `ResetPasswordAsync` methods of the `AccountController` but I am having issues. They both are `virtual` methods.
I followed the [Overriding Services](https://docs.abp.io/en/abp/latest/Customizing-Application-Modules-Overriding-Services#example-overriding-an-application-service) guide and created the controller as follows:
```csharp
namespace MyProject.Controllers
{
[Dependency(ReplaceServices = true)]
[Area("account")]
[RemoteService(IsEnabled = false, IsMetadataEnabled = false, Name = AccountRemoteServiceConsts.RemoteServiceName)]
[Route("api/account")]
public class AccountController : Volo.Abp.Account.AccountController, IAccountAppService
{
public AccountController(IAccountAppService accountAppService)
: base(accountAppService) { }
[HttpPost]
[Route("send-password-reset-code")]
public override Task SendPasswordResetCodeAsync(SendPasswordResetCodeDto input)
{
// my logic...
return AccountAppService.SendPasswordResetCodeAsync(input);
}
[HttpPost]
[Route("reset-password")]
public override Task ResetPasswordAsync(ResetPasswordDto input)
{
// my logic...
return AccountAppService.ResetPasswordAsync(input);
}
}
}
```
But I get the following exception when using the endpoints:
> Microsoft.AspNetCore.Routing.Matching.AmbiguousMatchException: The request matched multiple endpoints. Matches:
> Volo.Abp.Account.AccountController.SendPasswordResetCodeAsync (Volo.Abp.Account.HttpApi)
> MyProject.Controllers.AccountController.SendPasswordResetCodeAsync (MyProject.HttpApi.Host)
So I scour the existing abp issues and come across this from a long time ago: https://github.com/abpframework/abp/issues/2428#issuecomment-567778737 so add a custom routing convention i.e.
```csharp
context.Services.Configure<MvcOptions>(options =>
{
options.Conventions.Add(new MyRoutingConvention());
});
```
But I am not happy with this approach because it removes all the original `/account` routes so then I have to use: `[RemoteService(true)]`, when I only want to override a few methods of the existing controller.
Is there a better approach now in ABP 3.1.0 and could you add a section to the 'Overriding Services' guide to explain how to override API Controllers specifically? | 1.0 | How to override ABP API Controller (AccountController) - I am trying to override the `SendPasswordResetCodeAsync` and `ResetPasswordAsync` methods of the `AccountController` but I am having issues. They both are `virtual` methods.
I followed the [Overriding Services](https://docs.abp.io/en/abp/latest/Customizing-Application-Modules-Overriding-Services#example-overriding-an-application-service) guide and created the controller as follows:
```csharp
namespace MyProject.Controllers
{
[Dependency(ReplaceServices = true)]
[Area("account")]
[RemoteService(IsEnabled = false, IsMetadataEnabled = false, Name = AccountRemoteServiceConsts.RemoteServiceName)]
[Route("api/account")]
public class AccountController : Volo.Abp.Account.AccountController, IAccountAppService
{
public AccountController(IAccountAppService accountAppService)
: base(accountAppService) { }
[HttpPost]
[Route("send-password-reset-code")]
public override Task SendPasswordResetCodeAsync(SendPasswordResetCodeDto input)
{
// my logic...
return AccountAppService.SendPasswordResetCodeAsync(input);
}
[HttpPost]
[Route("reset-password")]
public override Task ResetPasswordAsync(ResetPasswordDto input)
{
// my logic...
return AccountAppService.ResetPasswordAsync(input);
}
}
}
```
But I get the following exception when using the endpoints:
> Microsoft.AspNetCore.Routing.Matching.AmbiguousMatchException: The request matched multiple endpoints. Matches:
> Volo.Abp.Account.AccountController.SendPasswordResetCodeAsync (Volo.Abp.Account.HttpApi)
> MyProject.Controllers.AccountController.SendPasswordResetCodeAsync (MyProject.HttpApi.Host)
So I scour the existing abp issues and come across this from a long time ago: https://github.com/abpframework/abp/issues/2428#issuecomment-567778737 so add a custom routing convention i.e.
```csharp
context.Services.Configure<MvcOptions>(options =>
{
options.Conventions.Add(new MyRoutingConvention());
});
```
But I am not happy with this approach because it removes all the original `/account` routes so then I have to use: `[RemoteService(true)]`, when I only want to override a few methods of the existing controller.
Is there a better approach now in ABP 3.1.0 and could you add a section to the 'Overriding Services' guide to explain how to override API Controllers specifically? | priority | how to override abp api controller accountcontroller i am trying to override the sendpasswordresetcodeasync and resetpasswordasync methods of the accountcontroller but i am having issues they both are virtual methods i followed the guide and created the controller as follows csharp namespace myproject controllers public class accountcontroller volo abp account accountcontroller iaccountappservice public accountcontroller iaccountappservice accountappservice base accountappservice public override task sendpasswordresetcodeasync sendpasswordresetcodedto input my logic return accountappservice sendpasswordresetcodeasync input public override task resetpasswordasync resetpassworddto input my logic return accountappservice resetpasswordasync input but i get the following exception when using the endpoints microsoft aspnetcore routing matching ambiguousmatchexception the request matched multiple endpoints matches volo abp account accountcontroller sendpasswordresetcodeasync volo abp account httpapi myproject controllers accountcontroller sendpasswordresetcodeasync myproject httpapi host so i scour the existing abp issues and come across this from a long time ago so add a custom routing convention i e csharp context services configure options options conventions add new myroutingconvention but i am not happy with this approach because it removes all the original account routes so then i have to use when i only want to override a few methods of the existing controller is there a better approach now in abp and could you add a section to the overriding services guide to explain how to override api controllers specifically | 1 |
816,806 | 30,613,102,289 | IssuesEvent | 2023-07-23 21:22:33 | Memmy-App/memmy | https://api.github.com/repos/Memmy-App/memmy | closed | Instance blocking capability | enhancement high priority | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
I would like to request the ability to add an instance to a blocklist. This is because I like to keep a safe for work feed, turning on the NSFW hide toggle does get rid of a majority of the posts but some still get through because they are not marked as NSFW.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
I would like the ability to block an instance; this could be done by having a section in settings where instances can be typed in.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
An alternative solution would be to keep a list of NSFW instances and make local federation an opt-in for the user. This means any user who wants these instances to appear in their feed can still have them.
**Additional context**
Add any other context or screenshots about the feature request here.
| 1.0 | Instance blocking capability - **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
I would like to request the ability to add an instance to a blocklist. This is because I like to keep a safe for work feed, turning on the NSFW hide toggle does get rid of a majority of the posts but some still get through because they are not marked as NSFW.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
I would like the ability to block an instance; this could be done by having a section in settings where instances can be typed in.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
An alternative solution would be to keep a list of NSFW instances and make local federation an opt-in for the user. This means any user who wants these instances to appear in their feed can still have them.
**Additional context**
Add any other context or screenshots about the feature request here.
| priority | instance blocking capability is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when i would like to request the ability to add an instance to a blocklist this is because i like to keep a safe for work feed turning on the nsfw hide toggle does get rid of a majority of the posts but some still get through because they are not marked as nsfw describe the solution you d like a clear and concise description of what you want to happen i would like the ability to block a instance this could be done by having a section in settings where instances can be typed in describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered a alternative solution would be to keep a list of nsfw instances and local federation an opt in for the user this means any user that wants these instances to appear in their feed can still have it additional context add any other context or screenshots about the feature request here | 1 |
656,489 | 21,765,840,132 | IssuesEvent | 2022-05-13 01:41:37 | djpr-data/djprshiny | https://api.github.com/repos/djpr-data/djprshiny | closed | djpr_plot_server recaches with each plot resize | bug high priority | It looks like djpr_plot_server is caching every single possible width configuration for a given plot, meaning the cache will not work for devices with a screen size different to the device that the cache was developed on. | 1.0 | djpr_plot_server recaches with each plot resize - It looks like djpr_plot_server is caching every single possible width configuration for a given plot, meaning the cache will not work for devices with a screen size different to the device that the cache was developed on. | priority | djpr plot server recaches with each plot resize it looks like djpr plot server is caching every single possible width configuration for a given plot meaning the cache will not work for devices with a screen size different to the device that the cache was developed on | 1 |
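The recaching problem reported above (a render cache keyed on the exact pixel width, so every resize and every device misses) can be sketched generically by keying the cache on a coarse width bucket instead. This is a hypothetical Python sketch, not djprshiny's actual R code; the `cache_key` function, its names, and the 100px bucket size are illustrative assumptions only:

```python
# Hypothetical sketch of width-bucketed cache keys, so minor resizes and
# slightly different screen sizes reuse the same cached plot render
# instead of creating a fresh cache entry per exact pixel width.
def cache_key(plot_id, width_px, bucket=100):
    # Round the requested width to the nearest bucket boundary; all widths
    # in the same bucket share one cached render.
    bucketed = round(width_px / bucket) * bucket
    return f"{plot_id}:{bucketed}"

assert cache_key("p1", 980) == cache_key("p1", 1020)   # same bucket -> cache hit
assert cache_key("p1", 980) != cache_key("p1", 1480)   # different bucket -> re-render
```

The trade-off in a sketch like this is a slight mismatch between the cached render's width and the actual container, in exchange for a bounded number of cache entries per plot.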
778,560 | 27,320,474,240 | IssuesEvent | 2023-02-24 19:20:22 | MattTheLegoman/RealmsInExile | https://api.github.com/repos/MattTheLegoman/RealmsInExile | closed | Khand - Remove succession shattering tradition decision | priority: high scripting | Decision to remove / replace cultural tradition of realm shattering given a period of stably holding realm together. | 1.0 | Khand - Remove succession shattering tradition decision - Decision to remove / replace cultural tradition of realm shattering given a period of stably holding realm together. | priority | khand remove succession shattering tradition decision decision to remove replace cultural tradition of realm shattering given a period of stably holding realm together | 1 |
445,757 | 12,835,725,708 | IssuesEvent | 2020-07-07 13:20:18 | wu-lang/wu | https://api.github.com/repos/wu-lang/wu | closed | [bug] extern module generates an invalid variable stub | bug good first issue high priority | ```
love: extern module {}
```
generates
```lua
local love =
return { love=love }
```
Expected result: no code is generated

| 1.0 | [bug] extern module generates an invalid variable stub - ```
love: extern module {}
```
generates
```lua
local love =
return { love=love }
```
Expected result: no code is generated

| priority | extern module generates an invalid variable stub love extern module generates lua local love return love love expected result no code is generated | 1 |
206,576 | 7,113,713,051 | IssuesEvent | 2018-01-17 21:28:52 | SacredDuckwhale/Rarity | https://api.github.com/repos/SacredDuckwhale/Rarity | opened | Localized phrases are outdated (and can't be updated) | module:localization priority:high status:accepted status:waiting type:bug | When trying to import new phrases, a Server Error is given instead. I already messaged the Curse Support, so far without much success.
This one is on Curse/WowAce, most likely. They should have created a ticket with the developers, but no one knows if and when they will fix the errors.
It may also be a permission issue, in which case they should still report back to let me know. | 1.0 | Localized phrases are outdated (and can't be updated) - When trying to import new phrases, a Server Error is given instead. I already messaged the Curse Support, so far without much success.
This one is on Curse/WowAce, most likely. They should have created a ticket with the developers, but no one knows if and when they will fix the errors.
It may also be a permission issue, in which case they should still report back to let me know. | priority | localized phrases are outdated and can t be updated when trying to import new phrases a server error is given instead i already messaged the curse support so far without much success this one is on curse wowace most likely they should have created a ticket with the developers but no one knows if and when they will fix the errors it may also be a permission issue in which case they should still report back to let me know | 1 |
271,185 | 8,476,942,100 | IssuesEvent | 2018-10-25 00:16:31 | FSU-ACM/Contest-Server | https://api.github.com/repos/FSU-ACM/Contest-Server | opened | Move solo registration link | enhancement high priority | Don't show solo registration as a first-choice option, make it a link at the top of team registration. | 1.0 | Move solo registration link - Don't show solo registration as a first-choice option, make it a link at the top of team registration. | priority | move solo registration link don t show solo registration as a first choice option make it a link at the top of team registration | 1 |
478,266 | 13,776,235,873 | IssuesEvent | 2020-10-08 09:10:39 | alphagov/govuk-prototype-kit | https://api.github.com/repos/alphagov/govuk-prototype-kit | closed | Snyk / npm install / audit alerts | ⚠️ high priority 🕔 hours | Hi
I've not looked into exactly how it is used in the kit, but snyk is flagging the "marked" dependency as follows, any chance of a major version upgrade?
```
Upgrade marked@0.8.2 to marked@1.1.1 to fix
✗ Regular Expression Denial of Service (ReDoS ) [Medium Severity][https://snyk.io/vuln/SNYK-JS-MARKED-584281] in marked@0.8.2
```
Also are there any plans to swap out node-sass in favour of the javascript only sass? https://www.npmjs.com/package/sass
While this is just the prototype kit, and we'd catch any issues in any real apps in CI, I'm trying to train our team to always be mindful of npm / snyk warnings rather than just learning the behaviour of ignoring them! (i.e. read that line about request being deprecated and don't spend time implementing it only to have to rip it out later!)
many thanks | 1.0 | Snyk / npm install / audit alerts - Hi
I've not looked into exactly how it is used in the kit, but snyk is flagging the "marked" dependency as follows, any chance of a major version upgrade?
```
Upgrade marked@0.8.2 to marked@1.1.1 to fix
✗ Regular Expression Denial of Service (ReDoS ) [Medium Severity][https://snyk.io/vuln/SNYK-JS-MARKED-584281] in marked@0.8.2
```
Also are there any plans to swap out node-sass in favour of the javascript only sass? https://www.npmjs.com/package/sass
While this is just the prototype kit, and we'd catch any issues in any real apps in CI, I'm trying to train our team to always be mindful of npm / snyk warnings rather than just learning the behaviour of ignoring them! (i.e. read that line about request being deprecated and don't spend time implementing it only to have to rip it out later!)
many thanks | priority | snyk npm install audit alerts hi i ve not looked into exactly how it is used in the kit but snyk is flagging the marked dependency as follows any chance of a major version upgrade upgrade marked to marked to fix ✗ regular expression denial of service redos in marked also are there any plans to swap out node sass in favour of the javascript only sass while this is just the prototype kit and we d catch any issues in any real apps in ci i m trying to train our team to always be mindful of npm snyk warnings rather than just learning the behaviour of ignoring them i e read that line about request being deprecated and don t spend time implementing it only to have to rip it out later many thanks | 1 |
245,743 | 7,890,453,134 | IssuesEvent | 2018-06-28 08:52:34 | borela/naomi | https://api.github.com/repos/borela/naomi | closed | Syntax highlighting: keyword "set" | bug priority: high | Hello, it's again me :)
```js
const traitsMap =
{
types:
{
table,
image,
video,
plot,
tableCell: table,
tableRow: table,
model,
set,
setItem,
},
exp:
{
input,
math,
choice,
longMath,
dragnest: dnd,
dragitem: dnd,
},
};
```

The same situation with "get" keyword. | 1.0 | Syntax highlighting: keyword "set" - Hello, it's again me :)
```js
const traitsMap =
{
types:
{
table,
image,
video,
plot,
tableCell: table,
tableRow: table,
model,
set,
setItem,
},
exp:
{
input,
math,
choice,
longMath,
dragnest: dnd,
dragitem: dnd,
},
};
```

The same situation with "get" keyword. | priority | syntax highlighting keyword set hello it s again me js const traitsmap types table image video plot tablecell table tablerow table model set setitem exp input math choice longmath dragnest dnd dragitem dnd the same situation with get keyword | 1 |
493,706 | 14,236,985,518 | IssuesEvent | 2020-11-18 16:39:08 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | CRAFTERCMS-2068: add edit template icon next to pencils for edit component (COMPONENT ATTRIBUTE PENCILS) | enhancement priority: high | Original JIRA Fix Versions:
2.5.2, Original JIRA Components:
Studio,
----------
Original JIRA Description: The goal is to make it easier for users to get to component templates.
In in-context editing mode, if a user has write permissions to a template for the given component, an edit template icon should appear next to the pencil.
----------
Original JIRA Comments:
Original JIRA:
http://issues.craftercms.org/browse/CRAFTERCMS-2068
---------- | 1.0 | CRAFTERCMS-2068: add edit template icon next to pencils for edit component (COMPONENT ATTRIBUTE PENCILS) - Original JIRA Fix Versions:
2.5.2, Original JIRA Components:
Studio,
----------
Original JIRA Description: The goal is to make it easier for users to get to component templates.
In incontext editing mode, if a user has write permissions to a template for the given component, a edit template icon should appear next to the pencil.:
----------
Original JIRA Comments:
Original JIRA:
http://issues.craftercms.org/browse/CRAFTERCMS-2068
---------- | priority | craftercms add edit template icon next to pencils for edit component component attribute pencils original jira fix versions original jira components studio original jira description the goal is to make it easier for users to get to component templates in incontext editing mode if a user has write permissions to a template for the given component a edit template icon should appear next to the pencil original jira comments original jira | 1 |
566,518 | 16,823,308,691 | IssuesEvent | 2021-06-17 15:22:21 | 389ds/389-ds-base | https://api.github.com/repos/389ds/389-ds-base | closed | CLI should support Temporary Password Rules attributes | CLI In JIRA priority_high | **Issue Description**
RFE [Temporary Password Rules](https://www.port389.org/docs/389ds/design/otp-password-policy.html) supports the new password policy attributes passwordTPRMaxUse, passwordTPRValidFrom, and passwordTPRExpireAt.
The CLI should allow setting those values
**Package Version and Platform:**
1.4.3 and after
**Steps to Reproduce**
At the moment only ldapmodify allows to set those attributes
**Expected results**
dsconf should get/set them
| 1.0 | CLI should support Temporary Password Rules attributes - **Issue Description**
RFE [Temporary Password Rules](https://www.port389.org/docs/389ds/design/otp-password-policy.html) supports the new password policy attributes passwordTPRMaxUse, passwordTPRValidFrom, and passwordTPRExpireAt.
The CLI should allow setting those values
**Package Version and Platform:**
1.4.3 and after
**Steps to Reproduce**
At the moment only ldapmodify allows to set those attributes
**Expected results**
dsconf should get/set them
| priority | cli should support temporary password rules attributes issue description rfe supports new password policies attributes passwordtprmaxuse passwordtprvalidfrom passwordtprexpireat the cli should allow to set those values package version and platform and after steps to reproduce at the moment only ldapmodify allows to set those attributes expected results dsconf should get set them | 1 |
588,387 | 17,659,312,487 | IssuesEvent | 2021-08-21 06:44:02 | dmwm/WMCore | https://api.github.com/repos/dmwm/WMCore | closed | Python3 ReqMgr2 fails to set pickledarguments during assignment | BUG High Priority Python3 ReqMgr2 py3 potential bug | **Impact of the bug**
ReqMgr2
**Describe the bug**
Note that this seems to be a rare case, since I managed to inject all the DMWM/Integration tests and a few old requests already. However, when I try to assign this workflow:
amaltaro_SC_ReDigi_Harvest_Agent136_Val_200814_105846_7752
it fails with a `400 Bad Request`. The logs extracted from ReqMgr2 testbed POD can be seen in [1]
**How to reproduce it**
Not sure, there must be something very peculiar about this one workflow(!)
**Expected behavior**
Python 3's pickle requires the data to be a bytes-like object instead of a Python 3 str, so we should encode the data before passing it to that pickle.loads call.
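As a minimal standalone sketch of that expected behavior (plain Python, not WMCore code; the sample payload and the latin-1 round-trip simulating how the pickled bytes ended up stored as a str are illustrative assumptions), this shows why `pickle.loads` rejects a str and how encoding fixes the call:

```python
import pickle

# A small payload standing in for the pickled step arguments.
args = {"datasetName": "/Primary/Processed/TIER"}

# pickle.dumps returns bytes; decoding with latin-1 simulates the payload
# having been stored in a text (str) field. latin-1 maps bytes 0-255
# one-to-one to code points, so this round-trip is lossless.
text_form = pickle.dumps(args).decode("latin-1")

try:
    pickle.loads(text_form)  # passing a str raises TypeError on Python 3
except TypeError as exc:
    print(exc)  # a bytes-like object is required, not 'str'

# Encoding the str back to bytes before unpickling resolves the error.
restored = pickle.loads(text_form.encode("latin-1"))
assert restored == args
```

The TypeError message printed here is the same one that appears in the traceback below.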
**Additional context and error message**
[1]
```
[30/Jun/2021:09:12:11] Updating request "amaltaro_SC_ReDigi_Harvest_Agent136_Val_200814_105846_7752" with these user-provided args: {'SoftTimeout': 129600, 'UnmergedLFNBase': '/store/unmerged', 'TrustPUSitelists': True, 'GracePeriod': 30
0, 'SiteWhitelist': ['T1_US_FNAL', 'T2_CH_CERN'], 'MergedLFNBase': '/store/backfill/1', 'Team': 'testbed-vocms0193', 'ProcessingVersion': 21, 'TrustSitelists': False, 'ProcessingString': 'HG2107_Val_OLD_Alanv3', 'Dashboard': 'test', 'Site
Blacklist': [], 'Override': {'eos-lfn-prefix': 'root://eoscms.cern.ch//eos/cms/store/logs/prod/recent/TESTBED'}, 'AcquisitionEra': 'CMSSW_10_6_1', 'RequestStatus': 'assigned'}
Getting from Cache due to: CouchNotFoundError - reason: Object Not Found, data: {} result: None
[30/Jun/2021:09:12:11] Assign request amaltaro_SC_ReDigi_Harvest_Agent136_Val_200814_105846_7752, input args: {'SoftTimeout': 129600, 'UnmergedLFNBase': '/store/unmerged', 'TrustPUSitelists': True, 'GracePeriod': 300, 'SiteWhitelist': ['
T1_US_FNAL', 'T2_CH_CERN'], 'MergedLFNBase': '/store/backfill/1', 'Team': 'testbed-vocms0193', 'ProcessingVersion': 21, 'TrustSitelists': False, 'ProcessingString': 'HG2107_Val_OLD_Alanv3', 'Dashboard': 'test', 'SiteBlacklist': [], 'Overr
ide': {'eos-lfn-prefix': 'root://eoscms.cern.ch//eos/cms/store/logs/prod/recent/TESTBED'}, 'AcquisitionEra': 'CMSSW_10_6_1', 'RequestStatus': 'assigned', 'HardTimeout': 129900} ...
[30/Jun/2021:09:12:11] Error for request args {'SoftTimeout': 129600, 'UnmergedLFNBase': '/store/unmerged', 'TrustPUSitelists': True, 'GracePeriod': 300, 'SiteWhitelist': ['T1_US_FNAL', 'T2_CH_CERN'], 'MergedLFNBase': '/store/backfill/1'
, 'Team': 'testbed-vocms0193', 'ProcessingVersion': 21, 'TrustSitelists': False, 'ProcessingString': 'HG2107_Val_OLD_Alanv3', 'Dashboard': 'test', 'SiteBlacklist': [], 'Override': {'eos-lfn-prefix': 'root://eoscms.cern.ch//eos/cms/store/l
ogs/prod/recent/TESTBED'}, 'AcquisitionEra': 'CMSSW_10_6_1', 'RequestStatus': 'assigned', 'HardTimeout': 129900, 'EventStreams': 0, 'CustodialSites': [], 'NonCustodialSites': [], 'AutoApproveSubscriptionSites': [], 'CustodialSubType': 'Re
plica', 'NonCustodialSubType': 'Replica', 'CustodialGroup': 'DataOps', 'NonCustodialGroup': 'DataOps', 'SubscriptionPriority': 'Low', 'DeleteFromSource': False, 'MinMergeSize': 2147483648, 'MaxMergeSize': 4294967296, 'MaxWaitTime': 86400,
'MaxMergeEvents': 100000000, 'AllowOpportunistic': False, 'BlockCloseMaxWaitTime': 66400, 'BlockCloseMaxFiles': 500, 'BlockCloseMaxEvents': 25000000, 'BlockCloseMaxSize': 5000000000000, 'ChainParentageMap': {}}: Traceback (most recent ca
ll last):
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/ReqMgr/Service/Request.py", line 436, in _handleAssignmentStateTransition
workload.updateArguments(request_args)
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/WMWorkload.py", line 1969, in updateArguments
self.setLFNBase(kwargs["MergedLFNBase"], kwargs["UnmergedLFNBase"])
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/WMWorkload.py", line 1154, in setLFNBase
self.updateLFNsAndDatasets(runNumber=runNumber)
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/WMWorkload.py", line 806, in updateLFNsAndDatasets
task.updateLFNsAndDatasets(runNumber=runNumber)
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/WMTask.py", line 1783, in updateLFNsAndDatasets
task.updateLFNsAndDatasets(runNumber=runNumber)
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/WMTask.py", line 1774, in updateLFNsAndDatasets
self.updateDatasetName(datasetName)
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/WMTask.py", line 1801, in updateDatasetName
cmsswHelper.setDatasetName(datasetName)
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/Steps/Templates/CMSSW.py", line 241, in setDatasetName
args = pickle.loads(self.data.application.configuration.pickledarguments)
TypeError: a bytes-like object is required, not 'str'
``` | 1.0 | Python3 ReqMgr2 fails to set pickledarguments during assignment - **Impact of the bug**
ReqMgr2
**Describe the bug**
Note that this seems to be a rare case, since I managed to inject all the DMWM/Integration tests and a few old requests already. However, when I try to assign this workflow:
amaltaro_SC_ReDigi_Harvest_Agent136_Val_200814_105846_7752
it fails with a `400 Bad Request`. The logs extracted from ReqMgr2 testbed POD can be seen in [1]
**How to reproduce it**
Not sure, there must be something very peculiar about this one workflow(!)
**Expected behavior**
Python 3's pickle requires the data to be a bytes-like object instead of a Python 3 str, so we should encode the data before passing it to that pickle.loads call.
**Additional context and error message**
[1]
```
[30/Jun/2021:09:12:11] Updating request "amaltaro_SC_ReDigi_Harvest_Agent136_Val_200814_105846_7752" with these user-provided args: {'SoftTimeout': 129600, 'UnmergedLFNBase': '/store/unmerged', 'TrustPUSitelists': True, 'GracePeriod': 30
0, 'SiteWhitelist': ['T1_US_FNAL', 'T2_CH_CERN'], 'MergedLFNBase': '/store/backfill/1', 'Team': 'testbed-vocms0193', 'ProcessingVersion': 21, 'TrustSitelists': False, 'ProcessingString': 'HG2107_Val_OLD_Alanv3', 'Dashboard': 'test', 'Site
Blacklist': [], 'Override': {'eos-lfn-prefix': 'root://eoscms.cern.ch//eos/cms/store/logs/prod/recent/TESTBED'}, 'AcquisitionEra': 'CMSSW_10_6_1', 'RequestStatus': 'assigned'}
Getting from Cache due to: CouchNotFoundError - reason: Object Not Found, data: {} result: None
[30/Jun/2021:09:12:11] Assign request amaltaro_SC_ReDigi_Harvest_Agent136_Val_200814_105846_7752, input args: {'SoftTimeout': 129600, 'UnmergedLFNBase': '/store/unmerged', 'TrustPUSitelists': True, 'GracePeriod': 300, 'SiteWhitelist': ['
T1_US_FNAL', 'T2_CH_CERN'], 'MergedLFNBase': '/store/backfill/1', 'Team': 'testbed-vocms0193', 'ProcessingVersion': 21, 'TrustSitelists': False, 'ProcessingString': 'HG2107_Val_OLD_Alanv3', 'Dashboard': 'test', 'SiteBlacklist': [], 'Overr
ide': {'eos-lfn-prefix': 'root://eoscms.cern.ch//eos/cms/store/logs/prod/recent/TESTBED'}, 'AcquisitionEra': 'CMSSW_10_6_1', 'RequestStatus': 'assigned', 'HardTimeout': 129900} ...
[30/Jun/2021:09:12:11] Error for request args {'SoftTimeout': 129600, 'UnmergedLFNBase': '/store/unmerged', 'TrustPUSitelists': True, 'GracePeriod': 300, 'SiteWhitelist': ['T1_US_FNAL', 'T2_CH_CERN'], 'MergedLFNBase': '/store/backfill/1'
, 'Team': 'testbed-vocms0193', 'ProcessingVersion': 21, 'TrustSitelists': False, 'ProcessingString': 'HG2107_Val_OLD_Alanv3', 'Dashboard': 'test', 'SiteBlacklist': [], 'Override': {'eos-lfn-prefix': 'root://eoscms.cern.ch//eos/cms/store/l
ogs/prod/recent/TESTBED'}, 'AcquisitionEra': 'CMSSW_10_6_1', 'RequestStatus': 'assigned', 'HardTimeout': 129900, 'EventStreams': 0, 'CustodialSites': [], 'NonCustodialSites': [], 'AutoApproveSubscriptionSites': [], 'CustodialSubType': 'Re
plica', 'NonCustodialSubType': 'Replica', 'CustodialGroup': 'DataOps', 'NonCustodialGroup': 'DataOps', 'SubscriptionPriority': 'Low', 'DeleteFromSource': False, 'MinMergeSize': 2147483648, 'MaxMergeSize': 4294967296, 'MaxWaitTime': 86400,
'MaxMergeEvents': 100000000, 'AllowOpportunistic': False, 'BlockCloseMaxWaitTime': 66400, 'BlockCloseMaxFiles': 500, 'BlockCloseMaxEvents': 25000000, 'BlockCloseMaxSize': 5000000000000, 'ChainParentageMap': {}}: Traceback (most recent ca
ll last):
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/ReqMgr/Service/Request.py", line 436, in _handleAssignmentStateTransition
workload.updateArguments(request_args)
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/WMWorkload.py", line 1969, in updateArguments
self.setLFNBase(kwargs["MergedLFNBase"], kwargs["UnmergedLFNBase"])
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/WMWorkload.py", line 1154, in setLFNBase
self.updateLFNsAndDatasets(runNumber=runNumber)
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/WMWorkload.py", line 806, in updateLFNsAndDatasets
task.updateLFNsAndDatasets(runNumber=runNumber)
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/WMTask.py", line 1783, in updateLFNsAndDatasets
task.updateLFNsAndDatasets(runNumber=runNumber)
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/WMTask.py", line 1774, in updateLFNsAndDatasets
self.updateDatasetName(datasetName)
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/WMTask.py", line 1801, in updateDatasetName
cmsswHelper.setDatasetName(datasetName)
File "/data/srv/HG2107b/sw/slc7_amd64_gcc630/cms/reqmgr2/1.5.0.pre5/lib/python3.8/site-packages/WMCore/WMSpec/Steps/Templates/CMSSW.py", line 241, in setDatasetName
args = pickle.loads(self.data.application.configuration.pickledarguments)
TypeError: a bytes-like object is required, not 'str'
``` | priority | 1 |
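The `TypeError` in the traceback above can be reproduced in isolation. This is a minimal sketch, not WMCore's actual code: the payload dict is a hypothetical stand-in for the step's pickled arguments, and it only illustrates that `pickle.loads` rejects `str` input and that encoding back to bytes, as the report's expected behavior suggests, resolves it.

```python
import pickle

# Hypothetical stand-in for the step's pickled arguments; the real payload
# is the CMSSW step configuration, not this dict.
payload = pickle.dumps({"datasetName": "/Prim/Proc-v1/TIER"}, protocol=0)

# Protocol-0 pickles contain only ASCII, so the blob can survive storage in
# a str field (e.g. a JSON/CouchDB document) -- but pickle.loads() in
# Python 3 requires a bytes-like object.
as_str = payload.decode("ascii")

try:
    pickle.loads(as_str)  # reproduces the reported failure
except TypeError as exc:
    print(exc)  # a bytes-like object is required, not 'str'

# Sketch of the suggested fix: encode back to bytes before unpickling.
args = pickle.loads(as_str.encode("ascii"))
print(args["datasetName"])  # /Prim/Proc-v1/TIER
```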
237,232 | 7,757,661,260 | IssuesEvent | 2018-05-31 17:01:03 | MRN-Code/coinstac | https://api.github.com/repos/MRN-Code/coinstac | closed | UI: selection box for freesurfer regions of interest is bonkers | bug high priority ui | # Problem
The selection box is impossibly hard to use to select more than one item, or use at all really.
# Tasks
* fix dat trash | 1.0 | priority | 1 |
220,539 | 7,360,835,294 | IssuesEvent | 2018-03-10 22:47:49 | smit-happens/YCP_EVOS | https://api.github.com/repos/smit-happens/YCP_EVOS | closed | configure remote unit testing with travis CI | enhancement priority-high size-small | Very good progress happening, need to connect rpi with travis CI with the right configurations in `.travis.yml`
[useful download link here](http://docs.platformio.org/en/latest/installation.html#installation-methods) | 1.0 | priority | 1 |
56,578 | 3,080,512,105 | IssuesEvent | 2015-08-21 22:47:11 | datarank/tempest | https://api.github.com/repos/datarank/tempest | closed | New indexes are always allocated to the smallest node | bug Priority: 4 (High) | New indexes are typically indexes that are going to be rapidly inserted into. Having all newly created indexes on the smallest node means that, for multiple bulk imports, all new data is being loaded onto a single node before a rebalance can occur. This can theoretically lead temporarily to one node growing and becoming much larger than others, and will also cause slow insertion speed since all new data is being written on one node instead of across the entire cluster.
Modify the allocation scheme such that new shards are distributed uniformly, rather than always to the smallest node. | 1.0 | priority | 1 |
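The uniform scheme the issue asks for can be sketched in a few lines. This is an illustration of the idea only, not tempest's actual allocator: deal new shards out round-robin across the cluster instead of always picking the smallest node, so bulk inserts into a fresh index are spread from the start.

```python
import itertools

def allocate_shards(shards, nodes):
    """Assign each new shard to a node round-robin, instead of sending
    every shard of a new index to the single smallest node."""
    ring = itertools.cycle(sorted(nodes))
    return {shard: next(ring) for shard in shards}

placement = allocate_shards(["s0", "s1", "s2", "s3"], ["nodeA", "nodeB"])
print(placement)  # {'s0': 'nodeA', 's1': 'nodeB', 's2': 'nodeA', 's3': 'nodeB'}
```

A real allocator would also weigh current disk usage, but even this naive rotation removes the hot spot the issue describes.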
447,756 | 12,892,875,324 | IssuesEvent | 2020-07-13 20:28:26 | rstudio/gt | https://api.github.com/repos/rstudio/gt | closed | Custom CSS classes | Difficulty: [3] Advanced Effort: [3] High Priority: [3] High Type: ★ Enhancement | I may be missing something, but I can't find any way within the package to add a custom CSS class to a table (or cell, etc.). There may be a good reason this isn't supported, but I can't see it myself, and it's a little frustrating. | 1.0 | priority | 1 |