Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 844 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 12 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 248k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
513,033 | 14,914,681,405 | IssuesEvent | 2021-01-22 15:44:03 | bounswe/bounswe2020group9 | https://api.github.com/repos/bounswe/bounswe2020group9 | closed | Notification API - POST does not make all notifications "visited" | Frontend Priority - Low bug | I was using [This POST Notification API Call](https://github.com/bounswe/bounswe2020group9/wiki/API-Documentation#apimessagenotifications-post) and it seems like something is wrong. The API does not mark all notifications as `is_visited:true` as it intends. | 1.0 | Notification API - POST does not make all notifications "visited" - I was using [This POST Notification API Call](https://github.com/bounswe/bounswe2020group9/wiki/API-Documentation#apimessagenotifications-post) and it seems like something is wrong. The API does not mark all notifications as `is_visited:true` as it intends. | priority | notification api post does not make all notifications visited i was using and it seems like something is wrong the api does not mark all notifications as is visited true as it intends | 1 |
296,368 | 9,108,304,470 | IssuesEvent | 2019-02-21 08:07:50 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Localization: missed space? | Fixed Localization Low Priority | In strings
https://crowdin.com/translate/eco-by-strange-loop-games/40/en-ru#67942
> Profession Experience/Day:{0}\n
https://crowdin.com/translate/eco-by-strange-loop-games/40/en-ru#67944
> Specialty Experience Multiplier:{0}
https://crowdin.com/translate/eco-by-strange-loop-games/40/en-ru#67940
> Base Multiplier:{0} | 1.0 | Localization: missed space? - In strings
https://crowdin.com/translate/eco-by-strange-loop-games/40/en-ru#67942
> Profession Experience/Day:{0}\n
https://crowdin.com/translate/eco-by-strange-loop-games/40/en-ru#67944
> Specialty Experience Multiplier:{0}
https://crowdin.com/translate/eco-by-strange-loop-games/40/en-ru#67940
> Base Multiplier:{0} | priority | localization missed space in strings profession experience day n specialty experience multiplier base multiplier | 1 |
323,203 | 9,851,613,818 | IssuesEvent | 2019-06-19 10:52:42 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | to use {k_wakeup} to cancel the delayed startup of one thread | area: Kernel enhancement priority: low | **_Reported by Quanwen Du:_**
Some thoughts:
1). From the interface name view: the interfaces k_thread_cancel and k_thread_abort are a little confusion to me, I always tried best to recall which one is for what.
2). From logic view: in the current logic(v1.8) of k_thread_cancel, it does not trigger schedule to the delay canceled task which may be higher priority than the current one. what's the consideration not to trigger the scheduling? Yes k_thread_cancel can be used before the scheduler is started, but if this is the only use case, then the interface shall add notes for this. Also if the logic is able to decide whether to cancel the delay before the scheduler is started, we may suggest the designer NOT to define the delay start.
3). From interface duplication view: the interface k_wakeup is able to serve the same/more funcationality than k_thread_cancel, but due to 2) above, currently k_wakeup can not replace k_thread_cancel. But it seems not so good to keep these 2 interfaces there.
Based on these, I am suggesting to remove the interface k_thread_cancel as k_wakeup is good enough already, right?
(Imported from Jira ZEP-2415) | 1.0 | to use {k_wakeup} to cancel the delayed startup of one thread - **_Reported by Quanwen Du:_**
Some thoughts:
1). From the interface name view: the interfaces k_thread_cancel and k_thread_abort are a little confusion to me, I always tried best to recall which one is for what.
2). From logic view: in the current logic(v1.8) of k_thread_cancel, it does not trigger schedule to the delay canceled task which may be higher priority than the current one. what's the consideration not to trigger the scheduling? Yes k_thread_cancel can be used before the scheduler is started, but if this is the only use case, then the interface shall add notes for this. Also if the logic is able to decide whether to cancel the delay before the scheduler is started, we may suggest the designer NOT to define the delay start.
3). From interface duplication view: the interface k_wakeup is able to serve the same/more funcationality than k_thread_cancel, but due to 2) above, currently k_wakeup can not replace k_thread_cancel. But it seems not so good to keep these 2 interfaces there.
Based on these, I am suggesting to remove the interface k_thread_cancel as k_wakeup is good enough already, right?
(Imported from Jira ZEP-2415) | priority | to use k wakeup to cancel the delayed startup of one thread reported by quanwen du some thoughts from the interface name view the interfaces k thread cancel and k thread abort are a little confusion to me i always tried best to recall which one is for what from logic view in the current logic of k thread cancel it does not trigger schedule to the delay canceled task which may be higher priority than the current one what s the consideration not to trigger the scheduling yes k thread cancel can be used before the scheduler is started but if this is the only use case then the interface shall add notes for this also if the logic is able to decide whether to cancel the delay before the scheduler is started we may suggest the designer not to define the delay start from interface duplication view the interface k wakeup is able to serve the same more funcationality than k thread cancel but due to above currently k wakeup can not replace k thread cancel but it seems not so good to keep these interfaces there based on these i am suggesting to remove the interface k thread cancel as k wakeup is good enough already right imported from jira zep | 1 |
100,962 | 4,104,631,681 | IssuesEvent | 2016-06-05 14:16:08 | minio/minio | https://api.github.com/repos/minio/minio | closed | server: Feature request: custom content types | enhancement future low-priority | Ideally all content types would be added to https://github.com/jshttp/mime-db, but I still think it is would be a good feature. | 1.0 | server: Feature request: custom content types - Ideally all content types would be added to https://github.com/jshttp/mime-db, but I still think it is would be a good feature. | priority | server feature request custom content types ideally all content types would be added to but i still think it is would be a good feature | 1 |
357,297 | 10,604,794,244 | IssuesEvent | 2019-10-10 18:58:44 | salesagility/SuiteCRM | https://api.github.com/repos/salesagility/SuiteCRM | closed | Wrong path to milestone image | Bug Fix Proposed HacktoberFest Low Priority | Subj.

#### Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* SuiteCRM Version used: 7.10.4
| 1.0 | Wrong path to milestone image - Subj.

#### Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* SuiteCRM Version used: 7.10.4
| priority | wrong path to milestone image subj your environment suitecrm version used | 1 |
91,217 | 3,840,849,405 | IssuesEvent | 2016-04-04 00:20:14 | dompdf/dompdf | https://api.github.com/repos/dompdf/dompdf | closed | error Cellmap | bug Priority-Low | hi,i have the next problem;
Fatal error: Call to undefined method DOMText::getAttribute() in /home/tonotour/public_html/dompdf/src/Cellmap.php on line 555
the line 555 is $colspan = $node->getAttribute("colspan");
what's happen? | 1.0 | error Cellmap - hi,i have the next problem;
Fatal error: Call to undefined method DOMText::getAttribute() in /home/tonotour/public_html/dompdf/src/Cellmap.php on line 555
the line 555 is $colspan = $node->getAttribute("colspan");
what's happen? | priority | error cellmap hi i have the next problem fatal error call to undefined method domtext getattribute in home tonotour public html dompdf src cellmap php on line the line is colspan node getattribute colspan what s happen | 1 |
65,874 | 3,244,715,606 | IssuesEvent | 2015-10-16 05:17:25 | stuicey/ApolloStation | https://api.github.com/repos/stuicey/ApolloStation | closed | Dark Areas | mapping priority: low | Areas that are just too dark
- [x] EVA
- [x] Bar
- [x] Basement
- [x] Starboard Primary
- [x] Medbay Staff Room
- [x] Operating Theatre
- [x] HOP Office | 1.0 | Dark Areas - Areas that are just too dark
- [x] EVA
- [x] Bar
- [x] Basement
- [x] Starboard Primary
- [x] Medbay Staff Room
- [x] Operating Theatre
- [x] HOP Office | priority | dark areas areas that are just too dark eva bar basement starboard primary medbay staff room operating theatre hop office | 1 |
442,030 | 12,736,616,190 | IssuesEvent | 2020-06-25 17:13:55 | gigantum/gigantum-client | https://api.github.com/repos/gigantum/gigantum-client | opened | Bouncing notification area when multiple jobs are updating at once | bug priority:low team:frontend | **Describe the bug**
In some cases (a specific example is provided below), the notification area "bounces" - closing and opening again and again on its own. This is distracting, and can only be stopped by positioning the cursor where the X icon will be, and timing the click well.
**To Reproduce**
Steps to reproduce the behavior:
1. Start importing a project:
- e.g., have a zip project available
- Ensure the base-image for it is deleted (via docker rmi)
- Start the import
2. While the above is still building (e.g. downloading the relevant base), import another project (I did this from a URL)
3. I see the notifications bounce at this time. The bouncing stops once one of the tasks is complete
**Expected behavior**
A new notification should make the notification area pop open once. Subsequent updates to the notification should leave the area open or closed as it was.
**Screenshots**
Difficult to generate on Windows - ask if you want one and I'll try harder.
**Environment (please complete the following information):**
- OS: Windows (classic / Hyper-V)
- Browser: Firefox
- Gigantum Client Version 1.3.2rc (probably released) | 1.0 | Bouncing notification area when multiple jobs are updating at once - **Describe the bug**
In some cases (a specific example is provided below), the notification area "bounces" - closing and opening again and again on its own. This is distracting, and can only be stopped by positioning the cursor where the X icon will be, and timing the click well.
**To Reproduce**
Steps to reproduce the behavior:
1. Start importing a project:
- e.g., have a zip project available
- Ensure the base-image for it is deleted (via docker rmi)
- Start the import
2. While the above is still building (e.g. downloading the relevant base), import another project (I did this from a URL)
3. I see the notifications bounce at this time. The bouncing stops once one of the tasks is complete
**Expected behavior**
A new notification should make the notification area pop open once. Subsequent updates to the notification should leave the area open or closed as it was.
**Screenshots**
Difficult to generate on Windows - ask if you want one and I'll try harder.
**Environment (please complete the following information):**
- OS: Windows (classic / Hyper-V)
- Browser: Firefox
- Gigantum Client Version 1.3.2rc (probably released) | priority | bouncing notification area when multiple jobs are updating at once describe the bug in some cases a specific example is provided below the notification area bounces closing and opening again and again on its own this is distracting and can only be stopped by positioning the cursor where the x icon will be and timing the click well to reproduce steps to reproduce the behavior start importing a project e g have a zip project available ensure the base image for it is deleted via docker rmi start the import while the above is still building e g downloading the relevant base import another project i did this from a url i see the notifications bounce at this time the bouncing stops once one of the tasks is complete expected behavior a new notification should make the notification area pop open once subsequent updates to the notification should leave the area open or closed as it was screenshots difficult to generate on windows ask if you want one and i ll try harder environment please complete the following information os windows classic hyper v browser firefox gigantum client version probably released | 1 |
109,292 | 4,384,641,779 | IssuesEvent | 2016-08-08 04:21:32 | DavidLu1997/gogogo | https://api.github.com/repos/DavidLu1997/gogogo | closed | Matchmaking API documentation | Awaiting Other Issue Improvement Low Priority Next Release | Documentation for the matchmaking API, either `godocs` or hand-written docs are fine
2 mana | 1.0 | Matchmaking API documentation - Documentation for the matchmaking API, either `godocs` or hand-written docs are fine
2 mana | priority | matchmaking api documentation documentation for the matchmaking api either godocs or hand written docs are fine mana | 1 |
430,215 | 12,441,305,977 | IssuesEvent | 2020-05-26 13:28:02 | guilds-plugin/Guilds | https://api.github.com/repos/guilds-plugin/Guilds | closed | Pay with guild money | Priority: Low Type: Feature | It would be a good idea to add an order to pay with guild money. Example: /guild pay 1000
it would take money directly to the guild bank | 1.0 | Pay with guild money - It would be a good idea to add an order to pay with guild money. Example: /guild pay 1000
it would take money directly to the guild bank | priority | pay with guild money it would be a good idea to add an order to pay with guild money example guild pay it would take money directly to the guild bank | 1 |
159,087 | 6,040,459,217 | IssuesEvent | 2017-06-10 14:36:31 | Wuzzy2/MineClone2-Bugs | https://api.github.com/repos/Wuzzy2/MineClone2-Bugs | reopened | boots arent on a higher layer than the leggings | bug graphics low priority | it needs to have a other place on the texture than in mc to be 3d:
<img width="64" alt="3d_armor_boots_chain" src="https://user-images.githubusercontent.com/29333817/27003390-ee507506-4df5-11e7-99b1-e76c270a2794.png">
<img width="64" alt="3d_armor_boots_diamond" src="https://user-images.githubusercontent.com/29333817/27003393-ee55c07e-4df5-11e7-9a03-1764e0d7bddc.png">
<img width="64" alt="3d_armor_boots_gold" src="https://user-images.githubusercontent.com/29333817/27003392-ee547d86-4df5-11e7-9834-ae47257fa81d.png">
<img width="64" alt="3d_armor_boots_leather" src="https://user-images.githubusercontent.com/29333817/27003394-ee583e80-4df5-11e7-8f10-ae56abd2b1bc.png">
<img width="64" alt="3d_armor_boots_steel" src="https://user-images.githubusercontent.com/29333817/27003391-ee53f438-4df5-11e7-8ed3-4ce25cba2448.png">
| 1.0 | boots arent on a higher layer than the leggings - it needs to have a other place on the texture than in mc to be 3d:
<img width="64" alt="3d_armor_boots_chain" src="https://user-images.githubusercontent.com/29333817/27003390-ee507506-4df5-11e7-99b1-e76c270a2794.png">
<img width="64" alt="3d_armor_boots_diamond" src="https://user-images.githubusercontent.com/29333817/27003393-ee55c07e-4df5-11e7-9a03-1764e0d7bddc.png">
<img width="64" alt="3d_armor_boots_gold" src="https://user-images.githubusercontent.com/29333817/27003392-ee547d86-4df5-11e7-9834-ae47257fa81d.png">
<img width="64" alt="3d_armor_boots_leather" src="https://user-images.githubusercontent.com/29333817/27003394-ee583e80-4df5-11e7-8f10-ae56abd2b1bc.png">
<img width="64" alt="3d_armor_boots_steel" src="https://user-images.githubusercontent.com/29333817/27003391-ee53f438-4df5-11e7-8ed3-4ce25cba2448.png">
| priority | boots arent on a higher layer than the leggings it needs to have a other place on the texture than in mc to be img width alt armor boots chain src img width alt armor boots diamond src img width alt armor boots gold src img width alt armor boots leather src img width alt armor boots steel src | 1 |
310,145 | 9,486,347,304 | IssuesEvent | 2019-04-22 13:44:14 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth | closed | [LOCALIZATION] | Fel-tainted description is too long for "features" screen. | :beetle: bug - localization :scroll: :grey_exclamation: priority low | **Mod Version**
Master Branch
**Please explain your issue in as much detail as possible:**
Fel-tainted description is too long for "features" screen.
**Upload screenshots of the problem localization:**
<details>
<summary>Click to expand</summary>

</details> | 1.0 | [LOCALIZATION] | Fel-tainted description is too long for "features" screen. - **Mod Version**
Master Branch
**Please explain your issue in as much detail as possible:**
Fel-tainted description is too long for "features" screen.
**Upload screenshots of the problem localization:**
<details>
<summary>Click to expand</summary>

</details> | priority | fel tainted description is too long for features screen mod version master branch please explain your issue in as much detail as possible fel tainted description is too long for features screen upload screenshots of the problem localization click to expand | 1 |
424,024 | 12,305,079,383 | IssuesEvent | 2020-05-11 21:42:43 | bluek8s/kubedirector | https://api.github.com/repos/bluek8s/kubedirector | closed | Feature Request: provide instructions for running on minikube | Priority: Low Project: Platform Support Type: Enhancement | I would like to learn about kubedirector on my laptop when I am travelling with no/poor Internet connection. It would be really good to have instructions for setting up KD on minikube. | 1.0 | Feature Request: provide instructions for running on minikube - I would like to learn about kubedirector on my laptop when I am travelling with no/poor Internet connection. It would be really good to have instructions for setting up KD on minikube. | priority | feature request provide instructions for running on minikube i would like to learn about kubedirector on my laptop when i am travelling with no poor internet connection it would be really good to have instructions for setting up kd on minikube | 1 |
320,200 | 9,777,772,209 | IssuesEvent | 2019-06-07 10:01:18 | thonny/thonny | https://api.github.com/repos/thonny/thonny | closed | gc.get_objects() failing | bug low priority | **[Original report](https://bitbucket.org/bitbucket-issue-migration\thonny-issues.zip/issue/616) by Anonymous.**
----------------------------------------
Try this:
```
>>> import gc
>>> dir(gc
<output>
>>> gc.get_objects()
PROBLEM WITH THONNY'S BACK-END:
Traceback (most recent call last):
File "/Applications/Thonny.app/Contents/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/thonny/backend.py", line 1133, in _execute_prepared_user_code
return {"value_info": self._vm.export_value(value)}
File "/Applications/Thonny.app/Contents/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/thonny/backend.py", line 840, in export_value
rep = repr(value)
File "/Applications/Thonny.app/Contents/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/parso/grammar.py", line 189, in __repr__
nonterminals = self._pgen_grammar._nonterminal_to_dfas.keys()
AttributeError: 'Grammar' object has no attribute '_nonterminal_to_dfas'
During handling of the above exception, another exception occurred:
```
| 1.0 | gc.get_objects() failing - **[Original report](https://bitbucket.org/bitbucket-issue-migration\thonny-issues.zip/issue/616) by Anonymous.**
----------------------------------------
Try this:
```
>>> import gc
>>> dir(gc
<output>
>>> gc.get_objects()
PROBLEM WITH THONNY'S BACK-END:
Traceback (most recent call last):
File "/Applications/Thonny.app/Contents/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/thonny/backend.py", line 1133, in _execute_prepared_user_code
return {"value_info": self._vm.export_value(value)}
File "/Applications/Thonny.app/Contents/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/thonny/backend.py", line 840, in export_value
rep = repr(value)
File "/Applications/Thonny.app/Contents/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/parso/grammar.py", line 189, in __repr__
nonterminals = self._pgen_grammar._nonterminal_to_dfas.keys()
AttributeError: 'Grammar' object has no attribute '_nonterminal_to_dfas'
During handling of the above exception, another exception occurred:
```
| priority | gc get objects failing by anonymous try this import gc dir gc gc get objects problem with thonny s back end traceback most recent call last file applications thonny app contents frameworks python framework versions lib site packages thonny backend py line in execute prepared user code return value info self vm export value value file applications thonny app contents frameworks python framework versions lib site packages thonny backend py line in export value rep repr value file applications thonny app contents frameworks python framework versions lib site packages parso grammar py line in repr nonterminals self pgen grammar nonterminal to dfas keys attributeerror grammar object has no attribute nonterminal to dfas during handling of the above exception another exception occurred | 1 |
438,275 | 12,625,521,774 | IssuesEvent | 2020-06-14 12:20:00 | hoelzer/poseidon | https://api.github.com/repos/hoelzer/poseidon | opened | GARD deactivation | enhancement priority (low) | I am thinking about making GARD an optional step or at least add some parameter to disable GARD. It's simply a time-consuming step. Although, recombination testing is important when looking for positive selection.
But when a user has a rather large input FASTA and is not at all interested in recombination detection it would be nice to have some functionality to deactivate GARD. | 1.0 | GARD deactivation - I am thinking about making GARD an optional step or at least add some parameter to disable GARD. It's simply a time-consuming step. Although, recombination testing is important when looking for positive selection.
But when a user has a rather large input FASTA and is not at all interested in recombination detection it would be nice to have some functionality to deactivate GARD. | priority | gard deactivation i am thinking about making gard an optional step or at least add some parameter to disable gard it s simply a time consuming step although recombination testing is important when looking for positive selection but when a user has a rather large input fasta and is not at all interested in recombination detection it would be nice to have some functionality to deactivate gard | 1 |
778,583 | 27,321,240,813 | IssuesEvent | 2023-02-24 20:03:22 | JavaMoney/jsr354-ri | https://api.github.com/repos/JavaMoney/jsr354-ri | closed | IDENT exchange rate provider can be reduced | question deferred Priority: Low | > IDENT provides rates with a factor of 1.0, where base and target currency are the same.
First of all, we can add a case with zero amount: 0 is always 0 in any other currency.
But as for me this looks strange to have a rate provider for the case while this can be handled by rate provider caller because this is common logic for all rate providers.
Also from performance point of view the IDENT provider should be first in chain of providers but it is easy to forgot.
So can we get rid off the IDENT rate provider?
# | 1.0 | IDENT exchange rate provider can be reduced - > IDENT provides rates with a factor of 1.0, where base and target currency are the same.
First of all, we can add a case with zero amount: 0 is always 0 in any other currency.
But as for me this looks strange to have a rate provider for the case while this can be handled by rate provider caller because this is common logic for all rate providers.
Also from performance point of view the IDENT provider should be first in chain of providers but it is easy to forgot.
So can we get rid off the IDENT rate provider?
# | priority | ident exchange rate provider can be reduced ident provides rates with a factor of where base and target currency are the same first of all we can add a case with zero amount is always in any other currency but as for me this looks strange to have a rate provider for the case while this can be handled by rate provider caller because this is common logic for all rate providers also from performance point of view the ident provider should be first in chain of providers but it is easy to forgot so can we get rid off the ident rate provider | 1 |
603,333 | 18,541,374,219 | IssuesEvent | 2021-10-21 16:33:29 | onicagroup/runway | https://api.github.com/repos/onicagroup/runway | opened | [BUG] assume_role not expanding Lookup | bug priority:low status:review_needed | ### Bug Description
When I define a Module with `assume_role` of the form:
```
assume_role: ${var assume_role}
arn: ${var assume_role}
post_deploy_env_revert: True
```
The Variable look up doesn't happen and I get an error:
```
[runway] deployment_1:processing deployment (in progress)
[runway] deployment_1:processing regions in parallel... (output will be interwoven)
[runway] assuming role ${var assume_role}...
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/concurrent/futures/process.py", line 239, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/runway/core/components/_deployment.py", line 175, in run
with aws.AssumeRole(context, **self.assume_role_config):
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/runway/core/providers/aws/_assume_role.py", line 127, in __enter__
self.assume()
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/runway/core/providers/aws/_assume_role.py", line 87, in assume
response = sts_client.assume_role(**self._kwargs)
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/botocore/client.py", line 388, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/botocore/client.py", line 680, in _make_api_call
request_dict = self._convert_to_request_dict(
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/botocore/client.py", line 728, in _convert_to_request_dict
request_dict = self._serializer.serialize_to_request(
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/botocore/validate.py", line 360, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid length for parameter RoleArn, value: 18, valid min length: 20
"""
```
As you can see the assume role tries to assume the role `${var assume_role}` it is not the expanded value of the `assume_role` variable.
If I run the short version:
```
assume_role: ${var assume_role}
```
It works correctly, but I can't set the `post_deploy_env_revert` element (as you might imagine).
### Expected Behavior
Expand the `${var assume_role}` Lookup when using the format:
```
assume_role: ${var assume_role}
arn: ${var assume_role}
post_deploy_env_revert: True
```
### Steps To Reproduce
1. Create a a variable in `runway.variables.yml`
```
assume_role: arn:aws:iam::<YOUR ACCOUNT ID>:role/<YOUR ROLE NAME>
```
2. In your `runway.yml` use the assume role
```
assume_role: ${var assume_role}
arn: ${var assume_role}
post_deploy_env_revert: True
```
### Runway version
2.4.2
### Installation Type
direct download (curl, wget, etc)
### OS / Environment
Linux ubuntu-20 5.4.0-58-generic #64-Ubuntu SMP Wed Dec 9 08:16:25 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
### Anything else?
_No response_ | 1.0 | [BUG] assume_role not expanding Lookup - ### Bug Description
When I define a Module with `assume_role` of the form:
```
assume_role: ${var assume_role}
arn: ${var assume_role}
post_deploy_env_revert: True
```
The Variable look up doesn't happen and I get an error:
```
[runway] deployment_1:processing deployment (in progress)
[runway] deployment_1:processing regions in parallel... (output will be interwoven)
[runway] assuming role ${var assume_role}...
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/concurrent/futures/process.py", line 239, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/runway/core/components/_deployment.py", line 175, in run
with aws.AssumeRole(context, **self.assume_role_config):
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/runway/core/providers/aws/_assume_role.py", line 127, in __enter__
self.assume()
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/runway/core/providers/aws/_assume_role.py", line 87, in assume
response = sts_client.assume_role(**self._kwargs)
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/botocore/client.py", line 388, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/botocore/client.py", line 680, in _make_api_call
request_dict = self._convert_to_request_dict(
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/botocore/client.py", line 728, in _convert_to_request_dict
request_dict = self._serializer.serialize_to_request(
File "/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/botocore/validate.py", line 360, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid length for parameter RoleArn, value: 18, valid min length: 20
"""
```
As you can see the assume role tries to assume the role `${var assume_role}` it is not the expanded value of the `assume_role` variable.
If I run the short version:
```
assume_role: ${var assume_role}
```
It works correctly, but I can't set the `post_deploy_env_revert` element (as you might imagine).
### Expected Behavior
Expand the `${var assume_role}` Lookup when using the format:
```
assume_role: ${var assume_role}
arn: ${var assume_role}
post_deploy_env_revert: True
```
### Steps To Reproduce
1. Create a a variable in `runway.variables.yml`
```
assume_role: arn:aws:iam::<YOUR ACCOUNT ID>:role/<YOUR ROLE NAME>
```
2. In your `runway.yml` use the assume role
```
assume_role: ${var assume_role}
arn: ${var assume_role}
post_deploy_env_revert: True
```
### Runway version
2.4.2
### Installation Type
direct download (curl, wget, etc)
### OS / Environment
Linux ubuntu-20 5.4.0-58-generic #64-Ubuntu SMP Wed Dec 9 08:16:25 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
### Anything else?
_No response_ | priority | assume role not expanding lookup bug description when i define a module with assume role of the form assume role var assume role arn var assume role post deploy env revert true the variable look up doesn t happen and i get an error deployment processing deployment in progress deployment processing regions in parallel output will be interwoven assuming role var assume role concurrent futures process remotetraceback traceback most recent call last file home circleci pyenv versions lib concurrent futures process py line in process worker r call item fn call item args call item kwargs file home circleci pyenv versions lib site packages runway core components deployment py line in run with aws assumerole context self assume role config file home circleci pyenv versions lib site packages runway core providers aws assume role py line in enter self assume file home circleci pyenv versions lib site packages runway core providers aws assume role py line in assume response sts client assume role self kwargs file home circleci pyenv versions lib site packages botocore client py line in api call return self make api call operation name kwargs file home circleci pyenv versions lib site packages botocore client py line in make api call request dict self convert to request dict file home circleci pyenv versions lib site packages botocore client py line in convert to request dict request dict self serializer serialize to request file home circleci pyenv versions lib site packages botocore validate py line in serialize to request raise paramvalidationerror report report generate report botocore exceptions paramvalidationerror parameter validation failed invalid length for parameter rolearn value valid min length as you can see the assume role tries to assume the role var assume role it is not the expanded value of the assume role variable if i run the short version assume role var assume role it works correctly but i can t set the post deploy env revert element as you might imagine expected behavior expand the var assume role lookup when using the format assume role var assume role arn var assume role post deploy env revert true steps to reproduce create a a variable in runway variables yml assume role arn aws iam role in your runway yml use the assume role assume role var assume role arn var assume role post deploy env revert true runway version installation type direct download curl wget etc os environment linux ubuntu generic ubuntu smp wed dec utc gnu linux anything else no response | 1 |
738,119 | 25,546,162,967 | IssuesEvent | 2022-11-29 18:59:50 | calcom/cal.com | https://api.github.com/repos/calcom/cal.com | closed | [CAL-501] Add calendar select is visually broken on /apps/installed/calendar | Low priority | ### Issue Summary

<sub>[CAL-501](https://linear.app/calcom/issue/CAL-501/add-calendar-select-is-broken-on-appsinstalledcalendar)</sub> | 1.0 | [CAL-501] Add calendar select is visually broken on /apps/installed/calendar - ### Issue Summary

<sub>[CAL-501](https://linear.app/calcom/issue/CAL-501/add-calendar-select-is-broken-on-appsinstalledcalendar)</sub> | priority | add calendar select is visually broken on apps installed calendar issue summary | 1 |
303,506 | 9,307,809,912 | IssuesEvent | 2019-03-25 13:14:28 | deep-compute/gitkanban | https://api.github.com/repos/deep-compute/gitkanban | opened | Create a New label called needs-testing. | about-hour-effort low-impact low-priority ready-status | * Once development is done before moving to dev, pre-staging, staging we have to do testing so based on the prashanthellina discussion we have to create a new label called `needs-testing`. | 1.0 | Create a New label called needs-testing. - * Once development is done before moving to dev, pre-staging, staging we have to do testing so based on the prashanthellina discussion we have to create a new label called `needs-testing`. | priority | create a new label called needs testing once development is done before moving to dev pre staging staging we have to do testing so based on the prashanthellina discussion we have to create a new label called needs testing | 1 |
263,111 | 8,273,615,086 | IssuesEvent | 2018-09-17 06:49:48 | phusion/passenger | https://api.github.com/repos/phusion/passenger | closed | If the app sends an invalid socket URI, the HelperAgent crashes | Bounty/Easy Priority/Low | If the app reports an invalid socket URI, like "tcp:undefined", the HelperAgent crashes. This is because when the Spawner parses the URI, an ArgumentException is raised that isn't caught.
| 1.0 | If the app sends an invalid socket URI, the HelperAgent crashes - If the app reports an invalid socket URI, like "tcp:undefined", the HelperAgent crashes. This is because when the Spawner parses the URI, an ArgumentException is raised that isn't caught.
| priority | if the app sends an invalid socket uri the helperagent crashes if the app reports an invalid socket uri like tcp undefined the helperagent crashes this is because when the spawner parses the uri an argumentexception is raised that isn t caught | 1 |
90,880 | 3,833,800,893 | IssuesEvent | 2016-04-01 06:32:37 | Wraithaven/WraithEngine2 | https://api.github.com/repos/Wraithaven/WraithEngine2 | opened | Redesign Overall Appearance | priority:low type:feature | The editor is pretty ugly. Go back and redesign it to look nicer, and even have its own look and feel. I'm also thinking maybe the ability to switch between a light and a dark skin design. No images, just gradients. | 1.0 | Redesign Overall Appearance - The editor is pretty ugly. Go back and redesign it to look nicer, and even have its own look and feel. I'm also thinking maybe the ability to switch between a light and a dark skin design. No images, just gradients. | priority | redesign overall appearance the editor is pretty ugly go back and redesign it to look nicer and even have its own look and feel i m also thinking maybe the ability to switch between a light and a dark skin design no images just gradients | 1
331,874 | 10,077,890,004 | IssuesEvent | 2019-07-24 19:52:33 | ProjectSidewalk/SidewalkWebpage | https://api.github.com/repos/ProjectSidewalk/SidewalkWebpage | closed | Confusing implementation of labelTypeToId function (any idea why?) | EasyFix! Priority: Low discussion | So in the label table, the label types are represented by an integer. The integers are then mapped to the string that represents that label type in the label_type table. Obviously, having a function that converts a label_type_id to a label_type string is very useful. I looked at the implementation in the LabelTypeTable.scala file...
But it doesn't make sense to me. It returns the correct result, but if the label type is not already in the label_type table, then it enters a new record in the table... This seems like a really weird side effect that shouldn't be happening. Does anyone know why it was designed this way..? If there is a good reason to do this, then at least the name of the function and the documentation on what it does should change, but I think that we should probably just get rid of that side-effect if no one knows why it is currently there. | 1.0 | Confusing implementation of labelTypeToId function (any idea why?) - So in the label table, the label types are represented by an integer. The integers are then mapped to the string that represents that label type in the label_type table. Obviously, having a function that converts a label_type_id to a label_type string is very useful. I looked at the implementation in the LabelTypeTable.scala file...
But it doesn't make sense to me. It returns the correct result, but if the label type is not already in the label_type table, then it enters a new record in the table... This seems like a really weird side effect that shouldn't be happening. Does anyone know why it was designed this way..? If there is a good reason to do this, then at least the name of the function and the documentation on what it does should change, but I think that we should probably just get rid of that side-effect if no one knows why it is currently there. | priority | confusing implementation of labeltypetoid function any idea why so in the label table the label types are represented by an integer the integers are then mapped to the string that represents that label type in the label type table obviously having a function that converts a label type id to a label type string is very useful i looked at the implementation in the labeltypetable scala file but it doesn t make sense to me it returns the correct result but if the label type is not already in the label type table then it enters a new record in the table this seems like a really weird side effect that shouldn t be happening does anyone know why it was designed this way if there is a good reason to do this then at least the name of the function and the documentation on what it does should change but i think that we should probably just get rid of that side effect if no one knows why it is currently there | 1 |
610,098 | 18,893,876,509 | IssuesEvent | 2021-11-15 15:51:34 | stackabletech/issues | https://api.github.com/repos/stackabletech/issues | opened | Write a simple CLI utility to bulk-create issues in ZenHub | priority/low | The tool should be able to use the ZenHub API to create issues in all our operator repositories and
- set their title
- set their description
- set labels
- set the sprint
- set the epic
I'm not even sure if the API allows all of that. | 1.0 | Write a simple CLI utility to bulk-create issues in ZenHub - The tool should be able to use the ZenHub API to create issues in all our operator repositories and
- set their title
- set their description
- set labels
- set the sprint
- set the epic
I'm not even sure if the API allows all of that. | priority | write a simple cli utility to bulk create issues in zenhub the tool should be able to use the zenhub api to create issues in all our operator repositories and set their title set their description set labels set the sprint set the epic i m not even sure if the api allows all of that | 1 |
451,403 | 13,034,906,095 | IssuesEvent | 2020-07-28 09:28:35 | ariadne-cps/ariadne | https://api.github.com/repos/ariadne-cps/ariadne | opened | Change loop indices from Nat to SizeType | cleanup priority:low | We should consistently use `SizeType` (or `DimensionType` or `DegreeType`) for loop indices, and not `Nat`.
Note that the use of `Nat` should be reserved for builtin integers used in (real number) arithmetic, and not as counters/indices. | 1.0 | Change loop indices from Nat to SizeType - We should consistently use `SizeType` (or `DimensionType` or `DegreeType`) for loop indices, and not `Nat`.
Note that the use of `Nat` should be reserved for builtin integers used in (real number) arithmetic, and not as counters/indices. | priority | change loop indices from nat to sizetype we should consistently use sizetype or dimensiontype or degreetype for loop indices and not nat note that the use of nat should be reserved for builtin integers used in real number arithmetic and not as counters indices | 1 |
511,990 | 14,886,514,379 | IssuesEvent | 2021-01-20 17:02:35 | dermestid/bold-phylodiv-scripts | https://api.github.com/repos/dermestid/bold-phylodiv-scripts | closed | More control over plotting | enhancement lower priority ui | Currently the map plots grids on top of each other for each request, until refresh.
Would be better to have datasets temporarily stored and allow the user to toggle them on and off from display. | 1.0 | More control over plotting - Currently the map plots grids on top of each other for each request, until refresh.
Would be better to have datasets temporarily stored and allow the user to toggle them on and off from display. | priority | more control over plotting currently the map plots grids on top of each other for each request until refresh would be better to have datasets temporarily stored and allow the user to toggle them on and off from display | 1 |
242,042 | 7,836,947,553 | IssuesEvent | 2018-06-18 01:59:03 | tlienart/JuDoc.jl | https://api.github.com/repos/tlienart/JuDoc.jl | opened | Parse CSS as well | brainstorm enhancement low-priority | should allow rudimentary variables in CSS (fill-type). For colours, ratios, etc
* would require tracking of modifications + processing | 1.0 | Parse CSS as well - should allow rudimentary variables in CSS (fill-type). For colours, ratios, etc
* would require tracking of modifications + processing | priority | parse css as well should allow rudimentary variables in css fill type for colours ratios etc would require tracking of modifications processing | 1 |
754,923 | 26,408,629,555 | IssuesEvent | 2023-01-13 10:15:26 | Avaiga/taipy-gui | https://api.github.com/repos/Avaiga/taipy-gui | closed | BUG-Data specification in charts is not homogeneous | Gui: Back-End 💥Malfunction 🟩 Priority: Low | **Description**
If you use an array of dicts in the `data` property in a chart control, one cannot create the array on the fly in the control definition.
That is:
If you have:
```
d1 = { ... }
d2 = { ... }
d = [d1, d2]
```
Then in the Markdown definition:
```
<|{d}|chart|...|>
```
will work, but
```
<|{[d1,d2]}|chart|...|>
```
will break (with a very cryptic message).
**How to reproduce**
Here is some code that reproduces the problem:
```
from taipy.gui import Gui
x = list(range(10))
y1 = x
y2 = list(reversed(x))
data1 = {
"x": x,
"y": y1
}
data2 = {
"x": x,
"y": y2
}
data = [data1, data2]
properties = {
"x[1]": "0/x",
"y[1]": "0/y",
"x[2]": "1/x",
"y[2]": "1/y",
}
page = """
<|{data}|chart|properties={properties}|>
*|{[data1,data2]}|chart|properties={properties}|*
"""
Gui(page).run(run_browser=False)
```
If you replace the control definition by the second (kind-of commented out), the application breaks on its first display.
**Expected behavior**
Both control definitions should behave the same way.
| 1.0 | BUG-Data specification in charts is not homogeneous - **Description**
If you use an array of dicts in the `data` property in a chart control, one cannot create the array on the fly in the control definition.
That is:
If you have:
```
d1 = { ... }
d2 = { ... }
d = [d1, d2]
```
Then in the Markdown definition:
```
<|{d}|chart|...|>
```
will work, but
```
<|{[d1,d2]}|chart|...|>
```
will break (with a very cryptic message).
**How to reproduce**
Here is some code that reproduces the problem:
```
from taipy.gui import Gui
x = list(range(10))
y1 = x
y2 = list(reversed(x))
data1 = {
"x": x,
"y": y1
}
data2 = {
"x": x,
"y": y2
}
data = [data1, data2]
properties = {
"x[1]": "0/x",
"y[1]": "0/y",
"x[2]": "1/x",
"y[2]": "1/y",
}
page = """
<|{data}|chart|properties={properties}|>
*|{[data1,data2]}|chart|properties={properties}|*
"""
Gui(page).run(run_browser=False)
```
If you replace the control definition by the second (kind-of commented out), the application breaks on its first display.
**Expected behavior**
Both control definitions should behave the same way.
| priority | bug data specification in charts is not homogeneous description if you use an array of dicts in the data property in a chart control one cannot create the array on the fly in the control definition that is if you have d then in the markdown definition will work but will break with a very cryptic message how to reproduce here is some code that reproduces the problem from taipy gui import gui x list range x list reversed x x x y x x y data properties x x y y x x y y page chart properties properties gui page run run browser false if you replace the control definition by the second kind of commented out the application breaks on its first display expected behavior both control definitions should behave the same way | 1
428,200 | 12,404,581,686 | IssuesEvent | 2020-05-21 15:46:44 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | closed | Set up logging for hearing dispositions | Priority: Medium Product: caseflow-hearings Stakeholder: BVA Team: Tango 💃 | ## Description
Record when dispositions were set on hearings and what they were
## Acceptance criteria
- [ ] Write query
- [ ] Set up graphs in Metabase
## Background/context/resources
Long-term goal: increase hearing capacity
Short-term goal: allow more hearings to be held during Covid-19 stay-at-home orders
## Technical notes
| 1.0 | Set up logging for hearing dispositions - ## Description
Record when dispositions were set on hearings and what they were
## Acceptance criteria
- [ ] Write query
- [ ] Set up graphs in Metabase
## Background/context/resources
Long-term goal: increase hearing capacity
Short-term goal: allow more hearings to be held during Covid-19 stay-at-home orders
## Technical notes
| priority | set up logging for hearing dispositions description record when dispositions were set on hearings and what they were acceptance criteria write query set up graphs in metabase background context resources long term goal increase hearing capacity short term goal allow more hearings to be held during covid stay at home orders technical notes | 1 |
67,506 | 3,274,648,095 | IssuesEvent | 2015-10-26 12:07:16 | YetiForceCompany/YetiForceCRM | https://api.github.com/repos/YetiForceCompany/YetiForceCRM | closed | Error when saving picklist | Label::Logic Priority::#1 Low Type::Bug | Module: TimeControl, I want to add a new field; I clicked save but got the error

| 1.0 | Error when saving picklist - Module: TimeControl, I want to add a new field; I clicked save but got the error

| priority | error when saving picklist module timecontrol i want to add a new field i clicked save but got the error | 1
744,196 | 25,932,702,048 | IssuesEvent | 2022-12-16 11:24:25 | morpheus65535/bazarr | https://api.github.com/repos/morpheus65535/bazarr | closed | Subtitle search broken after 1.1.3 upgrade | bug priority:low awaiting feedback fixed | **Describe the bug**
Error when trying to search for subtitles.
Job "Search for wanted Series Subtitles (trigger: interval[3:00:00], next run at: 2022-12-05 13:55:56 -03)" raised an exception.
Detail:
`Traceback (most recent call last):
File "/app/bazarr/bin/bazarr/../libs/apscheduler/executors/base.py", line 125, in run_job
retval = job.func(*job.args, **job.kwargs)
File "/app/bazarr/bin/bazarr/subtitles/wanted/series.py", line 130, in wanted_search_missing_subtitles_series
wanted_download_subtitles(episode[\'sonarrEpisodeId\'])
File "/app/bazarr/bin/bazarr/subtitles/wanted/series.py", line 94, in wanted_download_subtitles
_wanted_episode(episode)
File "/app/bazarr/bin/bazarr/subtitles/wanted/series.py", line 47, in _wanted_episode
for result in generate_subtitles(path_mappings.path_replace(episode[\'path\']),
File "/app/bazarr/bin/bazarr/subtitles/download.py", line 63, in generate_subtitles
if check_if_still_required and language not in check_missing_languages(path, media_type):
TypeError: argument of type \'NoneType\' is not iterable`
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'System'
2. Click on 'Tasks'
3. Click on 'Search for wanted Series Subtitles'
4. See error
**Software (please complete the following information):**
- Bazarr: [1.1.3]
- Radarr version [4.2.4.6635]
- Sonarr version [3.0.9.1549]
- OS: [installed via Docker]
| 1.0 | Subtitle search broken after 1.1.3 upgrade - **Describe the bug**
Error when trying to search for subtitles.
Job "Search for wanted Series Subtitles (trigger: interval[3:00:00], next run at: 2022-12-05 13:55:56 -03)" raised an exception.
Detail:
`Traceback (most recent call last):
File "/app/bazarr/bin/bazarr/../libs/apscheduler/executors/base.py", line 125, in run_job
retval = job.func(*job.args, **job.kwargs)
File "/app/bazarr/bin/bazarr/subtitles/wanted/series.py", line 130, in wanted_search_missing_subtitles_series
wanted_download_subtitles(episode[\'sonarrEpisodeId\'])
File "/app/bazarr/bin/bazarr/subtitles/wanted/series.py", line 94, in wanted_download_subtitles
_wanted_episode(episode)
File "/app/bazarr/bin/bazarr/subtitles/wanted/series.py", line 47, in _wanted_episode
for result in generate_subtitles(path_mappings.path_replace(episode[\'path\']),
File "/app/bazarr/bin/bazarr/subtitles/download.py", line 63, in generate_subtitles
if check_if_still_required and language not in check_missing_languages(path, media_type):
TypeError: argument of type \'NoneType\' is not iterable`
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'System'
2. Click on 'Tasks'
3. Click on 'Search for wanted Series Subtitles'
4. See error
**Software (please complete the following information):**
- Bazarr: [1.1.3]
- Radarr version [4.2.4.6635]
- Sonarr version [3.0.9.1549]
- OS: [installed via Docker]
| priority | subtitle search broken after upgrade describe the bug error when trying to search for subtitles job search for wanted series subtitles trigger interval next run at raised an exception detail traceback most recent call last file app bazarr bin bazarr libs apscheduler executors base py line in run job retval job func job args job kwargs file app bazarr bin bazarr subtitles wanted series py line in wanted search missing subtitles series wanted download subtitles episode file app bazarr bin bazarr subtitles wanted series py line in wanted download subtitles wanted episode episode file app bazarr bin bazarr subtitles wanted series py line in wanted episode for result in generate subtitles path mappings path replace episode file app bazarr bin bazarr subtitles download py line in generate subtitles if check if still required and language not in check missing languages path media type typeerror argument of type nonetype is not iterable to reproduce steps to reproduce the behavior go to system click on tasks click on search for wanted series subtitles see error software please complete the following information bazarr radarr version sonarr version os | 1 |
553,505 | 16,373,037,072 | IssuesEvent | 2021-05-15 14:38:09 | blahblahbal/Exxo-Avalon-Issue-Tracker | https://api.github.com/repos/blahblahbal/Exxo-Avalon-Issue-Tracker | closed | Drax tooltip doesn't say it can mine chlorophyte & caesium | enhancement low priority | The pickaxe axe has a second tooltip, "Can mine Chlorophyte and Caesium Ore", the drax does not have this | 1.0 | Drax tooltip doesn't say it can mine chlorophyte & caesium - The pickaxe axe has a second tooltip, "Can mine Chlorophyte and Caesium Ore", the drax does not have this | priority | drax tooltip doesn t say it can mine chlorophyte caesium the pickaxe axe has a second tooltip can mine chlorophyte and caesium ore the drax does not have this | 1 |
164,492 | 6,227,190,845 | IssuesEvent | 2017-07-10 20:12:11 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] delete dialog formatting is off when items are nested | bug Priority: Low | <img width="1081" alt="screen shot 2017-06-22 at 5 00 05 pm" src="https://user-images.githubusercontent.com/169432/27455913-f8a8b984-576c-11e7-97c5-5174e22f1abe.png">
| 1.0 | [studio-ui] delete dialog formatting is off when items are nested - <img width="1081" alt="screen shot 2017-06-22 at 5 00 05 pm" src="https://user-images.githubusercontent.com/169432/27455913-f8a8b984-576c-11e7-97c5-5174e22f1abe.png">
| priority | delete dialog formatting is off when items are nested img width alt screen shot at pm src | 1 |
312,761 | 9,552,863,905 | IssuesEvent | 2019-05-02 17:45:11 | WoWManiaUK/Blackwing-Lair | https://api.github.com/repos/WoWManiaUK/Blackwing-Lair | closed | [npc] rotbrain magus -49423- Tirisfal Glades | Fixed Confirmed Fixed in Dev Legacy (wotlk) Low Priority Priority zone 1-20 | https://www.wowhead.com/npc=49423/rotbrain-magus#abilities
The mob is supposed to cast abilities but instead it's doing only mdps dmg | 2.0 | [npc] rotbrain magus -49423- Tirisfal Glades - https://www.wowhead.com/npc=49423/rotbrain-magus#abilities
The mob is supposed to cast abilities but instead it's doing only mdps dmg | priority | rotbrain magus tirisfal glades the mob is supposed to cast abilities but instead it s doing only mdps dmg | 1
143,960 | 5,533,380,557 | IssuesEvent | 2017-03-21 13:13:49 | pmem/issues | https://api.github.com/repos/pmem/issues | opened | unit tests: most of remote tests on sockets fail | Exposure: Low OS: Linux Priority: 4 low Type: Bug | The following unit tests fail on the sockets provider:
obj_rpmem_basic_integration/TEST0 failed, FS=pmem RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
obj_rpmem_basic_integration/TEST1 failed, FS=pmem RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
obj_rpmem_basic_integration/TEST2 failed, FS=pmem RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
obj_rpmem_basic_integration/TEST3 failed, FS=pmem RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
obj_rpmem_basic_integration/TEST12 failed, FS=pmem RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
obj_rpmem_constructor/TEST0 failed, FS=pmem RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
obj_rpmem_heap_state/TEST0 failed, FS=pmem RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
obj_rpmem_heap_state/TEST1 failed, FS=pmem RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
obj_rpmem_heap_state/TEST2 failed, FS=pmem RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
obj_rpmem_heap_interrupt/TEST0 failed, FS=pmem RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_fip/TEST1 failed, FS=none RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_fip/TEST2 failed, FS=none RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_fip/TEST3 failed, FS=none RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_fip/TEST4 failed, FS=none RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_basic/TEST0 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_basic/TEST1 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_basic/TEST2 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_basic/TEST3 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_basic/TEST4 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_basic/TEST5 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_basic/TEST6 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_basic/TEST7 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
rpmem_basic/TEST9 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
pmempool_sync_remote/TEST0 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
pmempool_sync_remote/TEST1 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
pmempool_sync_remote/TEST2 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
pmempool_sync_remote/TEST3 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
pmempool_sync_remote/TEST4 failed, FS=any RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
Exemplary logs:
```
obj_rpmem_basic_integration/TEST0: SETUP (all/pmem/debug/sockets/GPSPM)
obj_rpmem_basic_integration/TEST0 crashed (signal 6).
Last 30 lines of node_0_rpmemd0.log below (whole file has 704 lines).
yes: standard output: Broken pipe
yes: write error
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log cq_data_size: 8
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log cq_cnt: 32
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log ep_cnt: 128
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log tx_ctx_cnt: 16
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log rx_ctx_cnt: 16
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log max_ep_tx_ctx: 16
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log max_ep_rx_ctx: 16
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log max_ep_stx_ctx: 0
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log max_ep_srx_ctx: 0
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log fi_fabric_attr:
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log name: 192.168.0.0/24
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log prov_name: sockets
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log prov_version: 2.0
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log create request response: (status = 0)
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log port: 39159
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log rkey: 0x0
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log raddr: 0x7fb429e01000
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log nlanes: 368
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log persist method: General Purpose Server Persistency Method
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log waiting for in-band connection
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log close request
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log error reading from event queue: cannot read error from event queue: Unknown error -11
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log in-band thread failed -- '-1'
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log in-band thread failed: Success
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log closing pool
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log pool closed
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log removing 'testset_remote'
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log removed 'testset_remote'
obj_rpmem_basic_integration/TEST0 node_0_rpmemd0.log close request response (status = 6)
node_1_err0.log below.
yes: standard output: Broken pipe
yes: write error
obj_rpmem_basic_integration/TEST0 node_1_err0.log {obj_basic_integration.c:591 main} obj_rpmem_basic_integration/TEST0: Error: pmemobj_create: testset_local: Remote I/O error
obj_rpmem_basic_integration/TEST0 node_1_err0.log {ut_backtrace.c:203 ut_sighandler} obj_rpmem_basic_integration/TEST0:
obj_rpmem_basic_integration/TEST0 node_1_err0.log
obj_rpmem_basic_integration/TEST0 node_1_err0.log {ut_backtrace.c:204 ut_sighandler} obj_rpmem_basic_integration/TEST0: Signal 6, backtrace:
obj_rpmem_basic_integration/TEST0 node_1_err0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 0: ./obj_basic_integration() [0x409235]
obj_rpmem_basic_integration/TEST0 node_1_err0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 1: ./obj_basic_integration() [0x40932a]
obj_rpmem_basic_integration/TEST0 node_1_err0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 2: /lib64/libc.so.6(+0x35250) [0x7f08289bf250]
obj_rpmem_basic_integration/TEST0 node_1_err0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 3: /lib64/libc.so.6(gsignal+0x37) [0x7f08289bf1d7]
obj_rpmem_basic_integration/TEST0 node_1_err0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 4: /lib64/libc.so.6(abort+0x148) [0x7f08289c08c8]
obj_rpmem_basic_integration/TEST0 node_1_err0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 5: ./obj_basic_integration() [0x407c87]
obj_rpmem_basic_integration/TEST0 node_1_err0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 6: ./obj_basic_integration() [0x406a63]
obj_rpmem_basic_integration/TEST0 node_1_err0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 7: /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f08289abb35]
obj_rpmem_basic_integration/TEST0 node_1_err0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 8: ./obj_basic_integration() [0x4023f9]
obj_rpmem_basic_integration/TEST0 node_1_err0.log {ut_backtrace.c:206 ut_sighandler} obj_rpmem_basic_integration/TEST0:
obj_rpmem_basic_integration/TEST0 node_1_err0.log
node_1_out0.log below.
yes: standard output: Broken pipe
yes: write error
obj_rpmem_basic_integration/TEST0 node_1_out0.log obj_rpmem_basic_integration/TEST0: START: obj_basic_integration
obj_rpmem_basic_integration/TEST0 node_1_out0.log ./obj_basic_integration testset_local
Last 30 lines of node_1_pmemobj0.log below (whole file has 66 lines).
yes: standard output: Broken pipe
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [file.c:351 util_file_create] path /tmp/remote/dir1/test_obj_rpmem_basic_integration/testfile_local size 8388608 minsize 8388608
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:2083 util_replica_create] set 0x11b8ce0 repidx 0 flags 1 sig PMEMOBJ major 3 compat 0 incompat 0 ro_comapt 0prev_repl_uuid (nil) next_repl_uuid (nil) arch_flags (nil)
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:1871 util_replica_create_local] set 0x11b8ce0 repidx 0 flags 1 sig PMEMOBJ major 3 compat 0 incompat 0 ro_comapt 0prev_repl_uuid (nil) next_repl_uuid (nil) arch_flags (nil)
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [mmap_linux.c:145 util_map_hint] len 8388608 req_align 0
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:374 util_map_part] part 0x11b8c30 addr 0x7f0826c00000 size 8388608 offset 0 flags 1
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:309 util_map_hdr] part 0x11b8c30 flags 1
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:1592 util_header_create] set 0x11b8ce0 repidx 0 partidx 0 sig PMEMOBJ major 3 compat 0 incompat 0 ro_comapt 0prev_repl_uuid (nil) next_repl_uuid (nil) arch_flags (nil)
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:1994 util_replica_create_local] replica #0 addr 0x7f0826c00000
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:2083 util_replica_create] set 0x11b8ce0 repidx 1 flags 1 sig PMEMOBJ major 3 compat 0 incompat 0 ro_comapt 0prev_repl_uuid (nil) next_repl_uuid (nil) arch_flags (nil)
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:2029 util_replica_create_remote] set 0x11b8ce0 repidx 1 flags 1 sig PMEMOBJ major 3 compat 0 incompat 0 ro_comapt 0 prev_repl_uuid (nil) next_repl_uuid (nil)
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:1592 util_header_create] set 0x11b8ce0 repidx 1 partidx 0 sig PMEMOBJ major 3 compat 0 incompat 0 ro_comapt 0prev_repl_uuid (nil) next_repl_uuid (nil) arch_flags (nil)
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:2064 util_replica_create_remote] replica #1 addr 0x11be000
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:1448 util_poolset_files_remote] set 0x11b8ce0 minsize 8388608 nlanes 0x7ffd67dc6ae4 create 1
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:1327 util_poolset_remote_open] rep 0x11b8d30 repidx 1 minsize 8388608 create 1 pool_addr 0x7f0826c01000 pool_size 8384512 nlanes 0x7ffd67dc6ae4
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <1> [set.c:1348 util_poolset_remote_open] creating remote replica #1 failed
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:2110 util_replica_close] set 0x11b8ce0 repidx 0
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:427 util_unmap_part] part 0x11b8c30
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:2110 util_replica_close] set 0x11b8ce0 repidx 1
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:567 util_poolset_close] set 0x11b8ce0 del 1
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:2110 util_replica_close] set 0x11b8ce0 repidx 0
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:427 util_unmap_part] part 0x11b8c30
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:2110 util_replica_close] set 0x11b8ce0 repidx 1
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:158 util_remote_unload]
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [dlsym.h:83 util_dlclose] handle 0x11b8de0
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:465 util_poolset_free] set 0x11b8ce0
yes: write error
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <2> [obj.c:1083 pmemobj_create] cannot create pool or pool set
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [libpmemobj.c:65 libpmemobj_fini]
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [obj.c:209 obj_fini]
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [set.c:109 util_remote_fini]
obj_rpmem_basic_integration/TEST0 node_1_pmemobj0.log <libpmemobj>: <3> [mmap.c:124 util_mmap_fini]
node_1_rpmem0.log below.
yes: standard output: Broken pipe
yes: write error
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <1> [out.c:244 out_init] pid 70892: program: /tmp/remote/dir1/test_obj_rpmem_basic_integration/obj_basic_integration
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <1> [out.c:246 out_init] librpmem version 1.1
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <1> [out.c:247 out_init] src version SRCVERSION:1.2+wtp1-372-g2b34dd1
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <1> [out.c:255 out_init] compiled with support for Valgrind pmemcheck
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <1> [out.c:260 out_init] compiled with support for Valgrind helgrind
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <1> [out.c:265 out_init] compiled with support for Valgrind memcheck
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <1> [out.c:270 out_init] compiled with support for Valgrind drd
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [librpmem.c:66 librpmem_init]
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem.c:344 rpmem_log_args] create request:
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem.c:345 rpmem_log_args] target: 192.168.0.182
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem.c:346 rpmem_log_args] pool set: testset_remote
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem.c:349 rpmem_log_args] nlanes: 1024
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem.c:210 rpmem_common_init] provider: sockets
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem.c:214 rpmem_common_init] forcing using IPv4
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem.c:232 rpmem_common_init] out-of-band connection established
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem_obc.c:503 rpmem_obc_create] create request message sent
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem_obc.c:513 rpmem_obc_create] create request response received
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem.c:358 rpmem_log_resp] create request response:
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem.c:359 rpmem_log_resp] nlanes: 368
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem.c:360 rpmem_log_resp] port: 39159
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem.c:362 rpmem_log_resp] persist method: General Purpose Server Persistency Method
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem.c:363 rpmem_log_resp] remote addr: 0x7fb429e01000
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <1> [rpmem_fip_common.c:189 rpmem_fip_read_eq] error reading from event queue: Connection refused
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <1> [rpmem.c:308 rpmem_common_fip_init] establishing in-band connection failed: Connection refused
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem_obc.c:685 rpmem_obc_close] close request message sent
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [rpmem_obc.c:695 rpmem_obc_close] close request response received
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <1> [rpmem_obc.c:252 rpmem_obc_check_hdr_resp] Fatal error
obj_rpmem_basic_integration/TEST0 node_1_rpmem0.log <librpmem>: <3> [librpmem.c:85 librpmem_fini]
node_1_trace0.log below.
yes: standard output: Broken pipe
yes: write error
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {obj_basic_integration.c:577 main} obj_rpmem_basic_integration/TEST0: START: obj_basic_integration
obj_rpmem_basic_integration/TEST0 node_1_trace0.log ./obj_basic_integration testset_local
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {obj_basic_integration.c:591 main} obj_rpmem_basic_integration/TEST0: Error: pmemobj_create: testset_local: Remote I/O error
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {ut_backtrace.c:203 ut_sighandler} obj_rpmem_basic_integration/TEST0:
obj_rpmem_basic_integration/TEST0 node_1_trace0.log
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {ut_backtrace.c:204 ut_sighandler} obj_rpmem_basic_integration/TEST0: Signal 6, backtrace:
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 0: ./obj_basic_integration() [0x409235]
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 1: ./obj_basic_integration() [0x40932a]
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 2: /lib64/libc.so.6(+0x35250) [0x7f08289bf250]
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 3: /lib64/libc.so.6(gsignal+0x37) [0x7f08289bf1d7]
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 4: /lib64/libc.so.6(abort+0x148) [0x7f08289c08c8]
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 5: ./obj_basic_integration() [0x407c87]
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 6: ./obj_basic_integration() [0x406a63]
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 7: /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f08289abb35]
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {ut_backtrace.c:154 ut_dump_backtrace} obj_rpmem_basic_integration/TEST0: 8: ./obj_basic_integration() [0x4023f9]
obj_rpmem_basic_integration/TEST0 node_1_trace0.log {ut_backtrace.c:206 ut_sighandler} obj_rpmem_basic_integration/TEST0:
obj_rpmem_basic_integration/TEST0 node_1_trace0.log
obj_rpmem_basic_integration/TEST0: CLEAN (cleaning processes on remote nodes)
RUNTESTS: stopping: obj_rpmem_basic_integration/TEST0 failed, TEST=all FS=pmem BUILD=debug RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
make[1]: *** [TEST0] Error 1
```
```
rpmem_fip/TEST1: SETUP (all/none/debug/sockets/GPSPM)
rpmem_fip/TEST1 crashed (signal 6).
node_0_rpmemd1.log below.
yes: standard output: Broken pipe
yes: write error
node_1_err1.log below.
yes: standard output: Broken pipe
yes: write error
rpmem_fip/TEST1 node_1_err1.log {rpmem_fip_test.c:331 client_connect} rpmem_fip/TEST1: Error: assertion failure: ret (0xfffffffffffffefd) == 0 (0x0)
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:203 ut_sighandler} rpmem_fip/TEST1:
rpmem_fip/TEST1 node_1_err1.log
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:204 ut_sighandler} rpmem_fip/TEST1: Signal 6, backtrace:
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 0: ./rpmem_fip() [0x42123e]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 1: ./rpmem_fip() [0x421333]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 2: /lib64/libc.so.6(+0x35250) [0x7f360ca29250]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 3: /lib64/libc.so.6(gsignal+0x37) [0x7f360ca291d7]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 4: /lib64/libc.so.6(abort+0x148) [0x7f360ca2a8c8]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 5: ./rpmem_fip() [0x41fbb7]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 6: ./rpmem_fip() [0x403fac]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 7: ./rpmem_fip() [0x4034c7]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 8: ./rpmem_fip() [0x405625]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 9: /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f360ca15b35]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 10: ./rpmem_fip() [0x4032a9]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:206 ut_sighandler} rpmem_fip/TEST1:
rpmem_fip/TEST1 node_1_err1.log
node_1_out1.log below.
yes: standard output: Broken pipe
yes: write error
rpmem_fip/TEST1 node_1_out1.log rpmem_fip/TEST1: START: rpmem_obc
rpmem_fip/TEST1 node_1_out1.log ./rpmem_fip client_connect 192.168.0.182 sockets GPSPM
node_1_rpmem1.log below.
yes: standard output: Broken pipe
yes: write error
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:244 out_init] pid 79349: program: /tmp/remote/dir1/test_rpmem_fip/rpmem_fip
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:246 out_init] rpmem_fip version 0.0
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:247 out_init] src version SRCVERSION:1.2+wtp1-372-g2b34dd1
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:255 out_init] compiled with support for Valgrind pmemcheck
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:260 out_init] compiled with support for Valgrind helgrind
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:265 out_init] compiled with support for Valgrind memcheck
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:270 out_init] compiled with support for Valgrind drd
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <3> [mmap.c:92 util_mmap_init]
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [rpmem_fip_common.c:189 rpmem_fip_read_eq] error reading from event queue: Connection refused
node_1_rpmemd1.log below.
yes: standard output: Broken pipe
yes: write error
node_1_trace1.log below.
yes: standard output: Broken pipe
yes: write error
rpmem_fip/TEST1 node_1_trace1.log {rpmem_fip_test.c:752 main} rpmem_fip/TEST1: START: rpmem_obc
rpmem_fip/TEST1 node_1_trace1.log ./rpmem_fip client_connect 192.168.0.182 sockets GPSPM
rpmem_fip/TEST1 node_1_trace1.log {rpmem_fip_test.c:331 client_connect} rpmem_fip/TEST1: Error: assertion failure: ret (0xfffffffffffffefd) == 0 (0x0)
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:203 ut_sighandler} rpmem_fip/TEST1:
rpmem_fip/TEST1 node_1_trace1.log
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:204 ut_sighandler} rpmem_fip/TEST1: Signal 6, backtrace:
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 0: ./rpmem_fip() [0x42123e]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 1: ./rpmem_fip() [0x421333]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 2: /lib64/libc.so.6(+0x35250) [0x7f360ca29250]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 3: /lib64/libc.so.6(gsignal+0x37) [0x7f360ca291d7]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 4: /lib64/libc.so.6(abort+0x148) [0x7f360ca2a8c8]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 5: ./rpmem_fip() [0x41fbb7]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 6: ./rpmem_fip() [0x403fac]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 7: ./rpmem_fip() [0x4034c7]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 8: ./rpmem_fip() [0x405625]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 9: /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f360ca15b35]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 10: ./rpmem_fip() [0x4032a9]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:206 ut_sighandler} rpmem_fip/TEST1:
rpmem_fip/TEST1 node_1_trace1.log
rpmem_fip/TEST1: CLEAN (cleaning processes on remote nodes)
RUNTESTS: stopping: rpmem_fip/TEST1 failed, TEST=all FS=none BUILD=debug RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
make[1]: *** [TEST1] Error 1
```
Steps to reproduce:
```
1. cd nvml && make test
2. cd nvml/src/test
3. make sync-remotes
4. make pcheck-remote -k TEST_TYPE=all
```
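When triaging rpmemd/librpmem logs like the ones above, the root cause is easiest to spot by filtering for the highest-severity entries (`<1>`) that mention a failure — here that surfaces the two `Connection refused` lines. A minimal sketch (the severity convention and the keyword filter are assumptions based on how these particular logs are formatted; level `<1>` is also used for startup banners, hence the keyword check):

```python
import re

# Severity appears as "<N>" after the library tag, e.g. "<librpmem>: <1> [...]".
SEVERITY_RE = re.compile(r">: <(\d)> ")

def error_lines(log_text):
    """Return level-1 log lines that look like actual failures
    (level 1 alone is not enough: out_init banners are level 1 too)."""
    out = []
    for line in log_text.splitlines():
        m = SEVERITY_RE.search(line)
        if m and int(m.group(1)) == 1 and \
                any(k in line for k in ("error", "failed", "Fatal")):
            out.append(line)
    return out

# Sample lines copied from the node_1_rpmem0.log excerpt above.
sample = """\
node_1_rpmem0.log <librpmem>: <1> [out.c:246 out_init] librpmem version 1.1
node_1_rpmem0.log <librpmem>: <3> [rpmem.c:232 rpmem_common_init] out-of-band connection established
node_1_rpmem0.log <librpmem>: <1> [rpmem_fip_common.c:189 rpmem_fip_read_eq] error reading from event queue: Connection refused
node_1_rpmem0.log <librpmem>: <1> [rpmem.c:308 rpmem_common_fip_init] establishing in-band connection failed: Connection refused
"""

for line in error_lines(sample):
    print(line)
```

Applied to the full logs, this narrows each failing test down to the `rpmem_fip_read_eq` / `rpmem_common_fip_init` pair, consistent with the in-band connection being refused on the sockets provider.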
Found on 1.2+wtp1-372-g2b34dd1
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 3: /lib64/libc.so.6(gsignal+0x37) [0x7f360ca291d7]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 4: /lib64/libc.so.6(abort+0x148) [0x7f360ca2a8c8]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 5: ./rpmem_fip() [0x41fbb7]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 6: ./rpmem_fip() [0x403fac]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 7: ./rpmem_fip() [0x4034c7]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 8: ./rpmem_fip() [0x405625]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 9: /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f360ca15b35]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 10: ./rpmem_fip() [0x4032a9]
rpmem_fip/TEST1 node_1_err1.log {ut_backtrace.c:206 ut_sighandler} rpmem_fip/TEST1:
rpmem_fip/TEST1 node_1_err1.log
node_1_out1.log below.
yes: standard output: Broken pipe
yes: write error
rpmem_fip/TEST1 node_1_out1.log rpmem_fip/TEST1: START: rpmem_obc
rpmem_fip/TEST1 node_1_out1.log ./rpmem_fip client_connect 192.168.0.182 sockets GPSPM
node_1_rpmem1.log below.
yes: standard output: Broken pipe
yes: write error
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:244 out_init] pid 79349: program: /tmp/remote/dir1/test_rpmem_fip/rpmem_fip
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:246 out_init] rpmem_fip version 0.0
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:247 out_init] src version SRCVERSION:1.2+wtp1-372-g2b34dd1
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:255 out_init] compiled with support for Valgrind pmemcheck
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:260 out_init] compiled with support for Valgrind helgrind
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:265 out_init] compiled with support for Valgrind memcheck
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [out.c:270 out_init] compiled with support for Valgrind drd
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <3> [mmap.c:92 util_mmap_init]
rpmem_fip/TEST1 node_1_rpmem1.log <rpmem_fip>: <1> [rpmem_fip_common.c:189 rpmem_fip_read_eq] error reading from event queue: Connection refused
node_1_rpmemd1.log below.
yes: standard output: Broken pipe
yes: write error
node_1_trace1.log below.
yes: standard output: Broken pipe
yes: write error
rpmem_fip/TEST1 node_1_trace1.log {rpmem_fip_test.c:752 main} rpmem_fip/TEST1: START: rpmem_obc
rpmem_fip/TEST1 node_1_trace1.log ./rpmem_fip client_connect 192.168.0.182 sockets GPSPM
rpmem_fip/TEST1 node_1_trace1.log {rpmem_fip_test.c:331 client_connect} rpmem_fip/TEST1: Error: assertion failure: ret (0xfffffffffffffefd) == 0 (0x0)
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:203 ut_sighandler} rpmem_fip/TEST1:
rpmem_fip/TEST1 node_1_trace1.log
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:204 ut_sighandler} rpmem_fip/TEST1: Signal 6, backtrace:
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 0: ./rpmem_fip() [0x42123e]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 1: ./rpmem_fip() [0x421333]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 2: /lib64/libc.so.6(+0x35250) [0x7f360ca29250]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 3: /lib64/libc.so.6(gsignal+0x37) [0x7f360ca291d7]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 4: /lib64/libc.so.6(abort+0x148) [0x7f360ca2a8c8]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 5: ./rpmem_fip() [0x41fbb7]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 6: ./rpmem_fip() [0x403fac]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 7: ./rpmem_fip() [0x4034c7]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 8: ./rpmem_fip() [0x405625]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 9: /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f360ca15b35]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:154 ut_dump_backtrace} rpmem_fip/TEST1: 10: ./rpmem_fip() [0x4032a9]
rpmem_fip/TEST1 node_1_trace1.log {ut_backtrace.c:206 ut_sighandler} rpmem_fip/TEST1:
rpmem_fip/TEST1 node_1_trace1.log
rpmem_fip/TEST1: CLEAN (cleaning processes on remote nodes)
RUNTESTS: stopping: rpmem_fip/TEST1 failed, TEST=all FS=none BUILD=debug RPMEM_PROVIDER=sockets RPMEM_PM=GPSPM
make[1]: *** [TEST1] Error 1
```
Steps to reproduce:
```
1. cd nvml && make test
2. cd nvml/src/test
3. make sync-remotes
4. make pcheck-remote -k TEST_TYPE=all
```
Found on 1.2+wtp1-372-g2b34dd1 | priority | unit tests most of remote tests on sockets fail following unit tests fails on sockets provider obj rpmem basic integration failed fs pmem rpmem provider sockets rpmem pm gpspm obj rpmem basic integration failed fs pmem rpmem provider sockets rpmem pm gpspm obj rpmem basic integration failed fs pmem rpmem provider sockets rpmem pm gpspm obj rpmem basic integration failed fs pmem rpmem provider sockets rpmem pm gpspm obj rpmem basic integration failed fs pmem rpmem provider sockets rpmem pm gpspm obj rpmem constructor failed fs pmem rpmem provider sockets rpmem pm gpspm obj rpmem heap state failed fs pmem rpmem provider sockets rpmem pm gpspm obj rpmem heap state failed fs pmem rpmem provider sockets rpmem pm gpspm obj rpmem heap state failed fs pmem rpmem provider sockets rpmem pm gpspm obj rpmem heap interrupt failed fs pmem rpmem provider sockets rpmem pm gpspm rpmem fip failed fs none rpmem provider sockets rpmem pm gpspm rpmem fip failed fs none rpmem provider sockets rpmem pm gpspm rpmem fip failed fs none rpmem provider sockets rpmem pm gpspm rpmem fip failed fs none rpmem provider sockets rpmem pm gpspm rpmem basic failed fs any rpmem provider sockets rpmem pm gpspm rpmem basic failed fs any rpmem provider sockets rpmem pm gpspm rpmem basic failed fs any rpmem provider sockets rpmem pm gpspm rpmem basic failed fs any rpmem provider sockets rpmem pm gpspm rpmem basic failed fs any rpmem provider sockets rpmem pm gpspm rpmem basic failed fs any rpmem provider sockets rpmem pm gpspm rpmem basic failed fs any rpmem provider sockets rpmem pm gpspm rpmem basic failed fs any rpmem provider sockets rpmem pm gpspm rpmem basic failed fs any rpmem provider sockets rpmem pm gpspm pmempool sync remote failed fs any rpmem provider sockets rpmem pm gpspm pmempool sync remote failed fs any rpmem provider sockets rpmem pm gpspm pmempool sync remote failed fs any rpmem provider sockets rpmem pm gpspm pmempool sync remote failed fs 
any rpmem provider sockets rpmem pm gpspm pmempool sync remote failed fs any rpmem provider sockets rpmem pm gpspm exemplary logs obj rpmem basic integration setup all pmem debug sockets gpspm obj rpmem basic integration crashed signal last lines of node log below whole file has lines yes standard output broken pipe yes write error obj rpmem basic integration node log cq data size obj rpmem basic integration node log cq cnt obj rpmem basic integration node log ep cnt obj rpmem basic integration node log tx ctx cnt obj rpmem basic integration node log rx ctx cnt obj rpmem basic integration node log max ep tx ctx obj rpmem basic integration node log max ep rx ctx obj rpmem basic integration node log max ep stx ctx obj rpmem basic integration node log max ep srx ctx obj rpmem basic integration node log fi fabric attr obj rpmem basic integration node log name obj rpmem basic integration node log prov name sockets obj rpmem basic integration node log prov version obj rpmem basic integration node log obj rpmem basic integration node log create request response status obj rpmem basic integration node log port obj rpmem basic integration node log rkey obj rpmem basic integration node log raddr obj rpmem basic integration node log nlanes obj rpmem basic integration node log persist method general purpose server persistency method obj rpmem basic integration node log waiting for in band connection obj rpmem basic integration node log close request obj rpmem basic integration node log error reading from event queue cannot read error from event queue unknown error obj rpmem basic integration node log in band thread failed obj rpmem basic integration node log in band thread failed success obj rpmem basic integration node log closing pool obj rpmem basic integration node log pool closed obj rpmem basic integration node log removing testset remote obj rpmem basic integration node log removed testset remote obj rpmem basic integration node log close request response status node 
log below yes standard output broken pipe yes write error obj rpmem basic integration node log obj basic integration c main obj rpmem basic integration error pmemobj create testset local remote i o error obj rpmem basic integration node log ut backtrace c ut sighandler obj rpmem basic integration obj rpmem basic integration node log obj rpmem basic integration node log ut backtrace c ut sighandler obj rpmem basic integration signal backtrace obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration obj basic integration obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration obj basic integration obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration libc so obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration libc so gsignal obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration libc so abort obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration obj basic integration obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration obj basic integration obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration libc so libc start main obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration obj basic integration obj rpmem basic integration node log ut backtrace c ut sighandler obj rpmem basic integration obj rpmem basic integration node log node log below yes standard output broken pipe yes write error obj rpmem basic integration node log obj rpmem basic integration start obj basic integration obj rpmem basic integration node log obj basic integration testset local last lines of node log below whole file has lines yes standard output broken pipe obj rpmem basic integration node log path tmp remote test 
obj rpmem basic integration testfile local size minsize obj rpmem basic integration node log set repidx flags sig pmemobj major compat incompat ro comapt repl uuid nil next repl uuid nil arch flags nil obj rpmem basic integration node log set repidx flags sig pmemobj major compat incompat ro comapt repl uuid nil next repl uuid nil arch flags nil obj rpmem basic integration node log len req align obj rpmem basic integration node log part addr size offset flags obj rpmem basic integration node log part flags obj rpmem basic integration node log set repidx partidx sig pmemobj major compat incompat ro comapt repl uuid nil next repl uuid nil arch flags nil obj rpmem basic integration node log replica addr obj rpmem basic integration node log set repidx flags sig pmemobj major compat incompat ro comapt repl uuid nil next repl uuid nil arch flags nil obj rpmem basic integration node log set repidx flags sig pmemobj major compat incompat ro comapt prev repl uuid nil next repl uuid nil obj rpmem basic integration node log set repidx partidx sig pmemobj major compat incompat ro comapt repl uuid nil next repl uuid nil arch flags nil obj rpmem basic integration node log replica addr obj rpmem basic integration node log set minsize nlanes create obj rpmem basic integration node log rep repidx minsize create pool addr pool size nlanes obj rpmem basic integration node log creating remote replica failed obj rpmem basic integration node log set repidx obj rpmem basic integration node log part obj rpmem basic integration node log set repidx obj rpmem basic integration node log set del obj rpmem basic integration node log set repidx obj rpmem basic integration node log part obj rpmem basic integration node log set repidx obj rpmem basic integration node log obj rpmem basic integration node log handle obj rpmem basic integration node log set write error obj rpmem basic integration node log cannot create pool or pool set obj rpmem basic integration node log obj rpmem basic integration 
node log obj rpmem basic integration node log obj rpmem basic integration node log node log below yes standard output broken pipe yes write error obj rpmem basic integration node log pid program tmp remote test obj rpmem basic integration obj basic integration obj rpmem basic integration node log librpmem version obj rpmem basic integration node log src version srcversion obj rpmem basic integration node log compiled with support for valgrind pmemcheck obj rpmem basic integration node log compiled with support for valgrind helgrind obj rpmem basic integration node log compiled with support for valgrind memcheck obj rpmem basic integration node log compiled with support for valgrind drd obj rpmem basic integration node log obj rpmem basic integration node log create request obj rpmem basic integration node log target obj rpmem basic integration node log pool set testset remote obj rpmem basic integration node log nlanes obj rpmem basic integration node log provider sockets obj rpmem basic integration node log forcing using obj rpmem basic integration node log out of band connection established obj rpmem basic integration node log create request message sent obj rpmem basic integration node log create request response received obj rpmem basic integration node log create request response obj rpmem basic integration node log nlanes obj rpmem basic integration node log port obj rpmem basic integration node log persist method general purpose server persistency method obj rpmem basic integration node log remote addr obj rpmem basic integration node log error reading from event queue connection refused obj rpmem basic integration node log establishing in band connection failed connection refused obj rpmem basic integration node log close request message sent obj rpmem basic integration node log close request response received obj rpmem basic integration node log fatal error obj rpmem basic integration node log node log below yes standard output broken pipe yes write error 
obj rpmem basic integration node log obj basic integration c main obj rpmem basic integration start obj basic integration obj rpmem basic integration node log obj basic integration testset local obj rpmem basic integration node log obj basic integration c main obj rpmem basic integration error pmemobj create testset local remote i o error obj rpmem basic integration node log ut backtrace c ut sighandler obj rpmem basic integration obj rpmem basic integration node log obj rpmem basic integration node log ut backtrace c ut sighandler obj rpmem basic integration signal backtrace obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration obj basic integration obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration obj basic integration obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration libc so obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration libc so gsignal obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration libc so abort obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration obj basic integration obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration obj basic integration obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration libc so libc start main obj rpmem basic integration node log ut backtrace c ut dump backtrace obj rpmem basic integration obj basic integration obj rpmem basic integration node log ut backtrace c ut sighandler obj rpmem basic integration obj rpmem basic integration node log obj rpmem basic integration clean cleaning processes on remote nodes runtests stopping obj rpmem basic integration failed test all fs pmem build debug rpmem provider sockets rpmem pm gpspm make error rpmem fip setup all none debug 
sockets gpspm rpmem fip crashed signal node log below yes standard output broken pipe yes write error node log below yes standard output broken pipe yes write error rpmem fip node log rpmem fip test c client connect rpmem fip error assertion failure ret rpmem fip node log ut backtrace c ut sighandler rpmem fip rpmem fip node log rpmem fip node log ut backtrace c ut sighandler rpmem fip signal backtrace rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut dump backtrace rpmem fip libc so rpmem fip node log ut backtrace c ut dump backtrace rpmem fip libc so gsignal rpmem fip node log ut backtrace c ut dump backtrace rpmem fip libc so abort rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut dump backtrace rpmem fip libc so libc start main rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut sighandler rpmem fip rpmem fip node log node log below yes standard output broken pipe yes write error rpmem fip node log rpmem fip start rpmem obc rpmem fip node log rpmem fip client connect sockets gpspm node log below yes standard output broken pipe yes write error rpmem fip node log pid program tmp remote test rpmem fip rpmem fip rpmem fip node log rpmem fip version rpmem fip node log src version srcversion rpmem fip node log compiled with support for valgrind pmemcheck rpmem fip node log compiled with support for valgrind helgrind rpmem fip node log compiled with support for valgrind memcheck rpmem fip node log compiled with support for valgrind drd rpmem fip node log rpmem fip node log error reading from event queue connection 
refused node log below yes standard output broken pipe yes write error node log below yes standard output broken pipe yes write error rpmem fip node log rpmem fip test c main rpmem fip start rpmem obc rpmem fip node log rpmem fip client connect sockets gpspm rpmem fip node log rpmem fip test c client connect rpmem fip error assertion failure ret rpmem fip node log ut backtrace c ut sighandler rpmem fip rpmem fip node log rpmem fip node log ut backtrace c ut sighandler rpmem fip signal backtrace rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut dump backtrace rpmem fip libc so rpmem fip node log ut backtrace c ut dump backtrace rpmem fip libc so gsignal rpmem fip node log ut backtrace c ut dump backtrace rpmem fip libc so abort rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut dump backtrace rpmem fip libc so libc start main rpmem fip node log ut backtrace c ut dump backtrace rpmem fip rpmem fip rpmem fip node log ut backtrace c ut sighandler rpmem fip rpmem fip node log rpmem fip clean cleaning processes on remote nodes runtests stopping rpmem fip failed test all fs none build debug rpmem provider sockets rpmem pm gpspm make error steps to reproduce cd nvml make test cd nvml src test make sync remotes make pcheck remote k test type all found on | 1 |
459,786 | 13,199,165,353 | IssuesEvent | 2020-08-14 05:01:17 | gardners/surveysystem | https://api.github.com/repos/gardners/surveysystem | closed | backend: revise outdated scripts and instructions | Priority: LOW backend best practices | A lot has changed, and the following items are outdated or unusable:
```
SETUP.md
deploy
lighttpd.conf
lighttpd.conf.template
restart-server
runtests
testrun
``` | 1.0 | backend: revise outdated scripts and instructions - A lot has changed, and the following items are outdated or unusable:
```
SETUP.md
deploy
lighttpd.conf
lighttpd.conf.template
restart-server
runtests
testrun
``` | priority | backend revise outdated scripts and instructions a lot has changed and the following items are outdated or unusable setup md deploy lighttpd conf lighttpd conf template restart server runtests testrun | 1
297,565 | 9,172,193,831 | IssuesEvent | 2019-03-04 05:59:35 | OpenPrinting/openprinting.github.io | https://api.github.com/repos/OpenPrinting/openprinting.github.io | closed | Add content to the 'Downloads' page | content migration difficulty/low help wanted priority/medium | Add content to the `Downloads` page located here: https://openprinting.github.io/downloads/
Depends on: #36 | 1.0 | Add content to the 'Downloads' page - Add content to the `Downloads` page located here: https://openprinting.github.io/downloads/
Depends on: #36 | priority | add content to the downloads page add content to the downloads page located here depends on | 1 |
770,124 | 27,029,592,809 | IssuesEvent | 2023-02-12 02:16:36 | JuLY-LION/sitt | https://api.github.com/repos/JuLY-LION/sitt | closed | Mobs can spawn during the first few seconds of preparation | low priority | The sky is dark enough to allow mobs to spawn in the first 1-10 seconds of starting. | 1.0 | Mobs can spawn during the first few seconds of preparation - The sky is dark enough to allow mobs to spawn in the first 1-10 seconds of starting. | priority | mobs can spawn during the first few seconds of preparation the sky is dark enough to allow mobs to spawn in the first seconds of starting | 1
393,779 | 11,624,582,047 | IssuesEvent | 2020-02-27 11:01:27 | strapi/strapi | https://api.github.com/repos/strapi/strapi | closed | Save displays as '...' and stays that way indefinitely | priority: low source: plugin:content-manager status: confirmed type: bug | **Describe the bug**
This is very hard to reproduce consistently, but the most likely scenario is to go to a model that has relationships. Go to the relationship and attempt to edit it. Eventually, the `SAVE` button never shows; only the `...` loading indicator is displayed.
**Steps to reproduce the behavior**
1. Go to a model with relationships.
2. Click on the relationship to edit.
3. See error
**Expected behavior**
The `Save` button should show up and, honestly, content probably should not be editable until that point. Otherwise, users will enter data and suddenly lose every change because the `Save` button is not enabled.
**Screenshots**

**System**
- Node.js version: 10.16.0
- NPM version: 6.9.0
- Strapi version: v3.0.0-beta.17.5
- Database: MongoDB
- Operating system: Linux AMI
**Other**
The only thing I can see in the console/network tab when this error occurs is this:
`Refused to load the image 'data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==' because it violates the following Content Security Policy directive: "img-src 'self' http:".` | 1.0 | Save displays as '...' and stays that way indefinitely - **Describe the bug**
This is very hard to reproduce consistently, but the most likely scenario is to go to a model that has relationships. Go to the relationship and attempt to edit it. Eventually, the `SAVE` button never shows; only the `...` loading indicator is displayed.
**Steps to reproduce the behavior**
1. Go to a model with relationships.
2. Click on the relationship to edit.
3. See error
**Expected behavior**
The `Save` button should show up and, honestly, content probably should not be editable until that point. Otherwise, users will enter data and suddenly lose every change because the `Save` button is not enabled.
**Screenshots**

**System**
- Node.js version: 10.16.0
- NPM version: 6.9.0
- Strapi version: v3.0.0-beta.17.5
- Database: MongoDB
- Operating system: Linux AMI
**Other**
The only thing I can see in the console/network tab when this error occurs is this:
`Refused to load the image 'data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==' because it violates the following Content Security Policy directive: "img-src 'self' http:".` | priority | save displays as and stays that way indefinitely describe the bug this is very inconsistent to reproduce but the most likely scenario is to go to a model that has relationships go to the relationship and attempt to edit it eventually the save button will never show and is the loading indicator steps to reproduce the behavior go to a model with relationships click on the relationship to edit see error expected behavior the save button show show up and honestly probably should not make content editable until that point otherwise users will enter data and all of a sudden lose every change because the save button is not enabled screenshots system node js version npm version strapi version beta database mongodb operating system linux ami other only thing i can see from console network when getting this error is this refused to load the image data image gif because it violates the following content security policy directive img src self http | 1 |
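The console error quoted in the Strapi report above (`img-src 'self' http:` blocking a `data:` image) points at a Content Security Policy whose `img-src` directive lacks the `data:` scheme source. As a generic illustration of the string-level change that error suggests (this is not Strapi's actual middleware or configuration, and the helper name is made up), a CSP header value can be extended like this:

```python
def add_csp_source(policy: str, directive: str, source: str) -> str:
    """Append `source` to `directive` in a CSP header string, if missing.

    Directives the policy does not contain are left untouched; adding a
    brand-new directive is out of scope for this sketch.
    """
    out = []
    for raw in policy.split(";"):
        tokens = raw.strip().split()
        if tokens and tokens[0] == directive and source not in tokens[1:]:
            tokens.append(source)
        if tokens:
            out.append(" ".join(tokens))
    return "; ".join(out)

print(add_csp_source("default-src 'self'; img-src 'self' http:", "img-src", "data:"))
# -> default-src 'self'; img-src 'self' http: data:
```

Whether to actually whitelist `data:` URIs is a separate security trade-off (data URIs can smuggle arbitrary content); the snippet only demonstrates the policy edit the error message points toward.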
270,904 | 8,474,572,071 | IssuesEvent | 2018-10-24 16:30:11 | salesagility/SuiteCRM | https://api.github.com/repos/salesagility/SuiteCRM | closed | undefined variable in _AddJobsHere.php | Fix Proposed Low Priority Resolved: Next Release bug | https://github.com/salesagility/SuiteCRM/blob/15c094c4bca49a4ea75daa37cbc91336552c4dfa/modules/Schedulers/_AddJobsHere.php#L580
$isGroupFolderExists = false;
By moving the line above up a few lines, to just before the `if` it is part of (i.e. just before `if (is_array($newMsgs)) {`), the variable becomes available to the next "if" block
if ($isGroupFolderExists) {
where otherwise it might not be unless the $newMsgs array contains something. | 1.0 | undefined variable in _AddJobsHere.php - https://github.com/salesagility/SuiteCRM/blob/15c094c4bca49a4ea75daa37cbc91336552c4dfa/modules/Schedulers/_AddJobsHere.php#L580
$isGroupFolderExists = false;
By moving the line above up a few lines, to just before the `if` it is part of (i.e. just before `if (is_array($newMsgs)) {`), the variable becomes available to the next "if" block
if ($isGroupFolderExists) {
where otherwise it might not be unless the $newMsgs array contains something. | priority | undefined variable in addjobshere php isgroupfolderexists false by moving the line above a few lines up just before the if it s part of i e just before if is array newmsgs the variable becomes available to the next if block if isgroupfolderexists where otherwise it might not be unless the newmsgs array contains something | 1 |
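The SuiteCRM fix above boils down to one pattern: initialize a flag unconditionally, before the conditional block that may set it, so a later read can never hit an undefined variable. The real code is PHP; the sketch below shows the same pattern in Python with hypothetical message data, purely as an illustration:

```python
def group_folder_exists(new_msgs):
    # The proposed fix: the flag is initialized BEFORE the conditional
    # block (the analogue of moving `$isGroupFolderExists = false;` above
    # `if (is_array($newMsgs)) {` in _AddJobsHere.php).
    is_group_folder_exists = False
    if isinstance(new_msgs, list):  # Python analogue of is_array($newMsgs)
        for msg in new_msgs:
            if msg.get("folder") == "group":
                is_group_folder_exists = True
    # If the initialization lived inside the `if`, this return would raise
    # NameError whenever new_msgs is not a list -- the counterpart of PHP's
    # "undefined variable" notice.
    return is_group_folder_exists
```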
545,182 | 15,937,882,605 | IssuesEvent | 2021-04-14 12:59:54 | ita-social-projects/horondi_admin | https://api.github.com/repos/ita-social-projects/horondi_admin | closed | (Sp:3)Fix minor issues on add/edit news page | Admin UI bug priority: low | 1. The Save button should be active only when we have made changes in the inputs; when we reverse the changes (back to how they were before), the Save button should be disabled again.
2. After you tried to save news and got the error, the Save button should be active again only after resolving the error. | 1.0 | (Sp:3)Fix minor issues on add/edit news page - 1. The Save button should be active only when we have made changes in the inputs; when we reverse the changes (back to how they were before), the Save button should be disabled again.
2. After you tried to save news and got the error, the Save button should be active again only after resolving the error. | priority | sp fix minor issues on add edit news page the save button should be active only when we have made changes in the inputs when we reverse the changes back to how they were before the save button should be disabled again after you tried to save news and got the error the save button should be active again only after resolving the error | 1
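The two Save-button rules in the report above (active only when the inputs differ from the saved state; after a failed save, active again only once the error is resolved) can be modeled as a tiny state machine. This is an illustrative sketch; the class and method names are invented and do not come from horondi_admin's actual code:

```python
class SaveButtonState:
    def __init__(self, saved_value):
        self.saved_value = saved_value    # last successfully saved form state
        self.current_value = saved_value  # what the inputs currently hold
        self.error = False                # unresolved save error?

    def edit(self, value):
        self.current_value = value

    def fail_save(self):
        self.error = True

    def resolve_error(self):
        self.error = False

    def save_enabled(self):
        # Rule 1: active only when the inputs differ from the saved state.
        # Rule 2: after a failed save, inactive until the error is resolved.
        return self.current_value != self.saved_value and not self.error
```

A real form would compare state field by field and clear the error automatically when the offending input changes; the sketch captures only the enable/disable rules.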
537,388 | 15,728,284,650 | IssuesEvent | 2021-03-29 13:39:23 | AY2021S2-CS2103T-W13-3/tp | https://api.github.com/repos/AY2021S2-CS2103T-W13-3/tp | closed | Improve delete to allow deleting multiple contacts at once | priority.Low type.Story | As a user, I can remove multiple selected contacts, so that I can quickly clear contacts I no longer need | 1.0 | Improve delete to allow deleting multiple contacts at once - As a user, I can remove multiple selected contacts, so that I can quickly clear contacts I no longer need | priority | improve delete to allow deleting multiple contacts at once as a user i can remove multiple selected contacts so that i can quickly clear contacts i no longer need | 1 |
591,729 | 17,859,718,854 | IssuesEvent | 2021-09-05 18:35:52 | kristinbranson/APT | https://api.github.com/repos/kristinbranson/APT | closed | "Clear" labels button does not update the labeledposMarked field | lowpriority | If you label a frame, accept the labels, and later clear them using the "Clear" button, then the labeledposMarked field has the value "1". However, labeledpos has "NaN"s when you clear the labels. Still, I think labeledposMarked should have the value "0" when the user hits Clear. | 1.0 | "Clear" labels button does not update the labeledposMarked field - If you label a frame, accept the labels, and later clear them using the "Clear" button, then the labeledposMarked field has the value "1". However, labeledpos has "NaN"s when you clear the labels. Still, I think labeledposMarked should have the value "0" when the user hits Clear. | priority | clear labels button does not update the labeledposmarked field if you label a frame accept the labels and later clear them using the clear button then the labeledposmarked field has the value however labeledpos has nan s when you clear the labels still i think labeledposmarked should have the value when the user hits clear | 1
145,236 | 5,561,142,414 | IssuesEvent | 2017-03-24 21:32:03 | wsp93/lighten | https://api.github.com/repos/wsp93/lighten | opened | Refactor | Low Priority | The following classes need to be refactored:
- [ ] CategoryNodeTestAdd
- [ ] CategoryNodeTestRemove
- [ ] CategoryNodeTestRemoveSubtree
- [ ] CategoryNodeTestDeadline | 1.0 | Refactor - The following classes need to be refactored:
- [ ] CategoryNodeTestAdd
- [ ] CategoryNodeTestRemove
- [ ] CategoryNodeTestRemoveSubtree
- [ ] CategoryNodeTestDeadline | priority | refactor the following classes need to be refactored categorynodetestadd categorynodetestremove categorynodetestremovesubtree categorynodetestdeadline | 1 |
587,159 | 17,605,890,862 | IssuesEvent | 2021-08-17 17:00:54 | algorand/py-algorand-sdk | https://api.github.com/repos/algorand/py-algorand-sdk | closed | Travis CL fails to build (says libtool missing) | external contribution High Priority Team Hyper Flow new-bug | ### Subject of the issue
When making a pull request, Travis CL runs automatically. However, it fails to build the docker instance. I looked at the outputs of other people pushing, and they get the same error. So the issue is something with either the Travis configuration or the docker test configuration. Here is a sample error log output:
```
cp -R crypto/libsodium-fork crypto/copies/linux/amd64/libsodium-fork
[91mmake[1]: *** [Makefile:133: crypto/libs/linux/amd64/lib/libsodium.a] Error 1
make: *** [Makefile:22: go-algorand] Error 2
[0mcd crypto/copies/linux/amd64/libsodium-fork && \
./autogen.sh --prefix /opt/indexer/third_party/go-algorand/crypto/libs/linux/amd64 && \
./configure --disable-shared --prefix="/opt/indexer/third_party/go-algorand/crypto/libs/linux/amd64" && \
make && \
make install
libtool is required, but wasn't found on this system
make[1]: Leaving directory '/opt/indexer/third_party/go-algorand'
```
Here is a pastebin with the entire log file:
https://hastebin.com/ogacamerih.properties | 1.0 | Travis CL fails to build (says libtool missing) - ### Subject of the issue
When making a pull request, Travis CL runs automatically. However, it fails to build the docker instance. I looked at the outputs of other people pushing, and they get the same error. So the issue is something with either the Travis configuration or the docker test configuration. Here is a sample error log output:
```
cp -R crypto/libsodium-fork crypto/copies/linux/amd64/libsodium-fork
[91mmake[1]: *** [Makefile:133: crypto/libs/linux/amd64/lib/libsodium.a] Error 1
make: *** [Makefile:22: go-algorand] Error 2
[0mcd crypto/copies/linux/amd64/libsodium-fork && \
./autogen.sh --prefix /opt/indexer/third_party/go-algorand/crypto/libs/linux/amd64 && \
./configure --disable-shared --prefix="/opt/indexer/third_party/go-algorand/crypto/libs/linux/amd64" && \
make && \
make install
libtool is required, but wasn't found on this system
make[1]: Leaving directory '/opt/indexer/third_party/go-algorand'
```
Here is a pastebin with the entire log file:
https://hastebin.com/ogacamerih.properties | priority | travis cl fails to build says libtool missing subject of the issue when making a pull request travis cl runs automatically however it fails to build the docker instance i looked at the outputs of other people pushing and they get the same error so the issue is something with either the travis configuration or the docker test configuration here is a sample error log output cp r crypto libsodium fork crypto copies linux libsodium fork error make error crypto copies linux libsodium fork autogen sh prefix opt indexer third party go algorand crypto libs linux configure disable shared prefix opt indexer third party go algorand crypto libs linux make make install libtool is required but wasn t found on this system make leaving directory opt indexer third party go algorand here is a pastebin with the entire log file | 1 |
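The libtool failure in the record above only surfaces minutes into the Docker build, when `autogen.sh` finally runs. A hedged sketch of a pre-flight dependency check that would fail fast instead (the tool list is an assumption based on what autotools builds typically need, not taken from the repository's CI config):

```python
import shutil

# Assumed tool list; autogen.sh/configure-based builds usually need these.
REQUIRED_TOOLS = ["libtool", "autoconf", "automake", "make"]

def missing_tools(tools):
    """Return the subset of `tools` not found on PATH (via shutil.which)."""
    return [t for t in tools if shutil.which(t) is None]
```

Running such a check at the top of the CI script turns the late "libtool is required, but wasn't found on this system" abort into an immediate, named failure.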
587,629 | 17,627,309,281 | IssuesEvent | 2021-08-19 00:28:22 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | enc424j600 driver unusable/broken on stm32l552 | bug priority: low Stale | **Describe the bug**
Testing with the dumb_http_server_mt sample with an STM32 nucleo board and enc424j600 ethernet results in an unusable application which outputs lots of error
**To Reproduce**
Steps to reproduce the behavior:
1. Configure ethernet for nucleo_l552ze_q with following:
```
&spi1 {
pinctrl-0 = <&spi1_nss_pa4 &spi1_sck_pa5
&spi1_miso_pa6 &spi1_mosi_pa7>;
status = "okay";
cs-gpios = <&gpiod 14 GPIO_ACTIVE_LOW>;
enc424j600@0 {
compatible = "microchip,enc424j600";
reg = <0>;
spi-max-frequency = <4000000>;
label = "ETHERNET";
int-gpios = <&gpiod 15 GPIO_ACTIVE_LOW>;
};
};
```
2. Build dumb_http_server_mt
3. Flash to board
4. Attempt to ping or connect to board and a multitude of errors is emitted
**Expected behavior**
Connectivity to work
**Impact**
Network is useless
**Logs and console output**
```
*** Booting Zephyr OS build v2.6.0-rc1-300-g6ce0f2ee6606 ***
[00:00:00.003,000] <dbg> ethdrv.enc424j600_init: EIE: 0x0850
[00:00:00.003,000] <dbg> ethdrv.enc424j600_init_filters: ERXFCON: 0x005b
[00:00:00.003,000] <dbg> ethdrv.enc424j600_init_phy: PHANA: 0x05e1
[00:00:00.004,000] <dbg> ethdrv.enc424j600_init_phy: PHCON1: 0x1200
[00:00:00.004,000] <dbg> ethdrv.enc424j600_init: ECON1: 0x0001
[00:00:00.004,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xda00
[00:00:00.004,000] <inf> ethdrv: Link down
[00:00:00.004,000] <inf> ethdrv: ENC424J600 Initialized
[00:00:01.561,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf00
[00:00:01.561,000] <inf> ethdrv: Link up
[00:00:01.562,000] <dbg> ethdrv.enc424j600_setup_mac: PHANLPA: 0x85e1
[00:00:01.562,000] <inf> ethdrv: 100Mbps
[00:00:01.562,000] <inf> ethdrv: full duplex
[00:00:01.562,000] <dbg> ethdrv.enc424j600_setup_mac: MACON2: 0x40b3
[00:00:01.562,000] <dbg> ethdrv.enc424j600_setup_mac: MAMXFL (maximum frame length): 1518
[00:00:01.562,000] <inf> ethdrv: Not suspended
[00:00:01.562,000] <inf> net_dumb_http_srv_mt_sample: Network connected
[00:00:01.563,000] <dbg> net_dumb_http_srv_mt_sample.process_tcp4: Waiting for IPv4 HTTP connections on port 8080, sock 0
uart:~$ net iface
Interface 0x200016dc (Ethernet) [1]
===================================
Link addr : 68:27:19:EF:56:10
MTU : 1500
Flags : NO_AUTO_START,IPv4
Ethernet capabilities supported:
10 Mbits
100 Mbits
IPv4 unicast addresses (max 1):
192.168.1.55 manual preferred infinite
IPv4 multicast addresses (max 1):
<none>
IPv4 gateway : 0.0.0.0
IPv4 netmask : 255.255.255.0
[00:00:09.547,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:09.548,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0x3000
[00:00:09.548,000] <dbg> ethdrv.enc424j600_rx: ERXRDPT is 0x3008 now
[00:00:09.548,000] <dbg> ethdrv.enc424j600_rx: npp 0x4800, length 16432, status 0x6003c000
[00:00:09.548,000] <err> ethdrv: Maximum frame length exceeded
[00:00:09.548,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0x5f00
[00:00:10.259,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:10.259,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0x4800
[00:00:10.260,000] <dbg> ethdrv.enc424j600_rx: ERXRDPT is 0x4808 now
[00:00:10.260,000] <dbg> ethdrv.enc424j600_rx: npp 0xd100, length 40739, status 0xe257ca55
[00:00:10.260,000] <err> ethdrv: Maximum frame length exceeded
[00:00:10.260,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0x5f00
[00:00:11.258,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:11.258,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0xd100
[00:00:11.258,000] <dbg> ethdrv.enc424j600_rx: ERXRDPT is 0xd108 now
[00:00:11.258,000] <dbg> ethdrv.enc424j600_rx: npp 0xa900, length 43966, status 0x8b581d15
[00:00:11.258,000] <err> ethdrv: Maximum frame length exceeded
[00:00:11.258,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0x5f00
[00:00:12.273,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:12.273,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0xa900
[00:00:12.273,000] <dbg> ethdrv.enc424j600_rx: ERXRDPT is 0xa908 now
[00:00:12.273,000] <dbg> ethdrv.enc424j600_rx: npp 0x9e00, length 42698, status 0xd774e5f4
[00:00:12.273,000] <err> ethdrv: Maximum frame length exceeded
[00:00:12.273,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0x5f00
[00:00:13.241,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:13.241,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0x9e00
[00:00:13.241,000] <dbg> ethdrv.enc424j600_rx: ERXRDPT is 0x9e08 now
[00:00:13.241,000] <dbg> ethdrv.enc424j600_rx: npp 0x7c00, length 4449, status 0xc0d460d3
[00:00:13.241,000] <err> ethdrv: Maximum frame length exceeded
[00:00:13.241,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0x5f00
[00:00:14.242,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:14.243,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0x7c00
[00:00:14.243,000] <dbg> ethdrv.enc424j600_rx: ERXRDPT is 0x7c08 now
[00:00:14.243,000] <dbg> ethdrv.enc424j600_rx: npp 0x3400, length 17654, status 0x537f51c5
[00:00:14.243,000] <err> ethdrv: Maximum frame length exceeded
[00:00:14.243,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0x5f00
[00:00:19.899,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:19.899,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0x3400
--- 3 messages dropped ---
```
**Environment (please complete the following information):**
- OS: Windows
- Toolchain: GNU Arm Embedded Toolchain 9-2020-q2-update 9.3.1
- Commit SHA: 6ce0f2ee6606915c75e17753e34db71cb053c119
| 1.0 | enc424j600 driver unusable/broken on stm32l552 - **Describe the bug**
Testing with the dumb_http_server_mt sample with an STM32 nucleo board and enc424j600 ethernet results in an unusable application which outputs lots of error
**To Reproduce**
Steps to reproduce the behavior:
1. Configure ethernet for nucleo_l552ze_q with following:
```
&spi1 {
pinctrl-0 = <&spi1_nss_pa4 &spi1_sck_pa5
&spi1_miso_pa6 &spi1_mosi_pa7>;
status = "okay";
cs-gpios = <&gpiod 14 GPIO_ACTIVE_LOW>;
enc424j600@0 {
compatible = "microchip,enc424j600";
reg = <0>;
spi-max-frequency = <4000000>;
label = "ETHERNET";
int-gpios = <&gpiod 15 GPIO_ACTIVE_LOW>;
};
};
```
2. Build dumb_http_server_mt
3. Flash to board
4. Attempt to ping or connect to board and a multitude of errors is emitted
**Expected behavior**
Connectivity to work
**Impact**
Network is useless
**Logs and console output**
```
*** Booting Zephyr OS build v2.6.0-rc1-300-g6ce0f2ee6606 ***
[00:00:00.003,000] <dbg> ethdrv.enc424j600_init: EIE: 0x0850
[00:00:00.003,000] <dbg> ethdrv.enc424j600_init_filters: ERXFCON: 0x005b
[00:00:00.003,000] <dbg> ethdrv.enc424j600_init_phy: PHANA: 0x05e1
[00:00:00.004,000] <dbg> ethdrv.enc424j600_init_phy: PHCON1: 0x1200
[00:00:00.004,000] <dbg> ethdrv.enc424j600_init: ECON1: 0x0001
[00:00:00.004,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xda00
[00:00:00.004,000] <inf> ethdrv: Link down
[00:00:00.004,000] <inf> ethdrv: ENC424J600 Initialized
[00:00:01.561,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf00
[00:00:01.561,000] <inf> ethdrv: Link up
[00:00:01.562,000] <dbg> ethdrv.enc424j600_setup_mac: PHANLPA: 0x85e1
[00:00:01.562,000] <inf> ethdrv: 100Mbps
[00:00:01.562,000] <inf> ethdrv: full duplex
[00:00:01.562,000] <dbg> ethdrv.enc424j600_setup_mac: MACON2: 0x40b3
[00:00:01.562,000] <dbg> ethdrv.enc424j600_setup_mac: MAMXFL (maximum frame length): 1518
[00:00:01.562,000] <inf> ethdrv: Not suspended
[00:00:01.562,000] <inf> net_dumb_http_srv_mt_sample: Network connected
[00:00:01.563,000] <dbg> net_dumb_http_srv_mt_sample.process_tcp4: Waiting for IPv4 HTTP connections on port 8080, sock 0
uart:~$ net iface
Interface 0x200016dc (Ethernet) [1]
===================================
Link addr : 68:27:19:EF:56:10
MTU : 1500
Flags : NO_AUTO_START,IPv4
Ethernet capabilities supported:
10 Mbits
100 Mbits
IPv4 unicast addresses (max 1):
192.168.1.55 manual preferred infinite
IPv4 multicast addresses (max 1):
<none>
IPv4 gateway : 0.0.0.0
IPv4 netmask : 255.255.255.0
[00:00:09.547,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:09.548,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0x3000
[00:00:09.548,000] <dbg> ethdrv.enc424j600_rx: ERXRDPT is 0x3008 now
[00:00:09.548,000] <dbg> ethdrv.enc424j600_rx: npp 0x4800, length 16432, status 0x6003c000
[00:00:09.548,000] <err> ethdrv: Maximum frame length exceeded
[00:00:09.548,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0x5f00
[00:00:10.259,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:10.259,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0x4800
[00:00:10.260,000] <dbg> ethdrv.enc424j600_rx: ERXRDPT is 0x4808 now
[00:00:10.260,000] <dbg> ethdrv.enc424j600_rx: npp 0xd100, length 40739, status 0xe257ca55
[00:00:10.260,000] <err> ethdrv: Maximum frame length exceeded
[00:00:10.260,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0x5f00
[00:00:11.258,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:11.258,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0xd100
[00:00:11.258,000] <dbg> ethdrv.enc424j600_rx: ERXRDPT is 0xd108 now
[00:00:11.258,000] <dbg> ethdrv.enc424j600_rx: npp 0xa900, length 43966, status 0x8b581d15
[00:00:11.258,000] <err> ethdrv: Maximum frame length exceeded
[00:00:11.258,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0x5f00
[00:00:12.273,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:12.273,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0xa900
[00:00:12.273,000] <dbg> ethdrv.enc424j600_rx: ERXRDPT is 0xa908 now
[00:00:12.273,000] <dbg> ethdrv.enc424j600_rx: npp 0x9e00, length 42698, status 0xd774e5f4
[00:00:12.273,000] <err> ethdrv: Maximum frame length exceeded
[00:00:12.273,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0x5f00
[00:00:13.241,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:13.241,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0x9e00
[00:00:13.241,000] <dbg> ethdrv.enc424j600_rx: ERXRDPT is 0x9e08 now
[00:00:13.241,000] <dbg> ethdrv.enc424j600_rx: npp 0x7c00, length 4449, status 0xc0d460d3
[00:00:13.241,000] <err> ethdrv: Maximum frame length exceeded
[00:00:13.241,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0x5f00
[00:00:14.242,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:14.243,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0x7c00
[00:00:14.243,000] <dbg> ethdrv.enc424j600_rx: ERXRDPT is 0x7c08 now
[00:00:14.243,000] <dbg> ethdrv.enc424j600_rx: npp 0x3400, length 17654, status 0x537f51c5
[00:00:14.243,000] <err> ethdrv: Maximum frame length exceeded
[00:00:14.243,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0x5f00
[00:00:19.899,000] <dbg> ethdrv.enc424j600_rx_thread: ESTAT: 0xdf01
[00:00:19.899,000] <dbg> ethdrv.enc424j600_rx: set ERXRDPT to 0x3400
--- 3 messages dropped ---
```
**Environment (please complete the following information):**
- OS: Windows
- Toolchain: GNU Arm Embedded Toolchain 9-2020-q2-update 9.3.1
- Commit SHA: 6ce0f2ee6606915c75e17753e34db71cb053c119
| priority | driver unusable broken on describe the bug testing with the dumb http server mt sample with an nucleo board and ethernet results in an unusable application which outputs lots of error to reproduce steps to reproduce the behavior configure ethernet for nucleo q with following pinctrl nss sck miso mosi status okay cs gpios compatible microchip reg spi max frequency label ethernet int gpios build dumb http server mt flash to board attempt to ping or connect to board and a multitude of errors is emitted expected behavior connectivity to work impact network is useless logs and console output booting zephyr os build ethdrv init eie ethdrv init filters erxfcon ethdrv init phy phana ethdrv init phy ethdrv init ethdrv rx thread estat ethdrv link down ethdrv initialized ethdrv rx thread estat ethdrv link up ethdrv setup mac phanlpa ethdrv ethdrv full duplex ethdrv setup mac ethdrv setup mac mamxfl maximum frame length ethdrv not suspended net dumb http srv mt sample network connected net dumb http srv mt sample process waiting for http connections on port sock uart net iface interface ethernet link addr ef mtu flags no auto start ethernet capabilities supported mbits mbits unicast addresses max manual preferred infinite multicast addresses max gateway netmask ethdrv rx thread estat ethdrv rx set erxrdpt to ethdrv rx erxrdpt is now ethdrv rx npp length status ethdrv maximum frame length exceeded ethdrv rx thread estat ethdrv rx thread estat ethdrv rx set erxrdpt to ethdrv rx erxrdpt is now ethdrv rx npp length status ethdrv maximum frame length exceeded ethdrv rx thread estat ethdrv rx thread estat ethdrv rx set erxrdpt to ethdrv rx erxrdpt is now ethdrv rx npp length status ethdrv maximum frame length exceeded ethdrv rx thread estat ethdrv rx thread estat ethdrv rx set erxrdpt to ethdrv rx erxrdpt is now ethdrv rx npp length status ethdrv maximum frame length exceeded ethdrv rx thread estat ethdrv rx thread estat ethdrv rx set erxrdpt to ethdrv rx erxrdpt is now ethdrv rx npp length status ethdrv maximum frame length exceeded ethdrv rx thread estat ethdrv rx thread estat ethdrv rx set erxrdpt to messages dropped environment please complete the following information os windows toolchain gnu arm embedded toolchain update commit sha | 1 |
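The log in the record above reports receive lengths such as 16432, 40739, and 43966 against a configured MAMXFL of 1518, which points at corrupted SPI reads rather than genuinely oversized frames. A hedged sketch of the plausibility check involved (constants taken from the log output; this is not the Zephyr driver's actual code):

```python
# Bounds from the record above: MAMXFL is the maximum frame length the
# driver configured; 64 bytes is the smallest valid Ethernet frame
# including the FCS. Anything outside this window is a bogus
# receive-status-vector, most likely a corrupted SPI transfer.
MAMXFL = 1518
MIN_FRAME_LEN = 64

def rx_frame_len_ok(rsv_len):
    """True if a receive-status-vector length is a plausible frame size."""
    return MIN_FRAME_LEN <= rsv_len <= MAMXFL
```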
800,806 | 28,433,125,808 | IssuesEvent | 2023-04-15 02:05:20 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | opened | Game Crashes to Desktop on specific date | priority low :grey_exclamation: bug :bug: | <!--
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
**Your mod version is:**

**What expansions do you have installed?**

**Are you using any submods/mods? If so, which?**
'Go away NOT my Holy Order' and 'Better AI Education' from the steam workshop
**Please explain your issue in as much detail as possible:**
Game repeatedly crashes to desktop on August 19th, 839
**Steps to reproduce the issue:**
Playing the game
**Upload an attachment below: .zip of your save, or screenshots:**
[Save File](https://drive.google.com/file/d/17Q6dA0D846ZttO4AXvNZeiExamBnearn/view?usp=sharing) | 1.0 | Game Crashes to Desktop on specific date - <!--
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
**Your mod version is:**

**What expansions do you have installed?**

**Are you using any submods/mods? If so, which?**
'Go away NOT my Holy Order' and 'Better AI Education' from the steam workshop
**Please explain your issue in as much detail as possible:**
Game repeatedly crashes to desktop on August 19th, 839
**Steps to reproduce the issue:**
Playing the game
**Upload an attachment below: .zip of your save, or screenshots:**
[Save File](https://drive.google.com/file/d/17Q6dA0D846ZttO4AXvNZeiExamBnearn/view?usp=sharing) | priority | game crashes to desktop on specific date do not remove pre existing lines your mod version is what expansions do you have installed are you using any submods mods if so which go away not my holy order and better ai education from the steam workshop please explain your issue in as much detail as possible game repeatedly crashes to desktop on august steps to reproduce the issue playing the game upload an attachment below zip of your save or screenshots | 1 |
127,291 | 5,028,179,225 | IssuesEvent | 2016-12-15 17:26:33 | linux-audit/audit-testsuite | https://api.github.com/repos/linux-audit/audit-testsuite | opened | Q: what to do with stress tests and potentially destructive tests? | priority/low question | We don't want to include stress tests and/or potentially destructive tests in the standard audit-testsuite, but this doesn't mean we don't want to track these tests for occasional, manual testing. This issue has been created to discuss how best to handle these tests. | 1.0 | Q: what to do with stress tests and potentially destructive tests? - We don't want to include stress tests and/or potentially destructive tests in the standard audit-testsuite, but this doesn't mean we don't want to track these tests for occasional, manual testing. This issue has been created to discuss how best to handle these tests. | priority | q what to do with stress tests and potentially destructive tests we don t want to include stress tests and or potentially destructive tests in the standard audit testsuite but this doesn t mean we don t want to track these tests for occasional manual testing this issue has been created to discuss how best to handle these tests | 1 |
755,327 | 26,425,108,339 | IssuesEvent | 2023-01-14 03:49:11 | yogstation13/Yogstation | https://api.github.com/repos/yogstation13/Yogstation | closed | You can't light other people's cigarrettes with things that aren't lighters | Bug Issue - Confirmed Issue - Low priority | ## Reproduction:
Try to light someone else's cigarrette with a welding tool or laser
Cry when you just shoot them or hit them with it instead | 1.0 | You can't light other people's cigarrettes with things that aren't lighters - ## Reproduction:
Try to light someone else's cigarrette with a welding tool or laser
Cry when you just shoot them or hit them with it instead | priority | you can t light other people s cigarrettes with things that aren t lighters reproduction try to light someone else s cigarrette with a welding tool or laser cry when you just shoot them or hit them with it instead | 1 |
738,020 | 25,541,928,603 | IssuesEvent | 2022-11-29 15:53:13 | opendatahub-io/odh-dashboard | https://api.github.com/repos/opendatahub-io/odh-dashboard | opened | [Feature Request]: User needs to log in first before Admin can spawn for them | kind/enhancement feature/notebook-controller priority/low | ### Feature description
When trying to start up a notebook for another use in the KFNBC, you end up having a "at start time" error if the user had not already logged into the cluster.
This prevents it from starting.
We should look to prompt early that the user is not available and disable the flow for that user in the table.
### Describe alternatives you've considered
Alternative would maybe we look into starting it anyways for them. Could be useful for a pre-class setup or something?
I worry about wasting resources on starting a Notebook for a user that has not and maybe never will log into the cluster.
cc @kywalker-rh thoughts?
### Anything else?
_No response_ | 1.0 | [Feature Request]: User needs to log in first before Admin can spawn for them - ### Feature description
When trying to start up a notebook for another use in the KFNBC, you end up having a "at start time" error if the user had not already logged into the cluster.
This prevents it from starting.
We should look to prompt early that the user is not available and disable the flow for that user in the table.
### Describe alternatives you've considered
Alternative would maybe we look into starting it anyways for them. Could be useful for a pre-class setup or something?
I worry about wasting resources on starting a Notebook for a user that has not and maybe never will log into the cluster.
cc @kywalker-rh thoughts?
### Anything else?
_No response_ | priority | user needs to log in first before admin can spawn for them feature description when trying to start up a notebook for another use in the kfnbc you end up having a at start time error if the user had not already logged into the cluster this prevents it from starting we should look to prompt early that the user is not available and disable the flow for that user in the table describe alternatives you ve considered alternative would maybe we look into starting it anyways for them could be useful for a pre class setup or something i worry about wasting resources on starting a notebook for a user that has not and maybe never will log into the cluster cc kywalker rh thoughts anything else no response | 1 |
87,740 | 3,757,458,515 | IssuesEvent | 2016-03-14 00:09:44 | squiggle-lang/squiggle-lang | https://api.github.com/repos/squiggle-lang/squiggle-lang | closed | Say "at least N args" for slurpy functions | enhancement help wanted low priority | Slurpy functions should say "expected AT LEAST (n) args" for arity n functions that also have slurpy args. | 1.0 | Say "at least N args" for slurpy functions - Slurpy functions should say "expected AT LEAST (n) args" for arity n functions that also have slurpy args. | priority | say at least n args for slurpy functions slurpy functions should say expected at least n args for arity n functions that also have slurpy args | 1 |
222,309 | 7,431,497,167 | IssuesEvent | 2018-03-25 15:17:06 | ropensci/rrricanes | https://api.github.com/repos/ropensci/rrricanes | closed | Correct repository tags link in README | Low Priority | Under **Versioning**, URL to "tags in this repository" points to old repo. | 1.0 | Correct repository tags link in README - Under **Versioning**, URL to "tags in this repository" points to old repo. | priority | correct repository tags link in readme under versioning url to tags in this repository points to old repo | 1 |
250,756 | 7,987,272,700 | IssuesEvent | 2018-07-19 07:09:25 | gluster/glusterd2 | https://api.github.com/repos/gluster/glusterd2 | opened | config: Change default logdir and rundir | FW: Logging easyfix priority: low usability | On RPM install, this is the typical config that gets installed:
```toml
localstatedir = "/var/lib/glusterd2"
logdir = "/var/log/glusterd2"
logfile = "glusterd2.log"
loglevel = "INFO"
rundir = "/var/run/glusterd2"
defaultpeerport = "24008"
peeraddress = ":24008"
clientaddress = ":24007"
```
The `logdir` should be `/var/log/glusterfs` and `rundir` should default to `/var/run/gluster` | 1.0 | config: Change default logdir and rundir - On RPM install, this is the typical config that gets installed:
```toml
localstatedir = "/var/lib/glusterd2"
logdir = "/var/log/glusterd2"
logfile = "glusterd2.log"
loglevel = "INFO"
rundir = "/var/run/glusterd2"
defaultpeerport = "24008"
peeraddress = ":24008"
clientaddress = ":24007"
```
The `logdir` should be `/var/log/glusterfs` and `rundir` should default to `/var/run/gluster` | priority | config change default logdir and rundir on rpm install this is the typical config that gets installed toml localstatedir var lib logdir var log logfile log loglevel info rundir var run defaultpeerport peeraddress clientaddress the logdir should be var log glusterfs and rundir should default to var run gluster | 1 |
815,778 | 30,571,104,361 | IssuesEvent | 2023-07-20 22:21:02 | aws/aws-application-networking-k8s | https://api.github.com/repos/aws/aws-application-networking-k8s | closed | Enhance DNS name discovery | enhancement low priority | Today, the DNS name is in `message`
```
kubectl get httproute -o yaml
apiVersion: v1
items:
- apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
...
status:
parents:
- conditions:
- lastTransitionTime: "2022-11-07T21:32:40Z"
message: 'DNS Name: oct25-parking-default-022170a62d6205d58.7d67968.vpc-service-network-svcs.us-west-2.amazonaws.com'
reason: Reconciled
status: "True"
type: httproute
controllerName: application-networking.k8s.aws/gateway-api-controller
parentRef:
group: gateway.networking.k8s.io
kind: Gateway
name: oct25-my-hotel
...
```
This should be enhanced
* use annotation
* or part of "HOSTNAMES"
```
kubectl get httproute
NAME HOSTNAMES AGE
oct25-parking 4m43s
```
| 1.0 | Enhance DNS name discovery - Today, the DNS name is in `message`
```
kubectl get httproute -o yaml
apiVersion: v1
items:
- apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
...
status:
parents:
- conditions:
- lastTransitionTime: "2022-11-07T21:32:40Z"
message: 'DNS Name: oct25-parking-default-022170a62d6205d58.7d67968.vpc-service-network-svcs.us-west-2.amazonaws.com'
reason: Reconciled
status: "True"
type: httproute
controllerName: application-networking.k8s.aws/gateway-api-controller
parentRef:
group: gateway.networking.k8s.io
kind: Gateway
name: oct25-my-hotel
...
```
This should be enhanced
* use annotation
* or part of "HOSTNAMES"
```
kubectl get httproute
NAME HOSTNAMES AGE
oct25-parking 4m43s
```
| priority | enhance dns name discovery today the dns name is in message kubectl get httproute o yaml apiversion items apiversion gateway networking io kind httproute status parents conditions lasttransitiontime message dns name parking default vpc service network svcs us west amazonaws com reason reconciled status true type httproute controllername application networking aws gateway api controller parentref group gateway networking io kind gateway name my hotel this should be enhanced use annotation or part of hostnames kubectl get httproute name hostnames age parking | 1 |
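Until the enhancement requested above lands (an annotation or a populated HOSTNAMES column), a client can only scrape the DNS name out of the condition message. A hedged sketch of that workaround (the `DNS Name: ` prefix is taken from the status output shown in the record; everything else is an assumption):

```python
from typing import Optional

def dns_name_from_message(message: str) -> Optional[str]:
    """Extract the DNS name from an HTTPRoute condition message, if present."""
    prefix = "DNS Name: "
    if message.startswith(prefix):
        return message[len(prefix):].strip()
    return None
```

This is exactly the kind of brittle string parsing the issue argues against: any change to the message wording silently breaks callers, whereas an annotation or status field would be a stable contract.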
734,046 | 25,336,603,605 | IssuesEvent | 2022-11-18 17:22:40 | GlobalPathogenAnalysisService/gpas-cli | https://api.github.com/repos/GlobalPathogenAnalysisService/gpas-cli | closed | When multiple garbage sample names are provided, ValidationError returns auto incremented fictitious sample names | wontfix low-priority | Probably better to ditch the sample_name in these cases, causing the errors to be collapsed into a single error due to redundant errors being pruned?
https://oc-collab.gc3.ocs.oraclecloud.com/browse/C900000008-816 | 1.0 | When multiple garbage sample names are provided, ValidationError returns auto incremented fictitious sample names - Probably better to ditch the sample_name in these cases, causing the errors to be collapsed into a single error due to redundant errors being pruned?
https://oc-collab.gc3.ocs.oraclecloud.com/browse/C900000008-816 | priority | when multiple garbage sample names are provided validationerror returns auto incremented fictitious sample names probably better to ditch the sample name in these cases causing the errors to be collapsed into a single error due to redundant errors being pruned | 1 |
809,279 | 30,184,299,553 | IssuesEvent | 2023-07-04 10:58:55 | Benjamin-Loison/YouTube-operational-API | https://api.github.com/repos/Benjamin-Loison/YouTube-operational-API | opened | Don't use numbers to identify private instances to disable counting/enumerating them | enhancement low priority medium security | Note that the downside is a more complex URL, while we already have some entropy in `instanceKey`, it's debatable to keep `instanceKey` if make the URL more complex. In fact I don't see any advantage to keep `instanceKey` in such a scenario.
Could for instance use `tr -dc a-z0-9 </dev/urandom | head -c 32 ; echo ''` to generate the subdomain name. | 1.0 | Don't use numbers to identify private instances to disable counting/enumerating them - Note that the downside is a more complex URL, while we already have some entropy in `instanceKey`, it's debatable to keep `instanceKey` if make the URL more complex. In fact I don't see any advantage to keep `instanceKey` in such a scenario.
Could for instance use `tr -dc a-z0-9 </dev/urandom | head -c 32 ; echo ''` to generate the subdomain name. | priority | don t use numbers to identify private instances to disable counting enumerating them note that the downside is a more complex url while we already have some entropy in instancekey it s debatable to keep instancekey if make the url more complex in fact i don t see any advantage to keep instancekey in such a scenario could for instance use tr dc a dev urandom head c echo to generate the subdomain name | 1 |
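The `tr -dc a-z0-9 </dev/urandom | head -c 32` one-liner quoted above has a direct Python equivalent. A 32-character lowercase-alphanumeric label carries about 165 bits of entropy (32 × log2(36)), which makes counting or enumerating instances infeasible. A hedged sketch using the standard library's CSPRNG:

```python
import secrets
import string

def random_subdomain(length: int = 32) -> str:
    """Generate a lowercase-alphanumeric label, like the tr/urandom one-liner."""
    alphabet = string.ascii_lowercase + string.digits  # a-z0-9, 36 symbols
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

`secrets` (rather than `random`) matters here: the names act as unguessable identifiers, so they need a cryptographically secure source, matching the `/dev/urandom` choice in the shell version.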
697,527 | 23,942,619,976 | IssuesEvent | 2022-09-12 02:19:16 | chaotic-aur/packages | https://api.github.com/repos/chaotic-aur/packages | closed | [Request] xcursor-mayaserie | request:new-pkg priority:lowest | ### Link to the package(s) in the AUR
https://aur.archlinux.org/packages/xcursor-mayaserie-white
https://aur.archlinux.org/packages/xcursor-mayaserie-red
https://aur.archlinux.org/packages/xcursor-mayaserie-orange
https://aur.archlinux.org/packages/xcursor-mayaserie-green
https://aur.archlinux.org/packages/xcursor-mayaserie-blue
https://aur.archlinux.org/packages/xcursor-mayaserie-black
### Utility this package has for you
This cursor looks quite unique - see https://www.tromjaro.com/Maya-Cursor/
### Do you consider the package(s) to be useful for every Chaotic-AUR user?
YES!
### Do you consider the package to be useful for feature testing/preview?
- [ ] Yes
### Have you tested if the package builds in a clean chroot?
- [X] Yes
### Does the package's license allow redistributing it?
YES!
### Have you searched the issues to ensure this request is unique?
- [X] YES!
### Have you read the README to ensure this package is not banned?
- [X] YES!
### More information
_No response_ | 1.0 | [Request] xcursor-mayaserie - ### Link to the package(s) in the AUR
https://aur.archlinux.org/packages/xcursor-mayaserie-white
https://aur.archlinux.org/packages/xcursor-mayaserie-red
https://aur.archlinux.org/packages/xcursor-mayaserie-orange
https://aur.archlinux.org/packages/xcursor-mayaserie-green
https://aur.archlinux.org/packages/xcursor-mayaserie-blue
https://aur.archlinux.org/packages/xcursor-mayaserie-black
### Utility this package has for you
This cursor looks quite unique - see https://www.tromjaro.com/Maya-Cursor/
### Do you consider the package(s) to be useful for every Chaotic-AUR user?
YES!
### Do you consider the package to be useful for feature testing/preview?
- [ ] Yes
### Have you tested if the package builds in a clean chroot?
- [X] Yes
### Does the package's license allow redistributing it?
YES!
### Have you searched the issues to ensure this request is unique?
- [X] YES!
### Have you read the README to ensure this package is not banned?
- [X] YES!
### More information
_No response_ | priority | xcursor mayaserie link to the package s in the aur utility this package has for you this cursor looks quite unique see do you consider the package s to be useful for every chaotic aur user yes do you consider the package to be useful for feature testing preview yes have you tested if the package builds in a clean chroot yes does the package s license allow redistributing it yes have you searched the issues to ensure this request is unique yes have you read the readme to ensure this package is not banned yes more information no response | 1 |
460,493 | 13,211,030,048 | IssuesEvent | 2020-08-15 20:20:17 | zorkind/Hellion-Rescue-Project | https://api.github.com/repos/zorkind/Hellion-Rescue-Project | closed | Turret sounds too loud when idling | bug cosmetic low priority | **Describe the bug**
Turret sounds too loud when idling
**To Reproduce**
Steps to reproduce the behavior:
1. Go near any turret
2. Listen to the spinning.
**Expected behavior**
Sound of the spinning should be just noticeable.
**Screenshots**
None.
**Additional context**
None.
| 1.0 | Turret sounds too loud when idling - **Describe the bug**
Turret sounds too loud when idling
**To Reproduce**
Steps to reproduce the behavior:
1. Go near any turret
2. Listen to the spinning.
**Expected behavior**
Sound of the spinning should be just noticeable.
**Screenshots**
None.
**Additional context**
None.
| priority | turret sounds too loud when idling describe the bug turret sounds too loud when idling to reproduce steps to reproduce the behavior go near any turret listen to the spinning expected behavior sound of the spinning should be just noticeable screenshots none additional context none | 1 |
271,457 | 8,483,996,041 | IssuesEvent | 2018-10-26 00:02:15 | ClangBuiltLinux/linux | https://api.github.com/repos/ClangBuiltLinux/linux | closed | -Wenum-conversion in drivers/scsi/isci/{request,host}.c | -Wenum-conversion [BUG] linux [PATCH] Accepted low priority | ```
drivers/scsi/isci/request.c:1629:13: warning: implicit conversion from enumeration type
'enum sci_io_status' to different enumeration type 'enum sci_status'
[-Wenum-conversion]
status = SCI_IO_FAILURE_RESPONSE_VALID;
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/scsi/isci/request.c:1631:12: warning: implicit conversion from enumeration type
'enum sci_io_status' to different enumeration type 'enum sci_status'
[-Wenum-conversion]
status = SCI_IO_FAILURE_RESPONSE_VALID;
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/scsi/isci/request.c:3476:13: warning: implicit conversion from enumeration type
'enum sci_task_status' to different enumeration type 'enum sci_status'
[-Wenum-conversion]
status = sci_controller_start_task(ihost,
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 warnings generated.
drivers/scsi/isci/host.c:2744:10: warning: implicit conversion from enumeration type
'enum sci_status' to different enumeration type 'enum sci_task_status'
[-Wenum-conversion]
return SCI_SUCCESS;
~~~~~~ ^~~~~~~~~~~
drivers/scsi/isci/host.c:2753:9: warning: implicit conversion from enumeration type
'enum sci_status' to different enumeration type 'enum sci_task_status'
[-Wenum-conversion]
return status;
~~~~~~ ^~~~~~
2 warnings generated.
drivers/scsi/iscsi_tcp.c:803:15: warning: implicit conversion from enumeration type
'enum iscsi_host_param' to different enumeration type 'enum iscsi_param'
[-Wenum-conversion]
&addr, param, buf);
^~~~~
1 warning generated.
``` | 1.0 | -Wenum-conversion in drivers/scsi/isci/{request,host}.c - ```
drivers/scsi/isci/request.c:1629:13: warning: implicit conversion from enumeration type
'enum sci_io_status' to different enumeration type 'enum sci_status'
[-Wenum-conversion]
status = SCI_IO_FAILURE_RESPONSE_VALID;
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/scsi/isci/request.c:1631:12: warning: implicit conversion from enumeration type
'enum sci_io_status' to different enumeration type 'enum sci_status'
[-Wenum-conversion]
status = SCI_IO_FAILURE_RESPONSE_VALID;
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/scsi/isci/request.c:3476:13: warning: implicit conversion from enumeration type
'enum sci_task_status' to different enumeration type 'enum sci_status'
[-Wenum-conversion]
status = sci_controller_start_task(ihost,
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 warnings generated.
drivers/scsi/isci/host.c:2744:10: warning: implicit conversion from enumeration type
'enum sci_status' to different enumeration type 'enum sci_task_status'
[-Wenum-conversion]
return SCI_SUCCESS;
~~~~~~ ^~~~~~~~~~~
drivers/scsi/isci/host.c:2753:9: warning: implicit conversion from enumeration type
'enum sci_status' to different enumeration type 'enum sci_task_status'
[-Wenum-conversion]
return status;
~~~~~~ ^~~~~~
2 warnings generated.
drivers/scsi/iscsi_tcp.c:803:15: warning: implicit conversion from enumeration type
'enum iscsi_host_param' to different enumeration type 'enum iscsi_param'
[-Wenum-conversion]
&addr, param, buf);
^~~~~
1 warning generated.
``` | priority | wenum conversion in drivers scsi isci request host c drivers scsi isci request c warning implicit conversion from enumeration type enum sci io status to different enumeration type enum sci status status sci io failure response valid drivers scsi isci request c warning implicit conversion from enumeration type enum sci io status to different enumeration type enum sci status status sci io failure response valid drivers scsi isci request c warning implicit conversion from enumeration type enum sci task status to different enumeration type enum sci status status sci controller start task ihost warnings generated drivers scsi isci host c warning implicit conversion from enumeration type enum sci status to different enumeration type enum sci task status return sci success drivers scsi isci host c warning implicit conversion from enumeration type enum sci status to different enumeration type enum sci task status return status warnings generated drivers scsi iscsi tcp c warning implicit conversion from enumeration type enum iscsi host param to different enumeration type enum iscsi param addr param buf warning generated | 1 |
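All of the diagnostics above share one shape: a value of one enum type is assigned or returned where a different enum type is expected, and the usual remedy is an explicit conversion at the boundary. A hedged Python sketch of that pattern, with illustrative stand-in names rather than the kernel's real definitions:

```python
from enum import IntEnum

# Illustrative stand-ins for the driver's enum types; the real
# definitions live in the drivers/scsi/isci headers.
class SciStatus(IntEnum):
    SCI_SUCCESS = 0
    SCI_FAILURE = 1

class SciTaskStatus(IntEnum):
    SCI_TASK_SUCCESS = 0
    SCI_TASK_FAILURE = 1

def start_task() -> SciTaskStatus:
    # A helper produced a value of the "wrong" enum type,
    # like the implicit conversions clang warns about in C.
    status = SciStatus.SCI_SUCCESS
    # Convert explicitly instead of letting one enum type
    # silently flow into the other's slot.
    return SciTaskStatus(int(status))

print(start_task())
```

In C the analogous fix is an explicit cast such as `(enum sci_status)` at the assignment or return site, which tells both the compiler and the reader that the cross-enum conversion is intentional.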
369,244 | 10,894,280,029 | IssuesEvent | 2019-11-19 08:18:41 | kupiqu/SierraBreezeEnhanced | https://api.github.com/repos/kupiqu/SierraBreezeEnhanced | closed | Option to set title bar side | community enhancement low priority | It would be great if there were an option to configure what side the title bar is on, e.g. the left, or the bottom, right, or top (as usual).
KDecoration2 supports this, so it's up to the decoration to add the functionality. | 1.0 | Option to set title bar side - It would be great if there were an option to configure what side the title bar is on, e.g. the left, or the bottom, right, or top (as usual).
KDecoration2 supports this, so it's up to the decoration to add the functionality. | priority | option to set title bar side it would be great if there were an option to configure what side the title bar is on e g the left or the bottom right or top as usual supports this so it s up to the decoration to add the functionality | 1 |
663,953 | 22,216,854,238 | IssuesEvent | 2022-06-08 03:11:47 | emacs-magus/satch.el | https://api.github.com/repos/emacs-magus/satch.el | closed | familiar-with-block, use-package keyword, and/or advice to demote errors | low priority | In my personal config, I add a `--with-demoted-errors` flag to emacs to demote errors for `straight-use-package`.
This can be used, for example, to ensure that if some isolated block of configuration fails, the rest of your configuration still runs. For example, you could add a `:demoted-errors t` keyword to `use-package` or make that a default keyword if running `emacs --with-demoted-errors`. | 1.0 | familiar-with-block, use-package keyword, and/or advice to demote errors - In my personal config, I add a `--with-demoted-errors` flag to emacs to demote errors for `straight-use-package`.
This can be used, for example, to ensure that if some isolated block of configuration fails, the rest of your configuration still runs. For example, you could add a `:demoted-errors t` keyword to `use-package` or make that a default keyword if running `emacs --with-demoted-errors`. | priority | familiar with block use package keyword and or advice to demote errors in my personal config i add a with demoted errors flag to emacs to demote errors for straight use package this can be used for example to ensure that if some isolated block of configuration fails the rest of your configuration still runs for example you could add a demoted errors t keyword to use package or make that a default keyword if running emacs with demoted errors | 1 |
398,816 | 11,742,374,519 | IssuesEvent | 2020-03-12 00:33:14 | thaliawww/concrexit | https://api.github.com/repos/thaliawww/concrexit | closed | Creating an event without specifying time crashes the request | bug priority: low | In GitLab by @se-bastiaan on Mar 14, 2018, 14:36
### One-sentence description
Creating an event without specifying time crashes the request
### Current behaviour
Crash
### Expected behaviour
Nice error message
### Steps to reproduce
1. Create a new event, only specify the _date_ and no _time_ for the start/end of the event.
2. Save | 1.0 | Creating an event without specifying time crashes the request - In GitLab by @se-bastiaan on Mar 14, 2018, 14:36
### One-sentence description
Creating an event without specifying time crashes the request
### Current behaviour
Crash
### Expected behaviour
Nice error message
### Steps to reproduce
1. Create a new event, only specify the _date_ and no _time_ for the start/end of the event.
2. Save | priority | creating an event without specifying time crashes the request in gitlab by se bastiaan on mar one sentence description creating an event without specifying time crashes the request current behaviour crash expected behaviour nice error message steps to reproduce create a new event only specify the date and no time for the start end of the event save | 1 |
210,651 | 7,192,047,250 | IssuesEvent | 2018-02-02 23:55:52 | AdChain/AdChainRegistryDapp | https://api.github.com/repos/AdChain/AdChainRegistryDapp | opened | Modals in Domains Section | Priority: Low Type: UX Enhancement | In the domains section, when users click on the "CHALLENGE", "VOTE", or "REVEAL" buttons, a pop-up modal appears where they can instantly interact with the domain's status. | 1.0 | Modals in Domains Section - In the domains section, when users click on the "CHALLENGE", "VOTE", or "REVEAL" buttons, a pop-up modal appears where they can instantly interact with the domain's status. | priority | modals in domains section in the domains section when users click on the challenge vote or reveal buttons a pop up modal appears where they can instantly interact with the domain s status | 1 |
228,845 | 7,568,329,885 | IssuesEvent | 2018-04-22 18:53:50 | Darkosto/SevTech-Ages | https://api.github.com/repos/Darkosto/SevTech-Ages | closed | Ghast Meat Staging Issue | Category: Staging Priority: Low Status: Completed Type: Bug | ## Issue / Bug
Ghast Meat is in stage 3 even though you usually end up obtaining it at the end of twilight forest progression in Age 2 at the Ur-Ghast Tower when you kill the ghasts

## Possible Solution
Change Staging for Raw Ghast Meat and Cooked Ghast Meat from stage 3 to stage 2
## Context
Not a major bug/issue, just a small QOL issue
## Client Information
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Modpack Version: 3.05
* Java Version: 1.8.0.161
* Launcher Used: Twitch Launcher
<!-- Please tell us how much memory you have allocated to the game. For Twitch/ATLauncher look in the settings -->
* Memory Allocated: 8192 Mb
<!--- If you're using a server please fill the additional information below -->
* Server/LAN/Single Player: Single Player
* Resourcepack Enabled?: Yes
* Optifine Installed?: Yes | 1.0 | Ghast Meat Staging Issue - ## Issue / Bug
Ghast Meat is in stage 3 even though you usually end up obtaining it at the end of twilight forest progression in Age 2 at the Ur-Ghast Tower when you kill the ghasts

## Possible Solution
Change Staging for Raw Ghast Meat and Cooked Ghast Meat from stage 3 to stage 2
## Context
Not a major bug/issue, just a small QOL issue
## Client Information
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Modpack Version: 3.05
* Java Version: 1.8.0.161
* Launcher Used: Twitch Launcher
<!-- Please tell us how much memory you have allocated to the game. For Twitch/ATLauncher look in the settings -->
* Memory Allocated: 8192 Mb
<!--- If you're using a server please fill the additional information below -->
* Server/LAN/Single Player: Single Player
* Resourcepack Enabled?: Yes
* Optifine Installed?: Yes | priority | ghast meat staging issue issue bug ghast meat is in stage even though you usually end up obtaining it at the end of twilight forest progression in age at the ur ghast tower when you kill the ghasts possible solution change staging for raw ghast meat and cooked ghast meat from stage to stage context not a major bug issue just a small qol issue client information modpack version java version launcher used twitch launcher memory allocated mb server lan single player single player resourcepack enabled yes optifine installed yes | 1 |
583,477 | 17,389,928,819 | IssuesEvent | 2021-08-02 05:31:45 | Automattic/abacus | https://api.github.com/repos/Automattic/abacus | closed | Align minimum difference fields in experiment wizard | [!priority] low [section] experiment management [type] enhancement | When helping experimenters edit their experiment, I stumbled on a few potentially easy quality-of-life upgrades to the experiment wizard metric assignment:
- Make input fields same width for both conversion and revenue metrics
- Move units to right of input fields
- Right align input fields (the numbers)
- Left align the units "pp" and "USD"
Here's a before and after (expected):
## Before

## After

## Action items
- [ ] Make the above changes or discuss/abandon | 1.0 | Align minimum difference fields in experiment wizard - When helping experimenters edit their experiment, I stumbled on a few potentially easy quality-of-life upgrades to the experiment wizard metric assignment:
- Make input fields same width for both conversion and revenue metrics
- Move units to right of input fields
- Right align input fields (the numbers)
- Left align the units "pp" and "USD"
Here's a before and after (expected):
## Before

## After

## Action items
- [ ] Make the above changes or discuss/abandon | priority | align minimum difference fields in experiment wizard when helping experimenters edit their experiment i stumbled on a few potentially easy quality of life upgrades to the experiment wizard metric assignment make input fields same width for both conversion and revenue metrics move units to right of input fields right align input fields the numbers left align the units pp and usd here s a before and after expected before after action items make the above changes or discuss abandon | 1
363,932 | 10,757,023,477 | IssuesEvent | 2019-10-31 12:29:01 | Wipcore/wipcore | https://api.github.com/repos/Wipcore/wipcore | closed | Image list module -> open image on click | low priority | Reuse the modal; no navigation between images is needed to start with. | 1.0 | Image list module -> open image on click - Reuse the modal; no navigation between images is needed to start with. | priority | image list module open image on click reuse the modal no navigation between images is needed to start with | 1
250,966 | 7,993,245,174 | IssuesEvent | 2018-07-20 06:49:34 | modxcms/revolution | https://api.github.com/repos/modxcms/revolution | closed | Rewording for Clarity in create/edit TV page | area-i18n/l10n enhancement priority-3-low state/accepting-pull-request | everettg_99 created Redmine issue ID 7292
_Input Option Values_
Currently reads:
<pre>
Option values for TVs with multiple selectable items, such as dropdown or tag (separate options with || ).
</pre>
Might better include fuller examples:
<pre>
Option values for TVs with multiple selectable items, such as dropdown or tag (separate options with ||, e.g. Cat||Dog or White==#ffffff||Black==#000000).
</pre>
_Default Value_
Currently reads:
<pre>
The default value this TV will have if none is specified.
</pre>
Might better read:
<pre>
The default value will be stored if the user does not specify a value.
</pre>
I just don't think it's clear who or what is doing the specifying in the bit "if none is specified". It also might be good to let the user know that the default value is written to the database somehow, i.e. stored.
| 1.0 | Rewording for Clarity in create/edit TV page - everettg_99 created Redmine issue ID 7292
_Input Option Values_
Currently reads:
<pre>
Option values for TVs with multiple selectable items, such as dropdown or tag (separate options with || ).
</pre>
Might better include fuller examples:
<pre>
Option values for TVs with multiple selectable items, such as dropdown or tag (separate options with ||, e.g. Cat||Dog or White==#ffffff||Black==#000000).
</pre>
_Default Value_
Currently reads:
<pre>
The default value this TV will have if none is specified.
</pre>
Might better read:
<pre>
The default value will be stored if the user does not specify a value.
</pre>
I just don't think it's clear who or what is doing the specifying in the bit "if none is specified". It also might be good to let the user know that the default value is written to the database somehow, i.e. stored.
| priority | rewording for clarity in create edit tv page everettg created redmine issue id input option values currently reads option values for tvs with multiple selectable items such as dropdown or tag separate options with might better include fuller examples option values for tvs with multiple selectable items such as dropdown or tag separate options with e g cat dog or white black ffffff default value currently reads the default value this tv will have if none is specified might better read the default value will be stored if the user does not specify a value i just don t think it s clear who or what is doing the specifying in the bit if none is specified it also might be good to let the user know that the default value is written to the database somehow i e stored | 1 |
711,965 | 24,480,775,583 | IssuesEvent | 2022-10-08 20:05:09 | Sub6Resources/flutter_html | https://api.github.com/repos/Sub6Resources/flutter_html | opened | [BUG] List with list-style-position: inside; and block child collapses margins incorrectly | bug low-priority lists | Working on resolving other list issues and documenting an issue that seems good to fix but that I won't devote resources to immediately:
**Describe the bug:**
<!--- Please provide a clear and concise description of the bug --->
See title. Issue is specifically that in a list item with an inline marker box, the margin of a block child collapses to before the marker box, rather than after the marker box.
**HTML to reproduce the issue:**
<!--- Please provide your HTML code below. If it contains sensitive information please post a minimal reproducible HTML snippet. --->
```html
<html>
<head>
<style>
li {
list-style-position: inside;
}
</style>
</head>
<body>
<div>
<ul>
<li>Hello</li>
<li><p>
Line break?
</p></li>
<li>No line break</li>
<li>World!</li>
</ul>
</div>
</body>
</html>
```
**`Html` widget configuration:**
<!--- Please provide your HTML widget configuration below --->
```dart
Html(
  data: htmlData, // See above
),
```
**Expected behavior:**
<!--- Expected behavior, if applicable, otherwise please delete --->
<img width="174" alt="Screen Shot 2022-10-08 at 2 01 44 PM" src="https://user-images.githubusercontent.com/19274761/194725856-8e215e09-9559-4b8e-b1d9-9f3b3ec4bd5d.png">
**Actual behavior:**
<!--- Screenshots can be helpful to analyze your issue. Please delete this section if you don't provide any. --->
<img width="174" alt="Screen Shot 2022-10-08 at 2 02 11 PM" src="https://user-images.githubusercontent.com/19274761/194725871-35ee3f03-8be4-4c2f-bf62-637f8399dc29.png">
**Device details and Flutter/Dart/`flutter_html` versions:**
<!--- These details can be helpful to analyze your issue. Please delete this section if you don't provide any. --->
Currently on working branch `fix/lists`. | 1.0 | [BUG] List with list-style-position: inside; and block child collapses margins incorrectly - Working on resolving other list issues and documenting an issue that seems good to fix but that I won't devote resources to immediately:
**Describe the bug:**
<!--- Please provide a clear and concise description of the bug --->
See title. Issue is specifically that in a list item with an inline marker box, the margin of a block child collapses to before the marker box, rather than after the marker box.
**HTML to reproduce the issue:**
<!--- Please provide your HTML code below. If it contains sensitive information please post a minimal reproducible HTML snippet. --->
```html
<html>
<head>
<style>
li {
list-style-position: inside;
}
</style>
</head>
<body>
<div>
<ul>
<li>Hello</li>
<li><p>
Line break?
</p></li>
<li>No line break</li>
<li>World!</li>
</ul>
</div>
</body>
</html>
```
**`Html` widget configuration:**
<!--- Please provide your HTML widget configuration below --->
```dart
Html(
  data: htmlData, // See above
),
```
**Expected behavior:**
<!--- Expected behavior, if applicable, otherwise please delete --->
<img width="174" alt="Screen Shot 2022-10-08 at 2 01 44 PM" src="https://user-images.githubusercontent.com/19274761/194725856-8e215e09-9559-4b8e-b1d9-9f3b3ec4bd5d.png">
**Actual behavior:**
<!--- Screenshots can be helpful to analyze your issue. Please delete this section if you don't provide any. --->
<img width="174" alt="Screen Shot 2022-10-08 at 2 02 11 PM" src="https://user-images.githubusercontent.com/19274761/194725871-35ee3f03-8be4-4c2f-bf62-637f8399dc29.png">
**Device details and Flutter/Dart/`flutter_html` versions:**
<!--- These details can be helpful to analyze your issue. Please delete this section if you don't provide any. --->
Currently on working branch `fix/lists`. | priority | list with list style position inside and block child collapses margins incorrectly working on resolving other list issues and documenting an issue that seems good to fix but that i won t devote resources to immediately describe the bug see title issue is specifically that in a list item with an inline marker box the margin of a block child collapses to before the marker box rather than after the marker box html to reproduce the issue html li list style position inside hello line break no line break world html widget configuration html data htmldata see above expected behavior img width alt screen shot at pm src actual behavior img width alt screen shot at pm src device details and flutter dart flutter html versions currently on working branch fix lists | 1 |
640,439 | 20,783,215,624 | IssuesEvent | 2022-03-16 16:28:59 | zeyneplervesarp/swe574-javagang | https://api.github.com/repos/zeyneplervesarp/swe574-javagang | closed | the list of participants should be on the service page - frontend | enhancement frontend low priority difficulty-medium | #15
This is the frontend issue for the participant list requirement. | 1.0 | the list of participants should be on the service page - frontend - #15
This is the frontend issue for the participant list requirement. | priority | the list of participants should be on the service page frontend this is the frontend issue for the participant list requirement | 1
188,353 | 6,775,601,661 | IssuesEvent | 2017-10-27 14:49:03 | fgpv-vpgf/fgpv-vpgf | https://api.github.com/repos/fgpv-vpgf/fgpv-vpgf | closed | Highlighted Feature Blocks Hovertips | bug-type: confusing priority: low problem: bug | The highlight graphic will not have mouse events, and appears to be blocking the events of the real graphic underneath.
To test - Do a zoom to feature. Mouse over it. No tip. Slightly pan the map to remove the highlight. Mouse over it. Tip.
I believe there is already code/css to force a graphic to allow mouse events to pass through it | 1.0 | Highlighted Feature Blocks Hovertips - The highlight graphic will not have mouse events, and appears to be blocking the events of the real graphic underneath.
To test - Do a zoom to feature. Mouse over it. No tip. Slightly pan the map to remove the highlight. Mouse over it. Tip.
I believe there is already code/css to force a graphic to allow mouse events to pass through it | priority | highlighted feature blocks hovertips the highlight graphic will not have mouse events and appears to be blocking the events of the real graphic underneath to test do a zoom to feature mouse over it no tip slightly pan the map to remove the highlight mouse over it tip i believe there is already code css to force a graphic to allow mouse events to pass through it | 1 |
827,628 | 31,789,015,345 | IssuesEvent | 2023-09-13 00:42:03 | medic/cht-core | https://api.github.com/repos/medic/cht-core | opened | Options with long names look wrong in enketo selects | Type: Bug UI/UX Enketo Priority: 3 - Low | <!--
**Important**: This is a public repository. Anyone in the world can see what's posted here. If you are posting screenshots or log files, please **carefully examine them for** the presence of any kind of **protected health information** (PHI). Images or logs containing PHI _must_ be posted in fully-redacted form, with no visible PHI.
-->
**Describe the bug**
Enketo forms with options have strange padding instead of wrapping as expected.
**To Reproduce**
Steps to reproduce the behavior:
1. Set up an enketo form with a dropdown select with an option of "ELGEYO/MARAKWET"
2. Render the form and view the option
3. See error

**Expected behavior**

**Environment**
- Instance:
- Browser:
- Client platform:
- App: webapp
- Version: 4.0.0+
**Additional context**
The whitespace code is here: https://github.com/medic/cht-core/blob/1251fd0f48e9abd4747797227559dfa33b7b2238/webapp/src/css/enketo/medic.less#L23
This was mentioned on the forum: https://forum.communityhealthtoolkit.org/t/spacing-bug-on-select-questions-with-appearance-minimal/3026 | 1.0 | Options with long names look wrong in enketo selects - <!--
**Important**: This is a public repository. Anyone in the world can see what's posted here. If you are posting screenshots or log files, please **carefully examine them for** the presence of any kind of **protected health information** (PHI). Images or logs containing PHI _must_ be posted in fully-redacted form, with no visible PHI.
-->
**Describe the bug**
Enketo forms with options have strange padding instead of wrapping as expected.
**To Reproduce**
Steps to reproduce the behavior:
1. Set up an enketo form with a dropdown select with an option of "ELGEYO/MARAKWET"
2. Render the form and view the option
3. See error

**Expected behavior**

**Environment**
- Instance:
- Browser:
- Client platform:
- App: webapp
- Version: 4.0.0+
**Additional context**
The whitespace code is here: https://github.com/medic/cht-core/blob/1251fd0f48e9abd4747797227559dfa33b7b2238/webapp/src/css/enketo/medic.less#L23
This was mentioned on the forum: https://forum.communityhealthtoolkit.org/t/spacing-bug-on-select-questions-with-appearance-minimal/3026 | priority | options with long names look wrong in enketo selects important this is a public repository anyone in the world can see what s posted here if you are posting screenshots or log files please carefully examine them for the presence of any kind of protected health information phi images or logs containing phi must be posted in fully redacted form with no visible phi describe the bug enketo forms with options have strange padding instead of wrapping as expected to reproduce steps to reproduce the behavior set up an enketo form with a dropdown select with an option of elgeyo marakwet render the form and view the option see error expected behavior environment instance browser client platform app webapp version additional context the whitespace code is here this was mentioned on the forum | 1 |
351,188 | 10,513,822,352 | IssuesEvent | 2019-09-27 21:44:56 | bootstrap-vue/bootstrap-vue | https://api.github.com/repos/bootstrap-vue/bootstrap-vue | closed | docs: A small suggestion | Priority: Low Type: Docs Type: Feedback | There is no denying that Bootstrap is a good component library for fast building PC and mobile code, and I also like to use Bootstrap. But compared with other VUE component libraries (such as elementUI), Bootstrap VUE documentation is not friendly to developers. It is hoped that the document will be optimized in later iterations. Convenient for more developers to use! | 1.0 | docs: A small suggestion - There is no denying that Bootstrap is a good component library for fast building PC and mobile code, and I also like to use Bootstrap. But compared with other VUE component libraries (such as elementUI), Bootstrap VUE documentation is not friendly to developers. It is hoped that the document will be optimized in later iterations. Convenient for more developers to use! | priority | docs a small suggestion there is no denying that bootstrap is a good component library for fast building pc and mobile code and i also like to use bootstrap but compared with other vue component libraries such as elementui bootstrap vue documentation is not friendly to developers it is hoped that the document will be optimized in later iterations convenient for more developers to use | 1 |
437,431 | 12,597,639,144 | IssuesEvent | 2020-06-11 00:25:43 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | context.Clone() on non-root context gives bad error message | component: system framework priority: low team: dynamics type: bug type: feature request | (Reported by @RussTedrake in [this Slack conversation](https://drakedevelopers.slack.com/archives/C2CHRT98E/p1556372070030300))
Attempting to Clone() a subcontext of a Diagram context gave this message:
```
Failure at systems/framework/dependency_tracker.cc:238 in RepairTrackerPointers():
condition 'map_entry != tracker_map.end()' failed.
```
A more appropriate message would be
```
Only the root of a Diagram Context can be cloned.
```
Per Slack discussion linked above, cloning a subcontext is problematic due to connected input ports and cache dependencies. A workaround is:
```c++
new_context = subsystem.AllocateContext();
new_context.SetTimeStateAndParametersFrom(old_subcontext);
subsystem.FixInputPortsFrom(old_subcontext, new_context);
// new_context is now a root context with its input ports fixed
// at whatever value they had in old_subcontext.
```
`context.Clone()` can't work like the above; a system is required in order to perform input port evaluations. However, consider adding a method like:
```c++
new_context = subsystem.CloneContext(old_subcontext);
```
(Maybe better to call it `CloneContextWithFixedInputPorts()` to be clear.) | 1.0 | context.Clone() on non-root context gives bad error message - (Reported by @RussTedrake in [this Slack conversation](https://drakedevelopers.slack.com/archives/C2CHRT98E/p1556372070030300))
Attempting to Clone() a subcontext of a Diagram context gave this message:
```
Failure at systems/framework/dependency_tracker.cc:238 in RepairTrackerPointers():
condition 'map_entry != tracker_map.end()' failed.
```
A more appropriate message would be
```
Only the root of a Diagram Context can be cloned.
```
Per Slack discussion linked above, cloning a subcontext is problematic due to connected input ports and cache dependencies. A workaround is:
```c++
new_context = subsystem.AllocateContext();
new_context.SetTimeStateAndParametersFrom(old_subcontext);
subsystem.FixInputPortsFrom(old_subcontext, new_context);
// new_context is now a root context with its input ports fixed
// at whatever value they had in old_subcontext.
```
`context.Clone()` can't work like the above; a system is required in order to perform input port evaluations. However, consider adding a method like:
```c++
new_context = subsystem.CloneContext(old_subcontext);
```
(Maybe better to call it `CloneContextWithFixedInputPorts()` to be clear.) | priority | context clone on non root context gives bad error message reported by russtedrake in attempting to clone a subcontext of a diagram context gave this message failure at systems framework dependency tracker cc in repairtrackerpointers condition map entry tracker map end failed a more appropriate message would be only the root of a diagram context can be cloned per slack discussion linked above cloning a subcontext is problematic due to connected input ports and cache dependencies a workaround is c new context subsystem allocatecontext new context settimestateandparametersfrom old subcontext subsystem fixinputportsfrom old subcontext new context new context is now a root context with its input ports fixed at whatever value they had in old subcontext context clone can t work like the above a system is required in order to perform input port evaluations however consider adding a method like c new context subsystem clonecontext old subcontext maybe better to call it clonecontextwithfixedinputports to be clear | 1 |
102,954 | 4,163,333,135 | IssuesEvent | 2016-06-18 01:57:33 | facelessuser/BracketHighlighter | https://api.github.com/repos/facelessuser/BracketHighlighter | closed | Feature Request: Directly jump behind bracket | Enhancement Maybe Priority - Low | I think it would be useful to jump directly behind the matching brackets. Currently it is only possible (?) to jump at the inside of the bracket and afterwards to the outside of the bracket. This requires 2 keystrokes and I think it would be helpful to create an option to reduce this to 1.
To demonstrate, which action I am speaking of:

The executed action is available with `BracketHighlighter: Jump to Right Bracket`. | 1.0 | Feature Request: Directly jump behind bracket - I think it would be useful to jump directly behind the matching brackets. Currently it is only possible (?) to jump at the inside of the bracket and afterwards to the outside of the bracket. This requires 2 keystrokes and I think it would be helpful to create an option to reduce this to 1.
To demonstrate, which action I am speaking of:

The executed action is available with `BracketHighlighter: Jump to Right Bracket`. | priority | feature request directly jump behind bracket i think it would be useful to jump directly behind the matching brackets currently it is only possible to jump at the inside of the bracket and afterwards to the outside of the bracket this requires keystrokes and i think it would be helpful to create an option to reduce this to to demonstrate which action i am speaking of the executed action is available with brackethighlighter jump to right bracket | 1 |
300,318 | 9,206,360,938 | IssuesEvent | 2019-03-08 13:32:32 | qissue-bot/QGIS | https://api.github.com/repos/qissue-bot/QGIS | closed | r.out.tiff: does not allow selection of destination directory | Category: GRASS Component: Easy fix? Component: Pull Request or Patch supplied Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Feature request | ---
Author Name: **Paolo Cavallini** (Paolo Cavallini)
Original Redmine Issue: 1628, https://issues.qgis.org/issues/1628
Original Assignee: Lorenzo Masini
---
When exporting a raster to a tiff with r.out.tiff, the user cannot select the destination directory. The tiff gets saved to the home directory, but this should be a choice of the user.
| 1.0 | r.out.tiff: does not allow selection of destination directory - ---
Author Name: **Paolo Cavallini** (Paolo Cavallini)
Original Redmine Issue: 1628, https://issues.qgis.org/issues/1628
Original Assignee: Lorenzo Masini
---
When exporting a raster to a tiff with r.out.tiff, the user cannot select the destination directory. The tiff gets saved to the home directory, but this should be a choice of the user.
| priority | r out tiff does not allow selection of destination directory author name paolo cavallini paolo cavallini original redmine issue original assignee lorenzo masini when exporting a raster to a tiff with r out tiff the user cannot select the destination directory the tiff gets saved to the home directory but this should be a choice of the user | 1 |
195,024 | 6,901,931,193 | IssuesEvent | 2017-11-25 14:26:39 | buzinas/tslint-eslint-rules | https://api.github.com/repos/buzinas/tslint-eslint-rules | closed | Feature request: add computed-property-spacing rule | accepting pr's low priority new rule suggestion | **Versions**
* tslint-eslint-rules: 3.2.3
* tslint: 4.3.1
**Problem**
Config:
```js
module.exports = {
rulesDirectory: 'node_modules/tslint-eslint-rules/dist/rules',
rules: {
'array-bracket-spacing': [true, 'never']
}
};
```
Code
```
Math.floor(arr[ 0]);
```
**Expected behavior**
Get error about extra space:
```js
Math.floor(arr[ 0]);
---------------^
```
**Actual behavior**
No errors.
| 1.0 | Feature request: add computed-property-spacing rule - **Versions**
* tslint-eslint-rules: 3.2.3
* tslint: 4.3.1
**Problem**
Config:
```js
module.exports = {
rulesDirectory: 'node_modules/tslint-eslint-rules/dist/rules',
rules: {
'array-bracket-spacing': [true, 'never']
}
};
```
Code
```
Math.floor(arr[ 0]);
```
**Expected behavior**
Get error about extra space:
```js
Math.floor(arr[ 0]);
---------------^
```
**Actual behavior**
No errors.
| priority | feature request add computed property spacing rule versions tslint eslint rules tslint problem config js module exports rulesdirectory node modules tslint eslint rules dist rules rules array bracket spacing code math floor arr expected behavior get error about extra space js math floor arr actual behavior no errors | 1 |
330,602 | 10,053,209,379 | IssuesEvent | 2019-07-21 14:55:25 | ticket721/contracts | https://api.github.com/repos/ticket721/contracts | closed | d.perf: slow balanceof | [priority] [➖ ] low [status] to do [type] perf | - [ ] Store amount of tickets owned
- [ ] balanceOf returns value instead of traversing | 1.0 | d.perf: slow balanceof - - [ ] Store amount of tickets owned
- [ ] balanceOf returns value instead of traversing | priority | d perf slow balanceof store amount of tickets owned balanceof returns value instead of traversing | 1 |
614,088 | 19,142,097,116 | IssuesEvent | 2021-12-02 00:47:55 | Arsollo/Soen-341-Project | https://api.github.com/repos/Arsollo/Soen-341-Project | closed | Website overall Css needs to be coherent | Layout-Change Second-Priority Low risk/low value | All the HTML pages should have coherent CSS in order for the website to feel more natural | 1.0 | Website overall Css needs to be coherent - All the HTML pages should have coherent CSS in order for the website to feel more natural | priority | website overall css needs to be coherent all the html pages should have coherent css in order for the website to feel more natural | 1 |
717,869 | 24,694,306,713 | IssuesEvent | 2022-10-19 10:53:13 | owncloud/web | https://api.github.com/repos/owncloud/web | closed | Add space between text and button | Priority:p4-low Platform:Web | ### Steps to reproduce
1. open usermanagement
2. edit newly created user before first login

Text and button row look a bit compressed. Add space between | 1.0 | Add space between text and button - ### Steps to reproduce
1. open usermanagement
2. edit newly created user before first login

Text and button row look a bit compressed. Add space between | priority | add space between text and button steps to reproduce open usermanagement edit newly created user before first login text and button row look a bit compressed add space between | 1 |
133,759 | 5,207,817,570 | IssuesEvent | 2017-01-25 01:04:49 | mRemoteNG/mRemoteNG | https://api.github.com/repos/mRemoteNG/mRemoteNG | closed | Tab menu: Change Name - dont show old tab name like older versions | Low Priority ready UI/UX Verified | <!--
Only file GitHub issues for bugs and feature requests. All other topics will be closed.
Before opening an issue, please search for a duplicate or closed issue.
Please provide as much detail as possible for us to fix your issue.
-->
<!-- Bug -->
|||
|--:|---|
|Operating system | Windows 7 x64 |
|mRemoteNG version| 1.75 aplha 3 |
Mouse right buton at tab, select change name.
Older versions show old name at textbox, is a old good feature.
<!-- Feature Request -->
<!-- If you file a feature request, please delete the bug section -->
| 1.0 | Tab menu: Change Name - dont show old tab name like older versions - <!--
Only file GitHub issues for bugs and feature requests. All other topics will be closed.
Before opening an issue, please search for a duplicate or closed issue.
Please provide as much detail as possible for us to fix your issue.
-->
<!-- Bug -->
|||
|--:|---|
|Operating system | Windows 7 x64 |
|mRemoteNG version| 1.75 aplha 3 |
Mouse right buton at tab, select change name.
Older versions show old name at textbox, is a old good feature.
<!-- Feature Request -->
<!-- If you file a feature request, please delete the bug section -->
| priority | tab menu change name dont show old tab name like older versions only file github issues for bugs and feature requests all other topics will be closed before opening an issue please search for a duplicate or closed issue please provide as much detail as possible for us to fix your issue operating system windows mremoteng version aplha mouse right buton at tab select change name older versions show old name at textbox is a old good feature | 1 |
351,012 | 10,511,855,867 | IssuesEvent | 2019-09-27 16:24:00 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth | closed | Failed MC outcomes | :beetle: bug :beetle: :grey_exclamation: priority low | **Mod Version**
96ac9e45
**What expansions do you have installed?**
All
**Please explain your issue in as much detail as possible:**
WCCLS.617 don't give lunatic trait
**Steps to reproduce the issue:**
Fail mind control spell
**Upload an attachment below: .zip of your save, or screenshots:**
<details>
<summary>Click to expand</summary>


</details> | 1.0 | Failed MC outcomes - **Mod Version**
96ac9e45
**What expansions do you have installed?**
All
**Please explain your issue in as much detail as possible:**
WCCLS.617 don't give lunatic trait
**Steps to reproduce the issue:**
Fail mind control spell
**Upload an attachment below: .zip of your save, or screenshots:**
<details>
<summary>Click to expand</summary>


</details> | priority | failed mc outcomes mod version what expansions do you have installed all please explain your issue in as much detail as possible wccls don t give lunatic trait steps to reproduce the issue fail mind control spell upload an attachment below zip of your save or screenshots click to expand | 1 |
519,446 | 15,051,381,163 | IssuesEvent | 2021-02-03 14:02:46 | fasten-project/fasten | https://api.github.com/repos/fasten-project/fasten | closed | API: module's callables endpoint returns 200 even if module doesn't exist | Priority: Low bug good first issue | ## Describe the bug
The endpoint that is meant to return the list of callables for a specific module in the package returns successful response (w/ empty set) even if module doesn't exist for the package.
## To Reproduce
**POST** `https://api.fasten-project.eu/api/mvn/packages/jboss:jbossmq-client/3.2.3/modules/callables`
With body: `abcde` (random string)
-> returns 200
## Expected behavior
The endpoint shall return `404` if module is not found for the package. | 1.0 | API: module's callables endpoint returns 200 even if module doesn't exist - ## Describe the bug
The endpoint that is meant to return the list of callables for a specific module in the package returns successful response (w/ empty set) even if module doesn't exist for the package.
## To Reproduce
**POST** `https://api.fasten-project.eu/api/mvn/packages/jboss:jbossmq-client/3.2.3/modules/callables`
With body: `abcde` (random string)
-> returns 200
## Expected behavior
The endpoint shall return `404` if module is not found for the package. | priority | api module s callables endpoint returns even if module doesn t exist describe the bug the endpoint that is meant to return the list of callables for a specific module in the package returns successful response w empty set even if module doesn t exist for the package to reproduce post with body abcde random string returns expected behavior the endpoint shall return if module is not found for the package | 1 |
512,695 | 14,907,660,755 | IssuesEvent | 2021-01-22 03:45:33 | InfinityGhost/OpenTabletDriver | https://api.github.com/repos/InfinityGhost/OpenTabletDriver | opened | Driver Daemon SetTabletDebug doesn't have an enabled check | bug daemon desktop priority:low | ## Description
<!-- Describe the issue below -->
`SetTabletDebug()` has no check to determine that tablet debugging has already been enabled, allowing for an unintended increase in identical reports.
https://github.com/InfinityGhost/OpenTabletDriver/blob/e7bd0e5641f5abe3cce433dbaa988eb7f560d715/OpenTabletDriver.Daemon/DriverDaemon.cs#L377-L386
## System Information:
<!-- Please fill out this information -->
| Name | Value |
| ---------------- | ----- |
| Software Version | fbeb37130e193ccc829e15f6491269b1056f3fdc | 1.0 | Driver Daemon SetTabletDebug doesn't have an enabled check - ## Description
<!-- Describe the issue below -->
`SetTabletDebug()` has no check to determine that tablet debugging has already been enabled, allowing for an unintended increase in identical reports.
https://github.com/InfinityGhost/OpenTabletDriver/blob/e7bd0e5641f5abe3cce433dbaa988eb7f560d715/OpenTabletDriver.Daemon/DriverDaemon.cs#L377-L386
## System Information:
<!-- Please fill out this information -->
| Name | Value |
| ---------------- | ----- |
| Software Version | fbeb37130e193ccc829e15f6491269b1056f3fdc | priority | driver daemon settabletdebug doesn t have an enabled check description settabletdebug has no check to determine that tablet debugging has already been enabled allowing for an unintended increase in identical reports system information name value software version | 1 |
400,027 | 11,765,775,679 | IssuesEvent | 2020-03-14 18:58:17 | tlienart/Franklin.jl | https://api.github.com/repos/tlienart/Franklin.jl | opened | Support MathJax | enhancement low-priority | It seems some people would prefer MathJax than KaTeX support.
The key thing is to check that Franklin's parser doesn't get in the way of MathJax's (need to check [`convert_math_block`](https://github.com/tlienart/Franklin.jl/blob/bbec0e5d69c01e924761845445e2596d036439a0/src/converter/markdown/blocks.jl#L70))
Then the pre rendering step needs a modified [`js_prerender_katex`](https://github.com/tlienart/Franklin.jl/blob/bbec0e5d69c01e924761845445e2596d036439a0/src/converter/html/prerender.jl#L7) to use [mathjax-node](https://github.com/mathjax/MathJax-node) (I think).
cc @RoyiAvital | 1.0 | Support MathJax - It seems some people would prefer MathJax than KaTeX support.
The key thing is to check that Franklin's parser doesn't get in the way of MathJax's (need to check [`convert_math_block`](https://github.com/tlienart/Franklin.jl/blob/bbec0e5d69c01e924761845445e2596d036439a0/src/converter/markdown/blocks.jl#L70))
Then the pre rendering step needs a modified [`js_prerender_katex`](https://github.com/tlienart/Franklin.jl/blob/bbec0e5d69c01e924761845445e2596d036439a0/src/converter/html/prerender.jl#L7) to use [mathjax-node](https://github.com/mathjax/MathJax-node) (I think).
cc @RoyiAvital | priority | support mathjax it seems some people would prefer mathjax than katex support the key thing is to check that franklin s parser doesn t get in the way of mathjax s need to check then the pre rendering step needs a modified to use i think cc royiavital | 1 |
722,672 | 24,871,194,423 | IssuesEvent | 2022-10-27 15:20:16 | harvard-lil/perma | https://api.github.com/repos/harvard-lil/perma | opened | Use `iterator()` when iterating through large querysets in Celery tasks | bug housekeeping should-be-small database priority-low | Back when, I found that `iterator()` when used with `values_list` was causing querysets to be evaluated twice: we saw the (at the time expensive and optimized) queries running on the database twice. So, we [removed `iterator()`](https://github.com/harvard-lil/perma/commit/3620c87fa7ba19e5a8038561cfb29a1725a92982).
I can no longer reproduce that problem. Lots of things have changed in the meantime: Django upgrades, a migration from MySQL to Postgres, etc.
Let's put `iterator()` back and thereby use RAM more gently. | 1.0 | Use `iterator()` when iterating through large querysets in Celery tasks - Back when, I found that `iterator()` when used with `values_list` was causing querysets to be evaluated twice: we saw the (at the time expensive and optimized) queries running on the database twice. So, we [removed `iterator()`](https://github.com/harvard-lil/perma/commit/3620c87fa7ba19e5a8038561cfb29a1725a92982).
I can no longer reproduce that problem. Lots of things have changed in the meantime: Django upgrades, a migration from MySQL to Postgres, etc.
Let's put `iterator()` back and thereby use RAM more gently. | priority | use iterator when iterating through large querysets in celery tasks back when i found that iterator when used with values list was causing querysets to be evaluated twice we saw the at the time expensive and optimized queries running on the database twice so we i can no longer reproduce that problem lots of things have changed in the meantime django upgrades a migration from mysql to postgres etc let s put iterator back and thereby use ram more gently | 1 |
416,834 | 12,152,020,267 | IssuesEvent | 2020-04-24 21:13:22 | TykTechnologies/tyk | https://api.github.com/repos/TykTechnologies/tyk | closed | Extend gateway upstream caching to support client and server-side instructions | Priority: Low Ready for development wontfix | Add an enum to `CacheOptions` for canonical headers, that specifies `upstream`, `downstream` and `both` as valid setters of cache instructions. | 1.0 | Extend gateway upstream caching to support client and server-side instructions - Add an enum to `CacheOptions` for canonical headers, that specifies `upstream`, `downstream` and `both` as valid setters of cache instructions. | priority | extend gateway upstream caching to support client and server side instructions add an enum to cacheoptions for canonical headers that specifies upstream downstream and both as valid setters of cache instructions | 1 |
436,017 | 12,544,135,055 | IssuesEvent | 2020-06-05 16:43:04 | oppia/oppia-android | https://api.github.com/repos/oppia/oppia-android | closed | HomeFragment - Tablet (Landscape) (Lowfi) | Priority: Essential Status: Pending verification Type: Task Where: Starting flows Workstream: Lowfi UI | Mocks: https://xd.adobe.com/view/d405de00-a871-4f0f-73a0-f8acef30349b-a234/screen/5434c52d-b32b-4666-8b28-cf03b3cbd4cd/L-Home-Screen
Implement low-fi UI for **HomeFragment** tablet landscape mode
**Target PR date**: 7 June 2020
**Target completion date**: 10 June 2020 | 1.0 | HomeFragment - Tablet (Landscape) (Lowfi) - Mocks: https://xd.adobe.com/view/d405de00-a871-4f0f-73a0-f8acef30349b-a234/screen/5434c52d-b32b-4666-8b28-cf03b3cbd4cd/L-Home-Screen
Implement low-fi UI for **HomeFragment** tablet landscape mode
**Target PR date**: 7 June 2020
**Target completion date**: 10 June 2020 | priority | homefragment tablet landscape lowfi mocks implement low fi ui for homefragment tablet landscape mode target pr date june target completion date june | 1 |
568,708 | 16,986,791,409 | IssuesEvent | 2021-06-30 15:12:38 | Blackoutburst/Wally | https://api.github.com/repos/Blackoutburst/Wally | opened | Discord name on canvas doesn't support unicode character | bug low priority | **Describe the bug**
Canvas does not support unicode character in discord name
**Screenshots**
If applicable, add screenshots to help explain your problem.
 | 1.0 | Discord name on canvas doesn't support unicode character - **Describe the bug**
Canvas does not support unicode character in discord name
**Screenshots**
If applicable, add screenshots to help explain your problem.
 | priority | discord name on canvas doesn t support unicode character describe the bug canvas does not support unicode character in discord name screenshots if applicable add screenshots to help explain your problem | 1 |
701,875 | 24,112,805,309 | IssuesEvent | 2022-09-20 12:43:55 | dnnsoftware/Dnn.Platform | https://api.github.com/repos/dnnsoftware/Dnn.Platform | closed | Update to latest json.NET version 12 from 10 | Type: Enhancement Alert: Pinned Area: Platform > Library Effort: Low Priority: Medium Status: Ready for Development | <!--
Please read contribution guideline first: https://github.com/dnnsoftware/Dnn.Platform/blob/development/CONTRIBUTING.md
Any potential security issues should be sent to security@dnnsoftware.com, rather than posted on GitHub
-->
## Description of bug
Latest installations of DNN show version 10 from 2017 installed. I feel it should be upgraded to 12 to get latest bug fixes.
https://github.com/JamesNK/Newtonsoft.Json/releases | 1.0 | Update to latest json.NET version 12 from 10 - <!--
Please read contribution guideline first: https://github.com/dnnsoftware/Dnn.Platform/blob/development/CONTRIBUTING.md
Any potential security issues should be sent to security@dnnsoftware.com, rather than posted on GitHub
-->
## Description of bug
Latest installations of DNN show version 10 from 2017 installed. I feel it should be upgraded to 12 to get latest bug fixes.
https://github.com/JamesNK/Newtonsoft.Json/releases | priority | update to latest json net version from please read contribution guideline first any potential security issues should be sent to security dnnsoftware com rather than posted on github description of bug latest installations of dnn show version from installed i feel it should be upgraded to to get latest bug fixes | 1 |
717,069 | 24,659,879,652 | IssuesEvent | 2022-10-18 05:22:24 | appliedAI-Initiative/pyDVL | https://api.github.com/repos/appliedAI-Initiative/pyDVL | closed | Interactive notebooks on Binder and/or Google Colab | enhancement Low Priority | In order to make it easier for people to play around with the library it would be nice to create interactive notebooks with [binder](https://mybinder.org/) and/or [google colab](https://colab.research.google.com/).
This can of course only be done after the repository has been made publicly available.
- [x] Add Binder links
- [ ] Add Colab links | 1.0 | Interactive notebooks on Binder and/or Google Colab - In order to make it easier for people to play around with the library it would be nice to create interactive notebooks with [binder](https://mybinder.org/) and/or [google colab](https://colab.research.google.com/).
This can of course only be done after the repository has been made publicly available.
- [x] Add Binder links
- [ ] Add Colab links | priority | interactive notebooks on binder and or google colab in order to make it easier for people to play around with the library it would be nice to create interactive notebooks with and or this can of course only be done after the repository has been made publicly available add binder links add colab links | 1 |
325,282 | 9,922,004,442 | IssuesEvent | 2019-06-30 23:33:06 | ODIQueensland/data-curator | https://api.github.com/repos/ODIQueensland/data-curator | closed | On prompting for URL, place cursor in prompt | est:Minor f:Feature-request fn:Open-Data priority:Low |
### Desired Behaviour
On open data package zip or json from URL, the cursor is not placed in the data entry box
<img width="398" alt="screenshot 2018-04-27 07 28 27" src="https://user-images.githubusercontent.com/9379524/39333019-ac8f694e-49ec-11e8-93f4-edc6938fd683.png">
It should be
<img width="406" alt="screenshot 2018-04-27 07 30 53" src="https://user-images.githubusercontent.com/9379524/39333090-f0dc544a-49ec-11e8-92d5-e1e8b6976466.png">
| 1.0 | On prompting for URL, place cursor in prompt -
### Desired Behaviour
On open data package zip or json from URL, the cursor is not placed in the data entry box
<img width="398" alt="screenshot 2018-04-27 07 28 27" src="https://user-images.githubusercontent.com/9379524/39333019-ac8f694e-49ec-11e8-93f4-edc6938fd683.png">
It should be
<img width="406" alt="screenshot 2018-04-27 07 30 53" src="https://user-images.githubusercontent.com/9379524/39333090-f0dc544a-49ec-11e8-92d5-e1e8b6976466.png">
| priority | on prompting for url place cursor in prompt desired behaviour on open data package zip or json from url the cursor is not placed in the data entry box img width alt screenshot src it should be img width alt screenshot src | 1 |
256,877 | 8,130,017,220 | IssuesEvent | 2018-08-17 16:57:49 | goharbor/harbor | https://api.github.com/repos/goharbor/harbor | closed | It takes very long for smtp.exmail.qq.com to return if credentials are incorrect. | dependency/external kind/bug priority/low | With current code if the mail server is set to
host: smtp.exmail.qq.com
port: 465
ssl: true
And when the credential is incorrect, the function ```sendMailWithTLS``` takes 1 minute to return.
Per investigation seems smtp package does not work very well with this mail server.
An issue has been opened to golang to track:
https://github.com/golang/go/issues/18094
We don't plan to hack go's library to workaround this issue at least for 0.5.0. | 1.0 | It takes very long for smtp.exmail.qq.com to return if credentials are incorrect. - With current code if the mail server is set to
host: smtp.exmail.qq.com
port: 465
ssl: true
And when the credential is incorrect, the function ```sendMailWithTLS``` takes 1 minute to return.
Per investigation seems smtp package does not work very well with this mail server.
An issue has been opened to golang to track:
https://github.com/golang/go/issues/18094
We don't plan to hack go's library to workaround this issue at least for 0.5.0. | priority | it takes very long for smtp exmail qq com to return if credentials are incorrect with current code if the mail server is set to host smtp exmail qq com port ssl true and when the credential is incorrect the function sendmailwithtls takes minute to return per investigation seems smtp package does not work very well with this mail server an issue has been opened to golang to track we don t plan to hack go s library to workaround this issue at least for | 1 |
365,023 | 10,774,854,316 | IssuesEvent | 2019-11-03 09:58:21 | SOSML/SOSML | https://api.github.com/repos/SOSML/SOSML | closed | Free type variables broken | p9: low priority s:elaboration t:squid | ``` SML
val s = ref [];
fun push a x = a := (x::(!a));
push s 1;
push s true;
s;
``` | 1.0 | Free type variables broken - ``` SML
val s = ref [];
fun push a x = a := (x::(!a));
push s 1;
push s true;
s;
``` | priority | free type variables broken sml val s ref fun push a x a x a push s push s true s | 1 |
633,889 | 20,269,257,298 | IssuesEvent | 2022-02-15 14:52:18 | slsdetectorgroup/slsDetectorPackage | https://api.github.com/repos/slsdetectorgroup/slsDetectorPackage | closed | Disable file write | priority - Low action - Change status - resolved | <!-- Preview changes before submitting -->
<!-- Please fill out everything with an *, as this report will be discarded otherwise -->
<!-- This is a comment, the syntax is a bit different from c++ or bash -->
##### *Detector type:
<!-- If applicable, Eiger, Jungfrau, Mythen3, Gotthard2, Gotthard, Moench, ChipTestBoard -->
##### *Software Package Version:
<!-- developer, 4.2.0, 4.1.1, etc -->
##### Priority:
<!-- Super Low, Low, Medium, High, Super High -->
Low
##### *State the change request:
<!-- A clear and concise description of what the change is to an existing feature -->
Disable file write by default
##### Is your change request related to a problem. Please describe:
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
##### Additional context:
<!-- Add any other context about the feature here -->
| 1.0 | Disable file write - <!-- Preview changes before submitting -->
<!-- Please fill out everything with an *, as this report will be discarded otherwise -->
<!-- This is a comment, the syntax is a bit different from c++ or bash -->
##### *Detector type:
<!-- If applicable, Eiger, Jungfrau, Mythen3, Gotthard2, Gotthard, Moench, ChipTestBoard -->
##### *Software Package Version:
<!-- developer, 4.2.0, 4.1.1, etc -->
##### Priority:
<!-- Super Low, Low, Medium, High, Super High -->
Low
##### *State the change request:
<!-- A clear and concise description of what the change is to an existing feature -->
Disable file write by default
##### Is your change request related to a problem. Please describe:
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
##### Additional context:
<!-- Add any other context about the feature here -->
| priority | disable file write detector type software package version priority low state the change request disable file write by default is your change request related to a problem please describe additional context | 1 |
451,636 | 13,039,575,016 | IssuesEvent | 2020-07-28 16:59:14 | rathena/rathena | https://api.github.com/repos/rathena/rathena | closed | about skill Round Trip & Flaming Petals and some item wrong script | component:database mode:renewal priority:low status:confirmed type:bug | <!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. -->
* **rAthena Hash**: https://github.com/rathena/rathena/commit/9000948c3c524c98f472da716164ac3d5954706f
<!-- Please specify the rAthena [GitHub hash](https://help.github.com/articles/autolinked-references-and-urls/#commit-shas) on which you encountered this issue.
How to get your GitHub Hash:
1. cd your/rAthena/directory/
2. git rev-parse --short HEAD
3. Copy the resulting hash.
-->
* **Client Date**: 20200401
<!-- Please specify the client date you used. -->
* **Server Mode**: Renewal
<!-- Which mode does your server use: Pre-Renewal or Renewal? -->
* **Description of Issue**:
* Result: <!-- Describe the issue that you experienced in detail. -->
1. Round trip not affected by Long range modifiers
2. When used Fire Charm , the Flaming Petals only increased 10% damage.
3. item 1384 script bonus4 bAutoSpellOnSkill,"BS_HAMMERFALL",50,3,"SM_MAGNUM"; to bonus4 bAutoSpellOnSkill,"BS_HAMMERFALL","SM_MAGNUM",50,3;
4. item comdo
(1) 28910:20800 and 20800:4045 miss job limited (Guillotine_Cross) https://www.divine-pride.net/database/item/20800/ (JRO item description)
(2) 19204:29353 and 19204:29354 change WS_CARTBOOST to GN_CARTBOOST because WS_CARTBOOST max skill level is 1.
* Expected Result: <!-- Describe what you would expect to happen in detail. -->
* How to Reproduce: <!-- If you have not stated in the description of the result already, please give us a short guide how we can reproduce your issue. -->
1. Round trip :
(1) @item 28241 x2 and 4633 x2
(2) Add card to one weapon (longatkrate+10% x2)
(3) Tested with two weapon and there is no any damage increased.
2. Flaming Petals
According to https://github.com/rathena/rathena/pull/4425 and https://www.divine-pride.net/forum/index.php?/topic/3674-note-update-schedule-in-the-first-quarter-of-2019/&tab=comments#comment-6599
Thers is no any info changed skill damage ratio from 20% to 10%?
* Official Information:<!-- If possible, provide information from official servers (kRO or other sources) which prove that the result is wrong. Please take into account that iRO (especially iRO Wiki) is not always the same as kRO. -->
<!-- * _NOTE: Make sure you quote ``` `@atcommands` ``` just like this so that you do not tag uninvolved GitHub users!_ -->
* **Modifications that may affect results**:
<!-- * Please provide any information that could influence the expected result. -->
<!-- * This can be either configurations you changed, database values you changed, or even external source modifications. -->
| 1.0 | about skill Round Trip & Flaming Petals and some item wrong script - <!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. -->
* **rAthena Hash**: https://github.com/rathena/rathena/commit/9000948c3c524c98f472da716164ac3d5954706f
<!-- Please specify the rAthena [GitHub hash](https://help.github.com/articles/autolinked-references-and-urls/#commit-shas) on which you encountered this issue.
How to get your GitHub Hash:
1. cd your/rAthena/directory/
2. git rev-parse --short HEAD
3. Copy the resulting hash.
-->
* **Client Date**: 20200401
<!-- Please specify the client date you used. -->
* **Server Mode**: Renewal
<!-- Which mode does your server use: Pre-Renewal or Renewal? -->
* **Description of Issue**:
* Result: <!-- Describe the issue that you experienced in detail. -->
1. Round trip not affected by Long range modifiers
2. When used Fire Charm , the Flaming Petals only increased 10% damage.
3. item 1384 script bonus4 bAutoSpellOnSkill,"BS_HAMMERFALL",50,3,"SM_MAGNUM"; to bonus4 bAutoSpellOnSkill,"BS_HAMMERFALL","SM_MAGNUM",50,3;
4. item comdo
(1) 28910:20800 and 20800:4045 miss job limited (Guillotine_Cross) https://www.divine-pride.net/database/item/20800/ (JRO item description)
(2) 19204:29353 and 19204:29354 change WS_CARTBOOST to GN_CARTBOOST because WS_CARTBOOST max skill level is 1.
* Expected Result: <!-- Describe what you would expect to happen in detail. -->
* How to Reproduce: <!-- If you have not stated in the description of the result already, please give us a short guide how we can reproduce your issue. -->
1. Round trip :
(1) @item 28241 x2 and 4633 x2
(2) Add card to one weapon (longatkrate+10% x2)
(3) Tested with two weapon and there is no any damage increased.
2. Flaming Petals
According to https://github.com/rathena/rathena/pull/4425 and https://www.divine-pride.net/forum/index.php?/topic/3674-note-update-schedule-in-the-first-quarter-of-2019/&tab=comments#comment-6599
Thers is no any info changed skill damage ratio from 20% to 10%?
* Official Information:<!-- If possible, provide information from official servers (kRO or other sources) which prove that the result is wrong. Please take into account that iRO (especially iRO Wiki) is not always the same as kRO. -->
<!-- * _NOTE: Make sure you quote ``` `@atcommands` ``` just like this so that you do not tag uninvolved GitHub users!_ -->
* **Modifications that may affect results**:
<!-- * Please provide any information that could influence the expected result. -->
<!-- * This can be either configurations you changed, database values you changed, or even external source modifications. -->
| priority | about skill round trip flaming petals and some item wrong script rathena hash please specify the rathena on which you encountered this issue how to get your github hash cd your rathena directory git rev parse short head copy the resulting hash client date server mode renewal description of issue result round trip not affected by long range modifiers when used fire charm the flaming petals only increased damage item script bautospellonskill bs hammerfall sm magnum to bautospellonskill bs hammerfall sm magnum item comdo and miss job limited guillotine cross jro item description and change ws cartboost to gn cartboost because ws cartboost max skill level is expected result how to reproduce round trip item and add card to one weapon longatkrate tested with two weapon and there is no any damage increased flaming petals according to and thers is no any info changed skill damage ratio from to official information modifications that may affect results | 1 |
745,148 | 25,972,235,422 | IssuesEvent | 2022-12-19 12:12:47 | KinsonDigital/CASL | https://api.github.com/repos/KinsonDigital/CASL | opened | 🚧Update build system to CICD | workflow high priority preview | ### Complete The Item Below
- [X] I have updated the title without removing the 🚧 emoji.
### Description
Update the build system to use [CICD](https://github.com/KinsonDigital/CICD).
This will require removing all of the current workflows and using the workflows that come with CICD.
Update CICD to the latest version as of the implementation of this issue.
### Acceptance Criteria
- [ ] _**CICD**_ dotnet tool added to the solution
- [ ] Workflows replaced/updated.
### ToDo Items
- [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below.
- [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below.
- [X] Issue linked to the correct project _(if applicable)_.
- [X] Issue linked to the correct milestone _(if applicable)_.
- [ ] Draft pull request created and linked to this issue _(only required with code changes)_.
### Issue Dependencies
_No response_
### Related Work
_No response_
### Additional Information:
**_<details closed><summary>Change Type Labels</summary>_**
| Change Type | Label |
|---------------------|----------------------|
| Bug Fixes | `🐛bug` |
| Breaking Changes | `🧨breaking changes` |
| New Feature | `✨new feature` |
| Workflow Changes | `workflow` |
| Code Doc Changes | `🗒️documentation/code` |
| Product Doc Changes | `📝documentation/product` |
</details>
**_<details closed><summary>Priority Type Labels</summary>_**
| Priority Type | Label |
|---------------------|-------------------|
| Low Priority | `low priority` |
| Medium Priority | `medium priority` |
| High Priority | `high priority` |
</details>
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct. | 1.0 | 🚧Update build system to CICD - ### Complete The Item Below
- [X] I have updated the title without removing the 🚧 emoji.
### Description
Update the build system to use [CICD](https://github.com/KinsonDigital/CICD).
This will require removing all of the current workflows and using the workflows that come with CICD.
Update CICD to the latest version as of the implementation of this issue.
### Acceptance Criteria
- [ ] _**CICD**_ dotnet tool added to the solution
- [ ] Workflows replaced/updated.
### ToDo Items
- [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below.
- [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below.
- [X] Issue linked to the correct project _(if applicable)_.
- [X] Issue linked to the correct milestone _(if applicable)_.
- [ ] Draft pull request created and linked to this issue _(only required with code changes)_.
### Issue Dependencies
_No response_
### Related Work
_No response_
### Additional Information:
**_<details closed><summary>Change Type Labels</summary>_**
| Change Type | Label |
|---------------------|----------------------|
| Bug Fixes | `🐛bug` |
| Breaking Changes | `🧨breaking changes` |
| New Feature | `✨new feature` |
| Workflow Changes | `workflow` |
| Code Doc Changes | `🗒️documentation/code` |
| Product Doc Changes | `📝documentation/product` |
</details>
**_<details closed><summary>Priority Type Labels</summary>_**
| Priority Type | Label |
|---------------------|-------------------|
| Low Priority | `low priority` |
| Medium Priority | `medium priority` |
| High Priority | `high priority` |
</details>
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct. | priority | 🚧update build system to cicd complete the item below i have updated the title without removing the 🚧 emoji description update the build system to use this will require removing all of the current workflows and using the workflows that come with cicd update cicd to the latest version as of the implementation of this issue acceptance criteria cicd dotnet tool added to the solution workflows replaced updated todo items change type labels added to this issue refer to the change type labels section below priority label added to this issue refer to the priority type labels section below issue linked to the correct project if applicable issue linked to the correct milestone if applicable draft pull request created and linked to this issue only required with code changes issue dependencies no response related work no response additional information change type labels change type label bug fixes 🐛bug breaking changes 🧨breaking changes new feature ✨new feature workflow changes workflow code doc changes 🗒️documentation code product doc changes 📝documentation product priority type labels priority type label low priority low priority medium priority medium priority high priority high priority code of conduct i agree to follow this project s code of conduct | 1 |
297,449 | 9,168,369,154 | IssuesEvent | 2019-03-02 21:39:56 | project-koku/koku | https://api.github.com/repos/project-koku/koku | opened | Typo/extra space in error message | bug priority - low | **Describe the bug**
Typo/extra space in error message
**To Reproduce**
Steps to reproduce the behavior:
1. Go to v1/providers/
2. Try to add an AWS provider without specifying 'Provider Resource Name'
```
"detail": "Unable to obtain credentials with using .",
```
Error message should actually say something like 'Provider resource name must not be blank'
Also I'm not sure the wording makes sense in general when you provider an ARN:
```
"detail": "Unable to obtain credentials with using test.",
```
'with using'
I think we should drop one word or reword this.
'Unable to obtain credentials using test'
| 1.0 | Typo/extra space in error message - **Describe the bug**
Typo/extra space in error message
**To Reproduce**
Steps to reproduce the behavior:
1. Go to v1/providers/
2. Try to add an AWS provider without specifying 'Provider Resource Name'
```
"detail": "Unable to obtain credentials with using .",
```
Error message should actually say something like 'Provider resource name must not be blank'
Also I'm not sure the wording makes sense in general when you provider an ARN:
```
"detail": "Unable to obtain credentials with using test.",
```
'with using'
I think we should drop one word or reword this.
'Unable to obtain credentials using test'
| priority | typo extra space in error message describe the bug typo extra space in error message to reproduce steps to reproduce the behavior go to providers try to add an aws provider without specifying provider resource name detail unable to obtain credentials with using error message should actually say something like provider resource name must not be blank also i m not sure the wording makes sense in general when you provider an arn detail unable to obtain credentials with using test with using i think we should drop one word or reword this unable to obtain credentials using test | 1 |
610,044 | 18,892,825,975 | IssuesEvent | 2021-11-15 14:56:51 | forumone/gesso | https://api.github.com/repos/forumone/gesso | opened | [Gesso 5] Additional keysort issues | low priority | Split from #49
```
{% set numbers = {
'2': 'Number 2',
'0': 'Number 0',
'0.5': 'Number 0.5',
'3': 'Number 3',
'2.5': 'Number 2.5',
'1': 'Number 1',
} %}
{% set letters = {
'c': 'Letter C',
'b': 'Letter B',
'e': 'Letter E',
'a': 'Letter A',
'd': 'Letter D',
} %}
{% set sorted_numbers = numbers|keysort %}
{% set sorted_letters = letters|keysort %}
```
sorts as
```
Number 0
Number 1
Number 2
Number 3
2,0,0.5,3,2.5,1
Number 0.5
Number 2.5
Letter D
c,b,e,a,d
Letter A
Letter E
Letter B
Letter C
```
in Storybook.
In addition,
```
{% set data = {
'a': 'Letter A',
'2': 'Number 2',
'0': 'Number 0',
'0.5': 'Number 0.5',
'b': 'Letter B',
'3': 'Number 3',
'c': 'Letter C',
'2.5': 'Number 2.5',
'1': 'Number 1',
} %}
{% set sorted_data = data|keysort %}
<ol>
{% for item in sorted_data %}
<li>{{ item|trim }}</li>
{% endfor %}
</ol>
```
sorts as
```
Number 0
Number 1
Number 2
Number 3
a,2,0,0.5,b,3,c,2.5,1
Number 2.5
Letter C
Letter B
Number 0.5
Letter A
```
in Storybook. Better would be to match the Drupal ksort() results for consistency:
```
"a" => "Letter A"
0 => "Number 0"
"0.5" => "Number 0.5"
"2.5" => "Number 2.5"
"b" => "Letter B"
"c" => "Letter C"
1 => "Number 1"
2 => "Number 2"
3 => "Number 3"
``` | 1.0 | [Gesso 5] Additional keysort issues - Split from #49
```
{% set numbers = {
'2': 'Number 2',
'0': 'Number 0',
'0.5': 'Number 0.5',
'3': 'Number 3',
'2.5': 'Number 2.5',
'1': 'Number 1',
} %}
{% set letters = {
'c': 'Letter C',
'b': 'Letter B',
'e': 'Letter E',
'a': 'Letter A',
'd': 'Letter D',
} %}
{% set sorted_numbers = numbers|keysort %}
{% set sorted_letters = letters|keysort %}
```
sorts as
```
Number 0
Number 1
Number 2
Number 3
2,0,0.5,3,2.5,1
Number 0.5
Number 2.5
Letter D
c,b,e,a,d
Letter A
Letter E
Letter B
Letter C
```
in Storybook.
In addition,
```
{% set data = {
'a': 'Letter A',
'2': 'Number 2',
'0': 'Number 0',
'0.5': 'Number 0.5',
'b': 'Letter B',
'3': 'Number 3',
'c': 'Letter C',
'2.5': 'Number 2.5',
'1': 'Number 1',
} %}
{% set sorted_data = data|keysort %}
<ol>
{% for item in sorted_data %}
<li>{{ item|trim }}</li>
{% endfor %}
</ol>
```
sorts as
```
Number 0
Number 1
Number 2
Number 3
a,2,0,0.5,b,3,c,2.5,1
Number 2.5
Letter C
Letter B
Number 0.5
Letter A
```
in Storybook. Better would be to match the Drupal ksort() results for consistency:
```
"a" => "Letter A"
0 => "Number 0"
"0.5" => "Number 0.5"
"2.5" => "Number 2.5"
"b" => "Letter B"
"c" => "Letter C"
1 => "Number 1"
2 => "Number 2"
3 => "Number 3"
``` | priority | additional keysort issues split from set numbers number number number number number number set letters c letter c b letter b e letter e a letter a d letter d set sorted numbers numbers keysort set sorted letters letters keysort sorts as number number number number number number letter d c b e a d letter a letter e letter b letter c in storybook in addition set data a letter a number number number b letter b number c letter c number number set sorted data data keysort for item in sorted data item trim endfor sorts as number number number number a b c number letter c letter b number letter a in storybook better would be to match the drupal ksort results for consistency a letter a number number number b letter b c letter c number number number | 1 |
417,766 | 12,178,864,024 | IssuesEvent | 2020-04-28 09:42:25 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [commons] Fix warning messages when building commons | priority: low quality | Please fix the following warning messages when building commons:
```
[WARNING] /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/src/main/java/org/craftercms/commons/upgrade/impl/pipeline/DefaultUpgradePipelineFactoryImpl.java: /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/src/main/java/org/craftercms/commons/upgrade/impl/pipeline/DefaultUpgradePipelineFactoryImpl.java uses unchecked or unsafe operations.
[WARNING] /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/src/main/java/org/craftercms/commons/upgrade/impl/pipeline/DefaultUpgradePipelineFactoryImpl.java: Recompile with -Xlint:unchecked for details.
```
```
3 warnings
[WARNING] Javadoc Warnings
[WARNING] /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/src/main/java/org/craftercms/commons/upgrade/impl/pipeline/DefaultUpgradePipelineFactoryImpl.java:67: warning - Tag @link: reference not found: this#DEFAULT_PIPELINE_PREFIX
[WARNING] /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/src/main/java/org/craftercms/commons/upgrade/impl/pipeline/DefaultUpgradePipelineFactoryImpl.java:67: warning - Tag @link: reference not found: this#DEFAULT_PIPELINE_PREFIX
[WARNING] /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/src/main/java/org/craftercms/commons/upgrade/impl/pipeline/DefaultUpgradePipelineFactoryImpl.java:67: warning - Tag @link: reference not found: this#DEFAULT_PIPELINE_PREFIX
[INFO] Building jar: /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/target/crafter-commons-upgrade-manager-3.1.5-SNAPSHOT-javadoc.jar
```
| 1.0 | [commons] Fix warning messages when building commons - Please fix the following warning messages when building commons:
```
[WARNING] /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/src/main/java/org/craftercms/commons/upgrade/impl/pipeline/DefaultUpgradePipelineFactoryImpl.java: /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/src/main/java/org/craftercms/commons/upgrade/impl/pipeline/DefaultUpgradePipelineFactoryImpl.java uses unchecked or unsafe operations.
[WARNING] /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/src/main/java/org/craftercms/commons/upgrade/impl/pipeline/DefaultUpgradePipelineFactoryImpl.java: Recompile with -Xlint:unchecked for details.
```
```
3 warnings
[WARNING] Javadoc Warnings
[WARNING] /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/src/main/java/org/craftercms/commons/upgrade/impl/pipeline/DefaultUpgradePipelineFactoryImpl.java:67: warning - Tag @link: reference not found: this#DEFAULT_PIPELINE_PREFIX
[WARNING] /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/src/main/java/org/craftercms/commons/upgrade/impl/pipeline/DefaultUpgradePipelineFactoryImpl.java:67: warning - Tag @link: reference not found: this#DEFAULT_PIPELINE_PREFIX
[WARNING] /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/src/main/java/org/craftercms/commons/upgrade/impl/pipeline/DefaultUpgradePipelineFactoryImpl.java:67: warning - Tag @link: reference not found: this#DEFAULT_PIPELINE_PREFIX
[INFO] Building jar: /Users/vita/temp/test3/craftercms/src/commons/upgrade-manager/target/crafter-commons-upgrade-manager-3.1.5-SNAPSHOT-javadoc.jar
```
| priority | fix warning messages when building commons please fix the following warning messages when building commons users vita temp craftercms src commons upgrade manager src main java org craftercms commons upgrade impl pipeline defaultupgradepipelinefactoryimpl java users vita temp craftercms src commons upgrade manager src main java org craftercms commons upgrade impl pipeline defaultupgradepipelinefactoryimpl java uses unchecked or unsafe operations users vita temp craftercms src commons upgrade manager src main java org craftercms commons upgrade impl pipeline defaultupgradepipelinefactoryimpl java recompile with xlint unchecked for details warnings javadoc warnings users vita temp craftercms src commons upgrade manager src main java org craftercms commons upgrade impl pipeline defaultupgradepipelinefactoryimpl java warning tag link reference not found this default pipeline prefix users vita temp craftercms src commons upgrade manager src main java org craftercms commons upgrade impl pipeline defaultupgradepipelinefactoryimpl java warning tag link reference not found this default pipeline prefix users vita temp craftercms src commons upgrade manager src main java org craftercms commons upgrade impl pipeline defaultupgradepipelinefactoryimpl java warning tag link reference not found this default pipeline prefix building jar users vita temp craftercms src commons upgrade manager target crafter commons upgrade manager snapshot javadoc jar | 1 |
247,399 | 7,918,225,376 | IssuesEvent | 2018-07-04 12:37:04 | qutebrowser/qutebrowser | https://api.github.com/repos/qutebrowser/qutebrowser | opened | Add setting for QtWebEngine process models | component: QtWebEngine easy priority: 2 - low qt: 5.11 | See https://doc.qt.io/qt-5/qtwebengine-features.html#process-models and [QTBUG-65561](https://bugreports.qt.io/browse/QTBUG-65561). Likely needs Qt 5.11.
Not sure if we should add a setting value for `single-process` given its security/stability implications. | 1.0 | Add setting for QtWebEngine process models - See https://doc.qt.io/qt-5/qtwebengine-features.html#process-models and [QTBUG-65561](https://bugreports.qt.io/browse/QTBUG-65561). Likely needs Qt 5.11.
Not sure if we should add a setting value for `single-process` given its security/stability implications. | priority | add setting for qtwebengine process models see and likely needs qt not sure if we should add a setting value for single process given its security stability implications | 1 |
289,835 | 8,877,120,257 | IssuesEvent | 2019-01-12 21:17:37 | Scr1ptK1tt13s/overdeer-api | https://api.github.com/repos/Scr1ptK1tt13s/overdeer-api | closed | Add pets API | Priority: Low Status: On Hold Type: Enhancement | I tried to add a pet with POST API (as seen in https://docs.scriptkitties.space) but it doesn't work, all I get is `405 Not Allowed`.

| 1.0 | Add pets API - I tried to add a pet with POST API (as seen in https://docs.scriptkitties.space) but it doesn't work, all I get is `405 Not Allowed`.

| priority | add pets api i tried to add a pet with post api as seen in but it doesn t work all i get is not allowed | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.