| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (1 class) | created_at (string, length 19) | repo (string, length 5–112) | repo_url (string, length 34–141) | action (3 classes) | title (string, length 1–855) | labels (string, length 4–721) | body (string, length 1–261k) | index (13 classes) | text_combine (string, length 96–261k) | label (2 classes) | text (string, length 96–240k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
154,121
| 5,909,773,769
|
IssuesEvent
|
2017-05-20 03:12:36
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
closed
|
AttributeError: 'str' object has no attribute 'change_orientation'
|
bug priority:HIGH sct_propseg wontfix
|
data: 20170519_issue1335
~~~
sct_propseg -i mt0.nii.gz -c t2s -qc ~/qc_test1
--
Spinal Cord Toolbox (master/5f1beeea11d952cc36ad0c40b1cebd600e4ee21e)
Running /Users/julien/code/sct/scripts/sct_propseg.py -i mt0.nii.gz -c t2s -qc /Users/julien/qc_test1
Check folder existence...
Detecting the spinal cord using OptiC
isct_propseg -i "/Users/julien/data/temp/mt/mt0.nii.gz" -t t2 -o "./" -verbose -init-centerline ./mt0_centerline_optic.nii.gz -centerline-binary
Initialization - using given centerline
Total propagation length = 110.967 mm
Segmentation finished. To view results, type:
fslview /Users/julien/data/temp/mt/mt0.nii.gz ./mt0_seg.nii.gz &
Check consistency of segmentation...
Create temporary folder...
sct_image -i tmp.segmentation.nii.gz -setorient RPI -o tmp.segmentation_RPI.nii.gz
Running /Users/julien/code/sct/scripts/sct_image.py -i tmp.segmentation.nii.gz -setorient RPI -o tmp.segmentation_RPI.nii.gz
tmp.segmentation.nii.gz
Get dimensions of data...
256 x 256 x 15 x 1
Change orientation...
Generate output files...
WARNING: File tmp.segmentation_RPI.nii.gz already exists. Deleting it.
Created file(s):
--> tmp.segmentation_RPI.nii.gz
sct_image -i tmp.centerline.nii.gz -setorient RPI -o tmp.centerline_RPI.nii.gz
Running /Users/julien/code/sct/scripts/sct_image.py -i tmp.centerline.nii.gz -setorient RPI -o tmp.centerline_RPI.nii.gz
tmp.centerline.nii.gz
Get dimensions of data...
256 x 256 x 15 x 1
Change orientation...
Generate output files...
WARNING: File tmp.centerline_RPI.nii.gz already exists. Deleting it.
Created file(s):
--> tmp.centerline_RPI.nii.gz
Get data dimensions...
sct_image -i tmp.segmentation_RPI_c.nii.gz -setorient RPI -o ../mt0_seg.nii.gz
Running /Users/julien/code/sct/scripts/sct_image.py -i tmp.segmentation_RPI_c.nii.gz -setorient RPI -o ../mt0_seg.nii.gz
tmp.segmentation_RPI_c.nii.gz
Get dimensions of data...
256 x 256 x 15 x 1
Change orientation...
rm -f tmp.segmentation_RPI_c_RPI.nii.gz
Generate output files...
WARNING: File ../mt0_seg.nii.gz already exists. Deleting it.
Created file(s):
--> ../mt0_seg.nii.gz
Remove temporary files...
WARNING: File mt0_seg.nii.gz already exists. Deleting it.
Remove temporary files...
Traceback (most recent call last):
  File "/Users/julien/code/sct/scripts/sct_propseg.py", line 599, in <module>
    test(qcslice.Axial(fname_input_data, fname_seg))
  File "/Users/julien/code/sct/python/lib/python2.7/site-packages/spinalcordtoolbox/reports/slice.py", line 43, in __init__
    self.image.change_orientation('SAL')
AttributeError: 'str' object has no attribute 'change_orientation'
~~~
|
1.0
|
AttributeError: 'str' object has no attribute 'change_orientation' - data: 20170519_issue1335
~~~
sct_propseg -i mt0.nii.gz -c t2s -qc ~/qc_test1
--
Spinal Cord Toolbox (master/5f1beeea11d952cc36ad0c40b1cebd600e4ee21e)
Running /Users/julien/code/sct/scripts/sct_propseg.py -i mt0.nii.gz -c t2s -qc /Users/julien/qc_test1
Check folder existence...
Detecting the spinal cord using OptiC
isct_propseg -i "/Users/julien/data/temp/mt/mt0.nii.gz" -t t2 -o "./" -verbose -init-centerline ./mt0_centerline_optic.nii.gz -centerline-binary
Initialization - using given centerline
Total propagation length = 110.967 mm
Segmentation finished. To view results, type:
fslview /Users/julien/data/temp/mt/mt0.nii.gz ./mt0_seg.nii.gz &
Check consistency of segmentation...
Create temporary folder...
sct_image -i tmp.segmentation.nii.gz -setorient RPI -o tmp.segmentation_RPI.nii.gz
Running /Users/julien/code/sct/scripts/sct_image.py -i tmp.segmentation.nii.gz -setorient RPI -o tmp.segmentation_RPI.nii.gz
tmp.segmentation.nii.gz
Get dimensions of data...
256 x 256 x 15 x 1
Change orientation...
Generate output files...
WARNING: File tmp.segmentation_RPI.nii.gz already exists. Deleting it.
Created file(s):
--> tmp.segmentation_RPI.nii.gz
sct_image -i tmp.centerline.nii.gz -setorient RPI -o tmp.centerline_RPI.nii.gz
Running /Users/julien/code/sct/scripts/sct_image.py -i tmp.centerline.nii.gz -setorient RPI -o tmp.centerline_RPI.nii.gz
tmp.centerline.nii.gz
Get dimensions of data...
256 x 256 x 15 x 1
Change orientation...
Generate output files...
WARNING: File tmp.centerline_RPI.nii.gz already exists. Deleting it.
Created file(s):
--> tmp.centerline_RPI.nii.gz
Get data dimensions...
sct_image -i tmp.segmentation_RPI_c.nii.gz -setorient RPI -o ../mt0_seg.nii.gz
Running /Users/julien/code/sct/scripts/sct_image.py -i tmp.segmentation_RPI_c.nii.gz -setorient RPI -o ../mt0_seg.nii.gz
tmp.segmentation_RPI_c.nii.gz
Get dimensions of data...
256 x 256 x 15 x 1
Change orientation...
rm -f tmp.segmentation_RPI_c_RPI.nii.gz
Generate output files...
WARNING: File ../mt0_seg.nii.gz already exists. Deleting it.
Created file(s):
--> ../mt0_seg.nii.gz
Remove temporary files...
WARNING: File mt0_seg.nii.gz already exists. Deleting it.
Remove temporary files...
Traceback (most recent call last):
  File "/Users/julien/code/sct/scripts/sct_propseg.py", line 599, in <module>
    test(qcslice.Axial(fname_input_data, fname_seg))
  File "/Users/julien/code/sct/python/lib/python2.7/site-packages/spinalcordtoolbox/reports/slice.py", line 43, in __init__
    self.image.change_orientation('SAL')
AttributeError: 'str' object has no attribute 'change_orientation'
~~~
|
priority
|
attributeerror str object has no attribute change orientation data sct propseg i nii gz c qc qc spinal cord toolbox master running users julien code sct scripts sct propseg py i nii gz c qc users julien qc check folder existence detecting the spinal cord using optic isct propseg i users julien data temp mt nii gz t o verbose init centerline centerline optic nii gz centerline binary initialization using given centerline total propagation length mm segmentation finished to view results type fslview users julien data temp mt nii gz seg nii gz check consistency of segmentation create temporary folder sct image i tmp segmentation nii gz setorient rpi o tmp segmentation rpi nii gz running users julien code sct scripts sct image py i tmp segmentation nii gz setorient rpi o tmp segmentation rpi nii gz tmp segmentation nii gz get dimensions of data x x x change orientation generate output files warning file tmp segmentation rpi nii gz already exists deleting it created file s tmp segmentation rpi nii gz sct image i tmp centerline nii gz setorient rpi o tmp centerline rpi nii gz running users julien code sct scripts sct image py i tmp centerline nii gz setorient rpi o tmp centerline rpi nii gz tmp centerline nii gz get dimensions of data x x x change orientation generate output files warning file tmp centerline rpi nii gz already exists deleting it created file s tmp centerline rpi nii gz get data dimensions sct image i tmp segmentation rpi c nii gz setorient rpi o seg nii gz running users julien code sct scripts sct image py i tmp segmentation rpi c nii gz setorient rpi o seg nii gz tmp segmentation rpi c nii gz get dimensions of data x x x change orientation rm f tmp segmentation rpi c rpi nii gz generate output files warning file seg nii gz already exists deleting it created file s seg nii gz remove temporary files warning file seg nii gz already exists deleting it remove temporary files traceback most recent call last file users julien code sct scripts sct propseg py 
line in test qcslice axial fname input data fname seg file users julien code sct python lib site packages spinalcordtoolbox reports slice py line in init self image change orientation sal attributeerror str object has no attribute change orientation
| 1
|
542,991
| 15,875,833,937
|
IssuesEvent
|
2021-04-09 07:34:14
|
Project-Books/book-project
|
https://api.github.com/repos/Project-Books/book-project
|
closed
|
Fix transient object exception
|
bug hibernate high-priority
|
**Describe the bug**
The test `createJsonRepresentationForBooks()` in `BookServiceTest.java` is failing because a 'tag' is not saved before flushing.
**To Reproduce**
Steps to reproduce the behaviour:
1. Run the `createJsonRepresentationForBooks() test
**Expected behaviour**
Test should pass.
**Additional context**
Stack trace: https://gist.github.com/knjk04/f0fb2886214dfeb8c3139943bda80189
Branch: `0.2.0`. In your pull request, set the destination branch as `0.2.0`.
|
1.0
|
Fix transient object exception - **Describe the bug**
The test `createJsonRepresentationForBooks()` in `BookServiceTest.java` is failing because a 'tag' is not saved before flushing.
**To Reproduce**
Steps to reproduce the behaviour:
1. Run the `createJsonRepresentationForBooks() test
**Expected behaviour**
Test should pass.
**Additional context**
Stack trace: https://gist.github.com/knjk04/f0fb2886214dfeb8c3139943bda80189
Branch: `0.2.0`. In your pull request, set the destination branch as `0.2.0`.
|
priority
|
fix transient object exception describe the bug the test createjsonrepresentationforbooks in bookservicetest java is failing because a tag is not saved before flushing to reproduce steps to reproduce the behaviour run the createjsonrepresentationforbooks test expected behaviour test should pass additional context stack trace branch in your pull request set the destination branch as
| 1
|
617,480
| 19,358,763,784
|
IssuesEvent
|
2021-12-16 00:55:43
|
UC-Davis-molecular-computing/scadnano
|
https://api.github.com/repos/UC-Davis-molecular-computing/scadnano
|
closed
|
make scadnano pitch angle agree with oxDNA
|
invalid high priority closed in dev
|
**Note:** This is a breaking change since it will change how oxDNA output works.
Currently, scadnano interprets the pitch angle as a clockwise rotation in the Y-Z plane, following SVG convention. The following design has a helix group (containing helix 1) with pitch=45 (clockwise, away from the single strand on helix 0):

However, exporting to oxDNA rotates the helix in the opposite direction (counter-clockwise, towards the single strand on helix 0):

The two conventions should match, either by rotating counter-clockwise in the scadnano main view, or by changing the oxDNA export code to rotate clockwise in the Y-Z plane. **UPDATE:** We changed the oxDNA export to rotate clockwise in the Y-Z plane.
|
1.0
|
make scadnano pitch angle agree with oxDNA - **Note:** This is a breaking change since it will change how oxDNA output works.
Currently, scadnano interprets the pitch angle as a clockwise rotation in the Y-Z plane, following SVG convention. The following design has a helix group (containing helix 1) with pitch=45 (clockwise, away from the single strand on helix 0):

However, exporting to oxDNA rotates the helix in the opposite direction (counter-clockwise, towards the single strand on helix 0):

The two conventions should match, either by rotating counter-clockwise in the scadnano main view, or by changing the oxDNA export code to rotate clockwise in the Y-Z plane. **UPDATE:** We changed the oxDNA export to rotate clockwise in the Y-Z plane.
|
priority
|
make scadnano pitch angle agree with oxdna note this is a breaking change since it will change how oxdna output works currently scadnano interprets the pitch angle as a clockwise rotation in the y z plane following svg convention the following design has a helix group containing helix with pitch clockwise away from the single strand on helix however exporting to oxdna rotates the helix in the opposite direction counter clockwise towards the single strand on helix the two conventions should match either by rotating counter clockwise in the scadnano main view or by changing the oxdna export code to rotate clockwise in the y z plane update we changed the oxdna export to rotate clockwise in the y z plane
| 1
|
800,861
| 28,436,057,293
|
IssuesEvent
|
2023-04-15 10:21:26
|
svthalia/concrexit
|
https://api.github.com/repos/svthalia/concrexit
|
closed
|
Thabloid preview is broken
|
priority: high thabloid bug request-for-comments
|
### Describe the bug
The preview of thabloids does not work since S3:
- https://thalia.nu/members/thabloid/pages/2022/3/ returns `[]` (see https://github.com/svthalia/concrexit/blob/ce784be158c2e26afa9d389d67065db1cb1a716c/website/thabloid/models.py#L82)
- The 'pages' idea of thabloid is in general pretty broken. It seems like pages are getting created when requesting https://thalia.nu/members/thabloid/ because it's incredibly slow.
### How to reproduce
Open https://thalia.nu/members/thabloid/.
It's probably a good idea to rework the thabloid saving logic quite thoroughly. It might be good to add a `ThabloidPage` model just to keep track of the files that are being created more clearly.
The hacky ghostscript used to thumbnail the pages is quite ugly anyway. So personally I wouldn't mind dropping the page viewer alltogether. Then we still need to get the frontpages somehow for the cover, but it would be great if we can get rid of ghostscript entirely.
|
1.0
|
Thabloid preview is broken - ### Describe the bug
The preview of thabloids does not work since S3:
- https://thalia.nu/members/thabloid/pages/2022/3/ returns `[]` (see https://github.com/svthalia/concrexit/blob/ce784be158c2e26afa9d389d67065db1cb1a716c/website/thabloid/models.py#L82)
- The 'pages' idea of thabloid is in general pretty broken. It seems like pages are getting created when requesting https://thalia.nu/members/thabloid/ because it's incredibly slow.
### How to reproduce
Open https://thalia.nu/members/thabloid/.
It's probably a good idea to rework the thabloid saving logic quite thoroughly. It might be good to add a `ThabloidPage` model just to keep track of the files that are being created more clearly.
The hacky ghostscript used to thumbnail the pages is quite ugly anyway. So personally I wouldn't mind dropping the page viewer alltogether. Then we still need to get the frontpages somehow for the cover, but it would be great if we can get rid of ghostscript entirely.
|
priority
|
thabloid preview is broken describe the bug the preview of thabloids does not work since returns see the pages idea of thabloid is in general pretty broken it seems like pages are getting created when requesting because it s incredibly slow how to reproduce open it s probably a good idea to rework the thabloid saving logic quite thoroughly it might be good to add a thabloidpage model just to keep track of the files that are being created more clearly the hacky ghostscript used to thumbnail the pages is quite ugly anyway so personally i wouldn t mind dropping the page viewer alltogether then we still need to get the frontpages somehow for the cover but it would be great if we can get rid of ghostscript entirely
| 1
|
53,833
| 3,051,658,058
|
IssuesEvent
|
2015-08-12 10:00:50
|
Metaswitch/sprout
|
https://api.github.com/repos/Metaswitch/sprout
|
closed
|
Assert in pjsip timer when connection fails to SIP peer.
|
bug high-priority
|
See lots of log spam like the following when running stress.
```
25-06-2015 18:03:07.317 UTC Error pjsip: Assert failed: ../src/pj/timer.c:492 entry->_timer_id < 1
25-06-2015 18:03:07.317 UTC Error pjsip: tcpc0x7f85748f TCP connect() error: Connection refused [code=120111]
```
I suspect this is related to https://github.com/Metaswitch/pjsip-upstream/pull/35, but I can't really see how. I suspect the next step will be to get the stack when we hit this.
|
1.0
|
Assert in pjsip timer when connection fails to SIP peer. - See lots of log spam like the following when running stress.
```
25-06-2015 18:03:07.317 UTC Error pjsip: Assert failed: ../src/pj/timer.c:492 entry->_timer_id < 1
25-06-2015 18:03:07.317 UTC Error pjsip: tcpc0x7f85748f TCP connect() error: Connection refused [code=120111]
```
I suspect this is related to https://github.com/Metaswitch/pjsip-upstream/pull/35, but I can't really see how. I suspect the next step will be to get the stack when we hit this.
|
priority
|
assert in pjsip timer when connection fails to sip peer see lots of log spam like the following when running stress utc error pjsip assert failed src pj timer c entry timer id utc error pjsip tcp connect error connection refused i suspect this is related to but i can t really see how i suspect the next step will be to get the stack when we hit this
| 1
|
304,220
| 9,328,953,474
|
IssuesEvent
|
2019-03-28 00:08:18
|
Wraithaven/WraithEngine3
|
https://api.github.com/repos/Wraithaven/WraithEngine3
|
closed
|
Packet Protocol Encryption
|
enhancement high priority security
|
**Is your feature request related to a problem? Please describe.**
With the current developed implementation of the packet protocol for server networking, information is sent over the network unencrypted. This is a huge issue for certain bits of information being sent. This is highly vulnerable to cracks, information theft, and information modification. This is a huge issue and needs to be addressed. It can also make it extremely easy for users to fake the identity of other users.
**Describe the solution you'd like**
Packets should be encrypted before being sent over the network. This also ties into user authentication, which is highly related. Packets should be read by the server and the client, no one else.
**Steps to Solve**
An official authentication server should be set up to allow users to log in and have their identity verified. Afterward, a secure connection must be established between the user server and the client to allow for packets to be sent through.
|
1.0
|
Packet Protocol Encryption - **Is your feature request related to a problem? Please describe.**
With the current developed implementation of the packet protocol for server networking, information is sent over the network unencrypted. This is a huge issue for certain bits of information being sent. This is highly vulnerable to cracks, information theft, and information modification. This is a huge issue and needs to be addressed. It can also make it extremely easy for users to fake the identity of other users.
**Describe the solution you'd like**
Packets should be encrypted before being sent over the network. This also ties into user authentication, which is highly related. Packets should be read by the server and the client, no one else.
**Steps to Solve**
An official authentication server should be set up to allow users to log in and have their identity verified. Afterward, a secure connection must be established between the user server and the client to allow for packets to be sent through.
|
priority
|
packet protocol encryption is your feature request related to a problem please describe with the current developed implementation of the packet protocol for server networking information is sent over the network unencrypted this is a huge issue for certain bits of information being sent this is highly vulnerable to cracks information theft and information modification this is a huge issue and needs to be addressed it can also make it extremely easy for users to fake the identity of other users describe the solution you d like packets should be encrypted before being sent over the network this also ties into user authentication which is highly related packets should be read by the server and the client no one else steps to solve an official authentication server should be set up to allow users to log in and have their identity verified afterward a secure connection must be established between the user server and the client to allow for packets to be sent through
| 1
|
605,752
| 18,740,168,992
|
IssuesEvent
|
2021-11-04 12:44:34
|
vignetteapp/vignette
|
https://api.github.com/repos/vignetteapp/vignette
|
closed
|
Live2D binaries are not included by default in Portable Distributions.
|
bug priority:high
|
For some reason we are no longer supplying Live2DCubism with our ZIP files. This should be included regardless since we expect the user downloading this ZIPs to have them immediately. Did something change during the restructuring?
|
1.0
|
Live2D binaries are not included by default in Portable Distributions. - For some reason we are no longer supplying Live2DCubism with our ZIP files. This should be included regardless since we expect the user downloading this ZIPs to have them immediately. Did something change during the restructuring?
|
priority
|
binaries are not included by default in portable distributions for some reason we are no longer supplying with our zip files this should be included regardless since we expect the user downloading this zips to have them immediately did something change during the restructuring
| 1
|
734,977
| 25,372,967,931
|
IssuesEvent
|
2022-11-21 12:00:30
|
CLOSER-Cohorts/archivist
|
https://api.github.com/repos/CLOSER-Cohorts/archivist
|
closed
|
REACT: variables.txt is not loading correctly
|
bug High priority
|
e.g. https://closer-archivist-staging.herokuapp.com/datasets/734/variables.txt?token=eyJhbGciOiJIUzI1NiJ9.eyJpZCI6NjcsImFwaV9rZXkiOiJjNWFhZTg4MjUzOTM0NjllODU5NSJ9.donSPbb8rCNs5S0tZDujkix-awvn_qaDj9ZW2aFiqu4
It should look something like this

But it looks like this

I can get the variables from the tv.txt and the backend so not high priority.
|
1.0
|
REACT: variables.txt is not loading correctly - e.g. https://closer-archivist-staging.herokuapp.com/datasets/734/variables.txt?token=eyJhbGciOiJIUzI1NiJ9.eyJpZCI6NjcsImFwaV9rZXkiOiJjNWFhZTg4MjUzOTM0NjllODU5NSJ9.donSPbb8rCNs5S0tZDujkix-awvn_qaDj9ZW2aFiqu4
It should look something like this

But it looks like this

I can get the variables from the tv.txt and the backend so not high priority.
|
priority
|
react variables txt is not loading correctly e g it should look something like this but it looks like this i can get the variables from the tv txt and the backend so not high priority
| 1
|
616,596
| 19,306,961,391
|
IssuesEvent
|
2021-12-13 12:35:55
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Parser does not support nested ternary expressions
|
Type/Bug Priority/High Team/CompilerFE Area/Parser Error/TypeK Lang/Expressions/ConditionalExpr
|
**Description:**
$title.
**Steps to reproduce:**
```ballerina
boolean cond = true;
int x5 = 1;
int y5 = 10;
int b11 = cond? cond? cond? y5 : x5 : x5 : x5; // not support
int b12 = cond? cond? (cond? y5 : x5) : x5 : x5; // support
```
**Affected Versions:**
SL Beta3
|
1.0
|
Parser does not support nested ternary expressions - **Description:**
$title.
**Steps to reproduce:**
```ballerina
boolean cond = true;
int x5 = 1;
int y5 = 10;
int b11 = cond? cond? cond? y5 : x5 : x5 : x5; // not support
int b12 = cond? cond? (cond? y5 : x5) : x5 : x5; // support
```
**Affected Versions:**
SL Beta3
|
priority
|
parser does not support nested ternary expressions description title steps to reproduce ballerina boolean cond true int int int cond cond cond not support int cond cond cond support affected versions sl
| 1
|
208,488
| 7,155,060,350
|
IssuesEvent
|
2018-01-26 11:04:54
|
metasfresh/metasfresh-webui-frontend
|
https://api.github.com/repos/metasfresh/metasfresh-webui-frontend
|
closed
|
editable views: sometimes the view's editable fields are PATCHed after the view was DELETEd
|
priority:high type:bug
|
### Is this a bug or feature request?
### What is the current behavior?
#### Which are the steps to reproduce?
* open sales order:
* select some lines and call "Create purchase order"
* press OK
When pressing OK, the frontend shall send back all the changed fields and then it shall delete the view.
Sometimes (NOT always!) this happens in reversed order (i.e. first the view is deleted) which ofc will trigger an issue on backend side.
### What is the expected or desired behavior?
ALWAYS, do the view patching BEFORE deleting the view.
|
1.0
|
editable views: sometimes the view's editable fields are PATCHed after the view was DELETEd - ### Is this a bug or feature request?
### What is the current behavior?
#### Which are the steps to reproduce?
* open sales order:
* select some lines and call "Create purchase order"
* press OK
When pressing OK, the frontend shall send back all the changed fields and then it shall delete the view.
Sometimes (NOT always!) this happens in reversed order (i.e. first the view is deleted) which ofc will trigger an issue on backend side.
### What is the expected or desired behavior?
ALWAYS, do the view patching BEFORE deleting the view.
|
priority
|
editable views sometimes the view s editable fields are patched after the view was deleted is this a bug or feature request what is the current behavior which are the steps to reproduce open sales order select some lines and call create purchase order press ok when pressing ok the frontend shall send back all the changed fields and then it shall delete the view sometimes not always this happens in reversed order i e first the view is deleted which ofc will trigger an issue on backend side what is the expected or desired behavior always do the view patching before deleting the view
| 1
|
788,224
| 27,747,752,386
|
IssuesEvent
|
2023-03-15 18:14:12
|
AY2223S2-CS2103T-T13-2/tp
|
https://api.github.com/repos/AY2223S2-CS2103T-T13-2/tp
|
closed
|
V1.2: Refactor Parser for "Edit", "Delete", "Find", "Help" and their commands
|
type.Enhancement priority.High
|
Change functionality to target our new Recipe model
|
1.0
|
V1.2: Refactor Parser for "Edit", "Delete", "Find", "Help" and their commands - Change functionality to target our new Recipe model
|
priority
|
refactor parser for edit delete find help and their commands change functionality to target our new recipe model
| 1
|
23,670
| 2,660,217,751
|
IssuesEvent
|
2015-03-19 04:02:42
|
cs2103jan2015-t11-2c/main
|
https://api.github.com/repos/cs2103jan2015-t11-2c/main
|
closed
|
UI: Create Main Menu
|
priority.high type.task
|
_From @limtheckyee on March 11, 2015 7:48_
_Copied from original issue: jasqxl/cs2103jan2015-t11-2c#17_
|
1.0
|
UI: Create Main Menu - _From @limtheckyee on March 11, 2015 7:48_
_Copied from original issue: jasqxl/cs2103jan2015-t11-2c#17_
|
priority
|
ui create main menu from limtheckyee on march copied from original issue jasqxl
| 1
|
473,642
| 13,645,366,187
|
IssuesEvent
|
2020-09-25 20:38:52
|
spacetelescope/mirage
|
https://api.github.com/repos/spacetelescope/mirage
|
closed
|
Linearized darks for the 2 FGS detectors are mixed in dark_prep
|
Bug High Priority dark_prep
|
For FGS exposures that contain more than one integration, it is possible for Mirage to use a guider1 dark for a guider2 exposures and vice versa. This is because the list of possible darks is a simple glob of the fits files in the dark directory of MIRAGE_DATA. For exposures with a single integration, a dark from the correct detector will be used because there is logic in the yaml_generator to separate the darks from the two detectors. This logic needs to be added to dark_prep when it creates its lindark_list.
|
1.0
|
Linearized darks for the 2 FGS detectors are mixed in dark_prep - For FGS exposures that contain more than one integration, it is possible for Mirage to use a guider1 dark for a guider2 exposures and vice versa. This is because the list of possible darks is a simple glob of the fits files in the dark directory of MIRAGE_DATA. For exposures with a single integration, a dark from the correct detector will be used because there is logic in the yaml_generator to separate the darks from the two detectors. This logic needs to be added to dark_prep when it creates its lindark_list.
|
priority
|
linearized darks for the fgs detectors are mixed in dark prep for fgs exposures that contain more than one integration it is possible for mirage to use a dark for a exposures and vice versa this is because the list of possible darks is a simple glob of the fits files in the dark directory of mirage data for exposures with a single integration a dark from the correct detector will be used because there is logic in the yaml generator to separate the darks from the two detectors this logic needs to be added to dark prep when it creates its lindark list
| 1
|
65,724
| 3,238,070,655
|
IssuesEvent
|
2015-10-14 14:46:52
|
neuropoly/spinalcordtoolbox_web
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox_web
|
opened
|
create an iSCT version that works with minimal functionalities
|
bug priority: high
|
Current problems are:
- not possible to upload data
|
1.0
|
create an iSCT version that works with minimal functionalities - Current problems are:
- not possible to upload data
|
priority
|
create an isct version that works with minimal functionalities current problems are not possible to upload data
| 1
|
498,276
| 14,404,978,507
|
IssuesEvent
|
2020-12-03 18:01:49
|
DistrictDataLabs/yellowbrick
|
https://api.github.com/repos/DistrictDataLabs/yellowbrick
|
closed
|
Yellowbrick v1.2 Conda Package
|
priority: high type: task
|
Version 1.2 has been released, once it's been verified we need to [upload it to conda](http://www.scikit-yb.org/en/develop/contributing/advanced_development_topics.html#deploying-to-anaconda-cloud)!
|
1.0
|
Yellowbrick v1.2 Conda Package - Version 1.2 has been released, once it's been verified we need to [upload it to conda](http://www.scikit-yb.org/en/develop/contributing/advanced_development_topics.html#deploying-to-anaconda-cloud)!
|
priority
|
yellowbrick conda package version has been released once it s been verified we need to
| 1
|
428,788
| 12,416,805,692
|
IssuesEvent
|
2020-05-22 19:03:23
|
juntofoundation/junto-mobile
|
https://api.github.com/repos/juntofoundation/junto-mobile
|
closed
|
Various notification caching optimizations
|
High Priority
|
So, when you receive a pack request you will get both a connection and a pack request notification in your inbox. What we have set up right now is that responding to one of those requests deletes only that particular notification from cache. However, in the case of the pack request, if I accept it *before* I respond to the connection request, it removes both of them from the API (since joining someone's pack automatically connects you with them). The issue we have then is that the connection request remains in cache as only the pack request in removed.
Would it be better for us to simply have one 'update notification cache' function to use? @orestesgaolin I know you're working on this for removing comment notifications, and I wonder if we should just implement this for all areas that need to update cache to reduce the amount of edge cases we need to account for and ensure we're always up to date with what the API returns. We won't ever have more than 100 notifs so I don't think it would be a big performance issue.
@Nash0x7E2 if this is the route we take then we could just implement this function when responding to requests in the relations drawer + packs requests in the Packs section of the app.
|
1.0
|
Various notification caching optimizations - So, when you receive a pack request you will get both a connection and a pack request notification in your inbox. What we have set up right now is that responding to one of those requests deletes only that particular notification from cache. However, in the case of the pack request, if I accept it *before* I respond to the connection request, it removes both of them from the API (since joining someone's pack automatically connects you with them). The issue we have then is that the connection request remains in cache as only the pack request in removed.
Would it be better for us to simply have one 'update notification cache' function to use? @orestesgaolin I know you're working on this for removing comment notifications, and I wonder if we should just implement this for all areas that need to update cache to reduce the amount of edge cases we need to account for and ensure we're always up to date with what the API returns. We won't ever have more than 100 notifs so I don't think it would be a big performance issue.
@Nash0x7E2 if this is the route we take then we could just implement this function when responding to requests in the relations drawer + packs requests in the Packs section of the app.
|
priority
|
various notification caching optimizations so when you receive a pack request you will get both a connection and a pack request notification in your inbox what we have set up right now is that responding to one of those requests deletes only that particular notification from cache however in the case of the pack request if i accept it before i respond to the connection request it removes both of them from the api since joining someone s pack automatically connects you with them the issue we have then is that the connection request remains in cache as only the pack request in removed would it be better for us to simply have one update notification cache function to use orestesgaolin i know you re working on this for removing comment notifications and i wonder if we should just implement this for all areas that need to update cache to reduce the amount of edge cases we need to account for and ensure we re always up to date with what the api returns we won t ever have more than notifs so i don t think it would be a big performance issue if this is the route we take then we could just implement this function when responding to requests in the relations drawer packs requests in the packs section of the app
| 1
|
167,058
| 6,331,572,780
|
IssuesEvent
|
2017-07-26 10:16:34
|
CraftAcademy/ca_course
|
https://api.github.com/repos/CraftAcademy/ca_course
|
opened
|
Create screencasts for week 5
|
ca-course course material high priority ready
|
We need screencasts on the following topics. Add more if I missed something.
- [ ] Rails routing
- [ ] Active Record basics
- [ ] Rails Helpers
- [ ] Nested Routes (I have a recording of this, will check it out) - Might need to make a short one though.. 🤔
|
1.0
|
Create screencasts for week 5 - We need screencasts on the following topics. Add more if I missed something.
- [ ] Rails routing
- [ ] Active Record basics
- [ ] Rails Helpers
- [ ] Nested Routes (I have a recording of this, will check it out) - Might need to make a short one though.. 🤔
|
priority
|
create screencasts for week we need screencasts on the following topics add more if i missed something rails routing active record basics rails helpers nested routes i have a recording of this will check it out might need to make a short one though 🤔
| 1
|
239,949
| 7,800,186,756
|
IssuesEvent
|
2018-06-09 06:06:52
|
MrBlizzard/RCAdmins-Tracker
|
https://api.github.com/repos/MrBlizzard/RCAdmins-Tracker
|
closed
|
[RCV] Constant/Common RCV Rollbacks
|
awaiting developer bug priority:high
|
RC Vaults don't appear to save on open/close and are subject to rollback without warning.
|
1.0
|
[RCV] Constant/Common RCV Rollbacks - RC Vaults don't appear to save on open/close and are subject to rollback without warning.
|
priority
|
constant common rcv rollbacks rc vaults don t appear to save on open close and are subject to rollback without warning
| 1
|
202,548
| 7,048,884,364
|
IssuesEvent
|
2018-01-02 19:40:06
|
unfoldingWord-dev/translationCore
|
https://api.github.com/repos/unfoldingWord-dev/translationCore
|
closed
|
Need a build with full NT for alignment
|
Epic Priority/High QA/Pass
|
- [ ] The aligners need a stable build that has the whole NT so that they can keep working on aligning the NT.
- [ ] Go back to before the project refactor, create a release branch (0.8.1)
- [ ] Remove the Titus restriction
|
1.0
|
Need a build with full NT for alignment - - [ ] The aligners need a stable build that has the whole NT so that they can keep working on aligning the NT.
- [ ] Go back to before the project refactor, create a release branch (0.8.1)
- [ ] Remove the Titus restriction
|
priority
|
need a build with full nt for alignment the aligners need a stable build that has the whole nt so that they can keep working on aligning the nt go back to before the project refactor create a release branch remove the titus restriction
| 1
|
230,790
| 7,613,982,873
|
IssuesEvent
|
2018-05-01 23:53:23
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
Version 7.4 stagged
|
High Priority
|
Machinist table is not designed so we can't craft things like gearbox. that mean we can't destroy meteor....
|
1.0
|
Version 7.4 stagged - Machinist table is not designed so we can't craft things like gearbox. that mean we can't destroy meteor....
|
priority
|
version stagged machinist table is not designed so we can t craft things like gearbox that mean we can t destroy meteor
| 1
|
49,159
| 3,001,742,965
|
IssuesEvent
|
2015-07-24 13:29:52
|
centreon/centreon
|
https://api.github.com/repos/centreon/centreon
|
closed
|
[Hosts] Can't remove last Template
|
Category: Centreon - Configuration Component: Affect Version Component: Resolution Priority: High Status: Rejected Tracker: Bug
|
---
Author Name: **Florian Asche** (Florian Asche)
Original Redmine Issue: 5368, https://forge.centreon.com/issues/5368
Original Date: 2014-03-15
Original Assignee: remi werquin
---
Hello,
if i want to delete the last template that is associated with a host, centreon didnt delete it when saving.
|
1.0
|
[Hosts] Can't remove last Template - ---
Author Name: **Florian Asche** (Florian Asche)
Original Redmine Issue: 5368, https://forge.centreon.com/issues/5368
Original Date: 2014-03-15
Original Assignee: remi werquin
---
Hello,
if i want to delete the last template that is associated with a host, centreon didnt delete it when saving.
|
priority
|
can t remove last template author name florian asche florian asche original redmine issue original date original assignee remi werquin hello if i want to delete the last template that is associated with a host centreon didnt delete it when saving
| 1
|
112,475
| 4,533,240,301
|
IssuesEvent
|
2016-09-08 10:50:00
|
japanesemediamanager/MyAnime3
|
https://api.github.com/repos/japanesemediamanager/MyAnime3
|
closed
|
Resume video file doesn't work, plays from beginning always
|
Bug - High Priority
|
**Reported by ignaciogarciaaguirre, Oct 22, 2013**
*What steps will reproduce the problem?*
1. Play video.
2. Stop video.
3. Replay video.
*What is the expected output? What do you see instead?*
Option to resume or start from the beginning.
Starts from the beginning always.
*What version of the product are you using? On what operating system?*
MA3 3.1.32.0
MediaPortal 1.5.0 Release
Titan Skin
```
00000005 - 22-10-2013 12:26:43 - ImageLoad: Finished
00000001 - 22-10-2013 12:26:43 - SetFacade List Mode: Episode
00000001 - 22-10-2013 12:26:43 - SetFacade: Filters: False - Groups: False - Series: False - Episodes: True
00000024 - 22-10-2013 12:26:43 - GOT FANART details in: 5.0003 ms (.hack//G.U. Returner)
00000024 - 22-10-2013 12:26:43 - LOADING FANART: .hack//G.U. Returner - C:\ProgramData\Team MediaPortal\MediaPortal\Thumbs\Anime3\TvDB\fanart\original\79099-10.jpg
00000024 - 22-10-2013 12:26:43 - Report ItemToAutoSelect: 0
00000024 - 22-10-2013 12:26:43 - SetFacade List Mode: Episode
00000024 - 22-10-2013 12:26:43 - SetFacade: Filters: False - Groups: False - Series: False - Episodes: True
00000001 - 22-10-2013 12:26:43 - [Anime2:\anime3_DVDDIVX.png]
00000001 - 22-10-2013 12:26:51 - Selected to play: 1 - OVA
00000001 - 22-10-2013 12:26:51 - Filetoplay: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:26:51 - Getting time stopped for : Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:26:51 - Time stopped for : Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - 0
00000001 - 22-10-2013 12:26:52 - PlayBackOpIsOfConcern: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - Video - MyAnimePlugin3.ViewModel.AnimeEpisodeVM
00000001 - 22-10-2013 12:26:52 - Playback started for: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:27:26 - OnPlayBackStopped: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - 33 - Video
00000001 - 22-10-2013 12:27:26 - PlayBackOpIsOfConcern: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - Video - MyAnimePlugin3.ViewModel.AnimeEpisodeVM
00000001 - 22-10-2013 12:27:26 - Playback stopped for: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:27:26 - Checking for set watched
00000001 - 22-10-2013 12:27:26 - Starting page load...
00000001 - 22-10-2013 12:27:26 - Adding hook to hook event handler
00000001 - 22-10-2013 12:27:26 - SetFacade List Mode: Episode
00000001 - 22-10-2013 12:27:26 - SetFacade: Filters: False - Groups: False - Series: False - Episodes: True
00000001 - 22-10-2013 12:27:26 - C:\ProgramData\Team MediaPortal\MediaPortal\Skin\Titan\Anime3_SkinSettings.xml
00000001 - 22-10-2013 12:27:26 - Loading Logos
00000001 - 22-10-2013 12:27:26 - Thumbs Setting Folder: C:\ProgramData\Team MediaPortal\MediaPortal\Thumbs
00000001 - 22-10-2013 12:27:26 - SetFacade List Mode: Episode
00000001 - 22-10-2013 12:27:26 - SetFacade: Filters: False - Groups: False - Series: False - Episodes: True
00000024 - 22-10-2013 12:27:26 - GOT FANART details in: 4.0002 ms (.hack//G.U. Returner)
00000024 - 22-10-2013 12:27:26 - LOADING FANART: .hack//G.U. Returner - C:\ProgramData\Team MediaPortal\MediaPortal\Thumbs\Anime3\TvDB\fanart\original\79099-5.jpg
00000024 - 22-10-2013 12:27:26 - Report ItemToAutoSelect: 0
00000024 - 22-10-2013 12:27:26 - SetFacade List Mode: Episode
00000024 - 22-10-2013 12:27:26 - SetFacade: Filters: False - Groups: False - Series: False - Episodes: True
00000001 - 22-10-2013 12:27:32 - Selected to play: 1 - OVA
00000001 - 22-10-2013 12:27:32 - Filetoplay: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:27:32 - Getting time stopped for : Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:27:32 - Time stopped for : Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - 0
00000001 - 22-10-2013 12:27:32 - PlayBackOpIsOfConcern: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - Video - MyAnimePlugin3.ViewModel.AnimeEpisodeVM
00000001 - 22-10-2013 12:27:32 - Playback started for: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:27:36 - OnPlayBackStopped: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - 3 - Video
00000001 - 22-10-2013 12:27:36 - PlayBackOpIsOfConcern: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - Video - MyAnimePlugin3.ViewModel.AnimeEpisodeVM
00000001 - 22-10-2013 12:27:36 - Playback stopped for: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
```
|
1.0
|
Resume video file doesn't work, plays from beginning always - **Reported by ignaciogarciaaguirre, Oct 22, 2013**
*What steps will reproduce the problem?*
1. Play video.
2. Stop video.
3. Replay video.
*What is the expected output? What do you see instead?*
Option to resume or start from the beginning.
Starts from the beginning always.
*What version of the product are you using? On what operating system?*
MA3 3.1.32.0
MediaPortal 1.5.0 Release
Titan Skin
```
00000005 - 22-10-2013 12:26:43 - ImageLoad: Finished
00000001 - 22-10-2013 12:26:43 - SetFacade List Mode: Episode
00000001 - 22-10-2013 12:26:43 - SetFacade: Filters: False - Groups: False - Series: False - Episodes: True
00000024 - 22-10-2013 12:26:43 - GOT FANART details in: 5.0003 ms (.hack//G.U. Returner)
00000024 - 22-10-2013 12:26:43 - LOADING FANART: .hack//G.U. Returner - C:\ProgramData\Team MediaPortal\MediaPortal\Thumbs\Anime3\TvDB\fanart\original\79099-10.jpg
00000024 - 22-10-2013 12:26:43 - Report ItemToAutoSelect: 0
00000024 - 22-10-2013 12:26:43 - SetFacade List Mode: Episode
00000024 - 22-10-2013 12:26:43 - SetFacade: Filters: False - Groups: False - Series: False - Episodes: True
00000001 - 22-10-2013 12:26:43 - [Anime2:\anime3_DVDDIVX.png]
00000001 - 22-10-2013 12:26:51 - Selected to play: 1 - OVA
00000001 - 22-10-2013 12:26:51 - Filetoplay: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:26:51 - Getting time stopped for : Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:26:51 - Time stopped for : Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - 0
00000001 - 22-10-2013 12:26:52 - PlayBackOpIsOfConcern: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - Video - MyAnimePlugin3.ViewModel.AnimeEpisodeVM
00000001 - 22-10-2013 12:26:52 - Playback started for: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:27:26 - OnPlayBackStopped: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - 33 - Video
00000001 - 22-10-2013 12:27:26 - PlayBackOpIsOfConcern: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - Video - MyAnimePlugin3.ViewModel.AnimeEpisodeVM
00000001 - 22-10-2013 12:27:26 - Playback stopped for: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:27:26 - Checking for set watched
00000001 - 22-10-2013 12:27:26 - Starting page load...
00000001 - 22-10-2013 12:27:26 - Adding hook to hook event handler
00000001 - 22-10-2013 12:27:26 - SetFacade List Mode: Episode
00000001 - 22-10-2013 12:27:26 - SetFacade: Filters: False - Groups: False - Series: False - Episodes: True
00000001 - 22-10-2013 12:27:26 - C:\ProgramData\Team MediaPortal\MediaPortal\Skin\Titan\Anime3_SkinSettings.xml
00000001 - 22-10-2013 12:27:26 - Loading Logos
00000001 - 22-10-2013 12:27:26 - Thumbs Setting Folder: C:\ProgramData\Team MediaPortal\MediaPortal\Thumbs
00000001 - 22-10-2013 12:27:26 - SetFacade List Mode: Episode
00000001 - 22-10-2013 12:27:26 - SetFacade: Filters: False - Groups: False - Series: False - Episodes: True
00000024 - 22-10-2013 12:27:26 - GOT FANART details in: 4.0002 ms (.hack//G.U. Returner)
00000024 - 22-10-2013 12:27:26 - LOADING FANART: .hack//G.U. Returner - C:\ProgramData\Team MediaPortal\MediaPortal\Thumbs\Anime3\TvDB\fanart\original\79099-5.jpg
00000024 - 22-10-2013 12:27:26 - Report ItemToAutoSelect: 0
00000024 - 22-10-2013 12:27:26 - SetFacade List Mode: Episode
00000024 - 22-10-2013 12:27:26 - SetFacade: Filters: False - Groups: False - Series: False - Episodes: True
00000001 - 22-10-2013 12:27:32 - Selected to play: 1 - OVA
00000001 - 22-10-2013 12:27:32 - Filetoplay: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:27:32 - Getting time stopped for : Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:27:32 - Time stopped for : Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - 0
00000001 - 22-10-2013 12:27:32 - PlayBackOpIsOfConcern: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - Video - MyAnimePlugin3.ViewModel.AnimeEpisodeVM
00000001 - 22-10-2013 12:27:32 - Playback started for: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
00000001 - 22-10-2013 12:27:36 - OnPlayBackStopped: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - 3 - Video
00000001 - 22-10-2013 12:27:36 - PlayBackOpIsOfConcern: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv - Video - MyAnimePlugin3.ViewModel.AnimeEpisodeVM
00000001 - 22-10-2013 12:27:36 - Playback stopped for: Y:\anime\.hackG.U. Returner\[LuPerry]_dot_hack_GU_-_returner[496A700E].mkv
```
|
priority
|
resume video file doesn t work plays from beginning always reported by ignaciogarciaaguirre oct what steps will reproduce the problem play video stop video replay video what is the expected output what do you see instead option to resume or start from the beginning starts from the beginning always what version of the product are you using on what operating system mediaportal release titan skin imageload finished setfacade list mode episode setfacade filters false groups false series false episodes true got fanart details in ms hack g u returner loading fanart hack g u returner c programdata team mediaportal mediaportal thumbs tvdb fanart original jpg report itemtoautoselect setfacade list mode episode setfacade filters false groups false series false episodes true selected to play ova filetoplay y anime hackg u returner dot hack gu returner mkv getting time stopped for y anime hackg u returner dot hack gu returner mkv time stopped for y anime hackg u returner dot hack gu returner mkv playbackopisofconcern y anime hackg u returner dot hack gu returner mkv video viewmodel animeepisodevm playback started for y anime hackg u returner dot hack gu returner mkv onplaybackstopped y anime hackg u returner dot hack gu returner mkv video playbackopisofconcern y anime hackg u returner dot hack gu returner mkv video viewmodel animeepisodevm playback stopped for y anime hackg u returner dot hack gu returner mkv checking for set watched starting page load adding hook to hook event handler setfacade list mode episode setfacade filters false groups false series false episodes true c programdata team mediaportal mediaportal skin titan skinsettings xml loading logos thumbs setting folder c programdata team mediaportal mediaportal thumbs setfacade list mode episode setfacade filters false groups false series false episodes true got fanart details in ms hack g u returner loading fanart hack g u returner c programdata team mediaportal mediaportal thumbs tvdb fanart original jpg report itemtoautoselect setfacade list mode episode setfacade filters false groups false series false episodes true selected to play ova filetoplay y anime hackg u returner dot hack gu returner mkv getting time stopped for y anime hackg u returner dot hack gu returner mkv time stopped for y anime hackg u returner dot hack gu returner mkv playbackopisofconcern y anime hackg u returner dot hack gu returner mkv video viewmodel animeepisodevm playback started for y anime hackg u returner dot hack gu returner mkv onplaybackstopped y anime hackg u returner dot hack gu returner mkv video playbackopisofconcern y anime hackg u returner dot hack gu returner mkv video viewmodel animeepisodevm playback stopped for y anime hackg u returner dot hack gu returner mkv
| 1
|
650,972
| 21,445,652,377
|
IssuesEvent
|
2022-04-25 05:54:50
|
rosekamallove/youtemy
|
https://api.github.com/repos/rosekamallove/youtemy
|
opened
|
Create: Landing Page
|
enhancement high-priority
|
Currently, when a user is not logged in, it goes to the landing page which doesn't have anything on it other than the SignIn and Contribute Button.
We need to create a beautiful Landing Page containing a Call to Action and possibly animations explaining the various features in our web app.
|
1.0
|
Create: Landing Page - Currently, when a user is not logged in, it goes to the landing page which doesn't have anything on it other than the SignIn and Contribute Button.
We need to create a beautiful Landing Page containing a Call to Action and possibly animations explaining the various features in our web app.
|
priority
|
create landing page currently when a user is not logged in it goes to the landing page which doesn t have anything on it other than the signin and contribute button we need to create a beautiful landing page containing a call to action and possibly animations explaining the various features in our web app
| 1
|
479,327
| 13,794,788,946
|
IssuesEvent
|
2020-10-09 16:53:14
|
ooni/explorer
|
https://api.github.com/repos/ooni/explorer
|
closed
|
Some measurements display the ASN in the report ID / OONI Explorer measurement URLs, but not in the raw data
|
bug effort/XL interrupt priority/high
|
In some measurements (for example in WhatsApp and Telegram test results) the ASN is annotated as AS0, but it is possible to retrieve the actual ASN from OONI Explorer measurement URLs.
For example, in this measurement I can see from the measurement URL that the ASN is AS57293 (even though it is annotated as AS0 in the raw data): https://explorer.ooni.org/measurement/20200927T053618Z_AS57293_SZU6APrRIoL4pcWrIwmACx7ewQRL5pCqy5tEztueu4Tc5THkeX
Why does the measurement say AS0, while an ASN is displayed in the measurement URL?
|
1.0
|
Some measurements display the ASN in the report ID / OONI Explorer measurement URLs, but not in the raw data - In some measurements (for example in WhatsApp and Telegram test results) the ASN is annotated as AS0, but it is possible to retrieve the actual ASN from OONI Explorer measurement URLs.
For example, in this measurement I can see from the measurement URL that the ASN is AS57293 (even though it is annotated as AS0 in the raw data): https://explorer.ooni.org/measurement/20200927T053618Z_AS57293_SZU6APrRIoL4pcWrIwmACx7ewQRL5pCqy5tEztueu4Tc5THkeX
Why does the measurement say AS0, while an ASN is displayed in the measurement URL?
|
priority
|
some measurements display the asn in the report id ooni explorer measurement urls but not in the raw data in some measurements for example in whatsapp and telegram test results the asn is annotated as but it is possible to retrieve the actual asn from ooni explorer measurement urls for example in this measurement i can see from the measurement url that the asn is even though it is annotated as in the raw data why does the measurement say while an asn is displayed in the measurement url
| 1
|
510,275
| 14,787,764,872
|
IssuesEvent
|
2021-01-12 08:11:37
|
Disfactory/Disfactory
|
https://api.github.com/repos/Disfactory/Disfactory
|
closed
|
後台輸出各縣市各display status的數量
|
Backend high priority
|
**Is your feature request related to a problem? Please describe.**
每個月地公會發月報告知大家各縣市進度,所以需要輸出這個數量
**Describe the solution you'd like**
可以輸出一個csv,以縣市為經,以display status為緯,
<img width="546" alt="截圖 2020-12-09 下午8 43 02" src="https://user-images.githubusercontent.com/60970217/101631353-2a3ee080-3a5f-11eb-955a-94a13ae8af17.png">
**Describe alternatives you've considered**
自己一個一個算QQ
|
1.0
|
後台輸出各縣市各display status的數量 - **Is your feature request related to a problem? Please describe.**
每個月地公會發月報告知大家各縣市進度,所以需要輸出這個數量
**Describe the solution you'd like**
可以輸出一個csv,以縣市為經,以display status為緯,
<img width="546" alt="截圖 2020-12-09 下午8 43 02" src="https://user-images.githubusercontent.com/60970217/101631353-2a3ee080-3a5f-11eb-955a-94a13ae8af17.png">
**Describe alternatives you've considered**
自己一個一個算QQ
|
priority
|
後台輸出各縣市各display status的數量 is your feature request related to a problem please describe 每個月地公會發月報告知大家各縣市進度,所以需要輸出這個數量 describe the solution you d like 可以輸出一個csv,以縣市為經,以display status為緯, img width alt 截圖 src describe alternatives you ve considered 自己一個一個算qq
| 1
|
29,401
| 2,715,484,832
|
IssuesEvent
|
2015-04-10 13:28:55
|
OpenConceptLab/oclapi
|
https://api.github.com/repos/OpenConceptLab/oclapi
|
opened
|
Setup celery environment on dev server / fabric scripts
|
enhancement high-priority
|
Celery scripts will be used to run exports as a background process.
Confirm design with Aaron.
|
1.0
|
Setup celery environment on dev server / fabric scripts - Celery scripts will be used to run exports as a background process.
Confirm design with Aaron.
|
priority
|
setup celery environment on dev server fabric scripts celery scripts will be used to run exports as a background process confirm design with aaron
| 1
|
720,359
| 24,789,410,327
|
IssuesEvent
|
2022-10-24 12:39:18
|
KinsonDigital/Velaptor
|
https://api.github.com/repos/KinsonDigital/Velaptor
|
closed
|
🚧Build app settings system
|
✨new feature high priority preview
|
### I have done the items below . . .
- [X] I have updated the title by replacing the '**_<title_**>' section.
### Description
Build an application settings system. This will be used to turn on and off various application behaviors. The system should automatically check if app settings exist. If the application settings do not exist, the file will be auto-created and all application settings with default settings will be added to the file.
This should be set up as a service with the name `AppSettingsService` with an interface and full unit testing.
The settings will be simple key-value pairs in JSON format.
Create 2 app settings for the width and height of the application. Refactor/add code to make use of these 2 settings.
Add a method overload to the `App` class of the method `CreateWindow()`. This class will create an instance of the singleton app settings object and check if the settings exist. If the settings do not exist, use a default width and height of **1500 x 800**.
The default width and height should be constants in the `App` class.
Change the method used in the `Program.cs` file for the **VelaptorTesting** project to use the new `CreateWindow()` method overload.
### Acceptance Criteria
**This issue is finished when:**
- [x] The settings should be saved in JSON format
- [x] The check for file existence and creation with default settings should only be done one time upon creation of the service singleton. This means once upon application startup.
- If this was done every time, then that could be stress on the system by checking and loading file data.
- [x] ~The settings service should hold all of the app setting names and values as a `record` type.~
- [x] ~The settings service should have a method that returns the settings for evaluation and use by other parts of the application~
- [x] The settings service will be a singleton in the IoC container
- [x] Overload method named `CreateWindow()` added to the `App` class.
- This method will pull the window width and height app settings for users to set the size of the application.
- [x] The app settings are set up to be used as default instead of using hard-coded values using the `CreateWindow(unit, unit)` method
- [x] Unit tests added
- [x] All unit tests pass
### ToDo Items
- [x] Draft pull request created and linked to this issue
- [X] Priority label added to issue (**_low priority_**, **_medium priority_**, or **_high priority_**)
- [x] Issue linked to the proper project
- [X] Issue linked to proper milestone
### Issue Dependencies
_No response_
### Related Work
- #247
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
1.0
|
🚧Build app settings system - ### I have done the items below . . .
- [X] I have updated the title by replacing the '**_<title_**>' section.
### Description
Build an application settings system. This will be used to turn on and off various application behaviors. The system should automatically check if app settings exist. If the application settings do not exist, the file will be auto-created and all application settings with default settings will be added to the file.
This should be set up as a service with the name `AppSettingsService` with an interface and full unit testing.
The settings will be simple key-value pairs in JSON format.
Create 2 app settings for the width and height of the application. Refactor/add code to make use of these 2 settings.
Add a method overload to the `App` class of the method `CreateWindow()`. This class will create an instance of the singleton app settings object and check if the settings exist. If the settings do not exist, use a default width and height of **1500 x 800**.
The default width and height should be constants in the `App` class.
Change the method used in the `Program.cs` file for the **VelaptorTesting** project to use the new `CreateWindow()` method overload.
### Acceptance Criteria
**This issue is finished when:**
- [x] The settings should be saved in JSON format
- [x] The check for file existence and creation with default settings should only be done one time upon creation of the service singleton. This means once upon application startup.
- If this was done every time, then that could be stress on the system by checking and loading file data.
- [x] ~The settings service should hold all of the app setting names and values as a `record` type.~
- [x] ~The settings service should have a method that returns the settings for evaluation and use by other parts of the application~
- [x] The settings service will be a singleton in the IoC container
- [x] Overload method named `CreateWindow()` added to the `App` class.
- This method will pull the window width and height app settings for users to set the size of the application.
- [x] The app settings are set up to be used as default instead of using hard-coded values using the `CreateWindow(unit, unit)` method
- [x] Unit tests added
- [x] All unit tests pass
### ToDo Items
- [x] Draft pull request created and linked to this issue
- [X] Priority label added to issue (**_low priority_**, **_medium priority_**, or **_high priority_**)
- [x] Issue linked to the proper project
- [X] Issue linked to proper milestone
### Issue Dependencies
_No response_
### Related Work
- #247
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
priority
|
🚧build app settings system i have done the items below i have updated the title by replacing the section description build an application settings system this will be used to turn on and off various application behaviors the system should automatically check if app settings exist if the application settings do not exist the file will be auto created and all application settings with default settings will be added to the file this should be set up as a service with the name appsettingsservice with an interface and full unit testing the settings will be simple key value pairs in json format create app settings for the width and height of the application refactor add code to make use of these settings add a method overload to the app class of the method createwindow this class will create an instance of the singleton app settings object and check if the settings exist if the settings do not exist use a default width and height of x the default width and height should be constants in the app class change the method used in the program cs file for the velaptortesting project to use the new createwindow method overload acceptance criteria this issue is finished when the settings should be saved in json format the check for file existence and creation with default settings should only be done one time upon creation of the service singleton this means once upon application startup if this was done every time then that could be stress on the system by checking and loading file data the settings service should hold all of the app setting names and values as a record type the settings service should have a method that returns the settings for evaluation and use by other parts of the application the settings service will be a singleton in the ioc container overload method named createwindow added to the app class this method will pull the window width and height app settings for users to set the size of the application the app settings are set up to be used as default instead of using hard coded values using the createwindow unit unit method unit tests added all unit tests pass todo items draft pull request created and linked to this issue priority label added to issue low priority medium priority or high priority issue linked to the proper project issue linked to proper milestone issue dependencies no response related work code of conduct i agree to follow this project s code of conduct
| 1
|
317,339
| 9,663,585,451
|
IssuesEvent
|
2019-05-21 01:23:56
|
NCIOCPL/cgov-digital-platform
|
https://api.github.com/repos/NCIOCPL/cgov-digital-platform
|
closed
|
Metadata tags are missing for video and infographic
|
High priority
|
The metadata is missing from video and infographic pages. Please fix.
|
1.0
|
Metadata tags are missing for video and infographic - The metadata is missing from video and infographic pages. Please fix.
|
priority
|
metadata tags are missing for video and infographic the metadata is missing from video and infographic pages please fix
| 1
|
472,803
| 13,631,565,346
|
IssuesEvent
|
2020-09-24 18:12:56
|
cloudfour/lighthouse-parade
|
https://api.github.com/repos/cloudfour/lighthouse-parade
|
closed
|
Convert to TypeScript
|
High priority enhancement
|
- [x] Set up TS https://github.com/cloudfour/lighthouse-parade/pull/11
- [x] Convert to TS: `utilities` https://github.com/cloudfour/lighthouse-parade/pull/15
- [x] Convert to TS: `reportToRow` https://github.com/cloudfour/lighthouse-parade/pull/17
- [x] Convert to TS: `combine` https://github.com/cloudfour/lighthouse-parade/pull/19
- [ ] Convert to TS: `combine_task` https://github.com/cloudfour/lighthouse-parade/pull/23
- [x] Convert to TS: `lighthouse` https://github.com/cloudfour/lighthouse-parade/pull/20
- [ ] Convert to TS: `lighthouse_task` https://github.com/cloudfour/lighthouse-parade/pull/23
- [x] Convert to TS: `url_csv_maker` https://github.com/cloudfour/lighthouse-parade/pull/22
- [ ] Convert to TS: `scan_task` https://github.com/cloudfour/lighthouse-parade/pull/23
- [ ] Convert to TS: `urls_task` https://github.com/cloudfour/lighthouse-parade/pull/23
|
1.0
|
Convert to TypeScript - - [x] Set up TS https://github.com/cloudfour/lighthouse-parade/pull/11
- [x] Convert to TS: `utilities` https://github.com/cloudfour/lighthouse-parade/pull/15
- [x] Convert to TS: `reportToRow` https://github.com/cloudfour/lighthouse-parade/pull/17
- [x] Convert to TS: `combine` https://github.com/cloudfour/lighthouse-parade/pull/19
- [ ] Convert to TS: `combine_task` https://github.com/cloudfour/lighthouse-parade/pull/23
- [x] Convert to TS: `lighthouse` https://github.com/cloudfour/lighthouse-parade/pull/20
- [ ] Convert to TS: `lighthouse_task` https://github.com/cloudfour/lighthouse-parade/pull/23
- [x] Convert to TS: `url_csv_maker` https://github.com/cloudfour/lighthouse-parade/pull/22
- [ ] Convert to TS: `scan_task` https://github.com/cloudfour/lighthouse-parade/pull/23
- [ ] Convert to TS: `urls_task` https://github.com/cloudfour/lighthouse-parade/pull/23
|
priority
|
convert to typescript set up ts convert to ts utilities convert to ts reporttorow convert to ts combine convert to ts combine task convert to ts lighthouse convert to ts lighthouse task convert to ts url csv maker convert to ts scan task convert to ts urls task
| 1
|
228,350
| 7,550,007,123
|
IssuesEvent
|
2018-04-18 15:39:20
|
EyeSeeTea/QAApp
|
https://api.github.com/repos/EyeSeeTea/QAApp
|
reopened
|
Improve, feedback: - Indent feedback content
|
complexity - low (1hr) priority - high type - maintenance
|
Please, use different colors for the background of Questions and Feedback scripts. Maybe a different shade of grey? This will help users to distinguish between Questions and Feedback scripts more easily.
|
1.0
|
Improve, feedback: - Indent feedback content - Please, use different colors for the background of Questions and Feedback scripts. Maybe a different shade of grey? This will help users to distinguish between Questions and Feedback scripts more easily.
|
priority
|
improve feedback indent feedback content please use different colors for the background of questions and feedback scripts maybe a different shade of grey this will help users to distinguish between questions and feedback scripts more easily
| 1
|
783,219
| 27,523,136,638
|
IssuesEvent
|
2023-03-06 16:14:47
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
tests/bluetooth/bsim/audio unicast_audio broken in main blocking CI
|
bug priority: high area: Bluetooth area: Bluetooth Audio
|
**Describe the bug**
tests/bluetooth/bsim/audio unicast_audio is broken in main and is currently blocking CI
**To Reproduce**
Look at bluetooth's CI workflow
Or:
1. fetch latest main locally
2. tests/bluetooth/bsim/audio/compile.sh
3. tests/bluetooth/bsim/audio/test_scripts/unicast_audio.sh
**Expected behavior**
The test passes
**Impact**
CI blocked for BT tests
**Logs and console output**
https://github.com/zephyrproject-rtos/zephyr/actions/runs/4344560229/jobs/7588045210
**Environment (please complete the following information):**
- Zephyr's CI
- Local Linux host
|
1.0
|
tests/bluetooth/bsim/audio unicast_audio broken in main blocking CI - **Describe the bug**
tests/bluetooth/bsim/audio unicast_audio is broken in main and is currently blocking CI
**To Reproduce**
Look at bluetooth's CI workflow
Or:
1. fetch latest main locally
2. tests/bluetooth/bsim/audio/compile.sh
3. tests/bluetooth/bsim/audio/test_scripts/unicast_audio.sh
**Expected behavior**
The test passes
**Impact**
CI blocked for BT tests
**Logs and console output**
https://github.com/zephyrproject-rtos/zephyr/actions/runs/4344560229/jobs/7588045210
**Environment (please complete the following information):**
- Zephyr's CI
- Local Linux host
|
priority
|
tests bluetooth bsim audio unicast audio broken in main blocking ci describe the bug tests bluetooth bsim audio unicast audio is broken in main and is currently blocking ci to reproduce look at bluetooth s ci workflow or fetch latest main locally tests bluetooth bsim audio compile sh tests bluetooth bsim audio test scripts unicast audio sh expected behavior the test passes impact ci blocked for bt tests logs and console output environment please complete the following information zephyr s ci local linux host
| 1
|
162,324
| 6,150,882,429
|
IssuesEvent
|
2017-06-28 00:08:55
|
Codewars/codewars.com
|
https://api.github.com/repos/Codewars/codewars.com
|
closed
|
Codewars Red streak stats misbehaving again
|
bug high priority
|
For some days now my profile has said this --

-- even though I did some katas on the mornings (California time) of April 30, May 1, etc.
|
1.0
|
Codewars Red streak stats misbehaving again - For some days now my profile has said this --

-- even though I did some katas on the mornings (California time) of April 30, May 1, etc.
|
priority
|
codewars red streak stats misbehaving again for some days now my profile has said this even though i did some katas on the mornings california time of april may etc
| 1
|
480,244
| 13,838,412,012
|
IssuesEvent
|
2020-10-14 06:11:50
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
cse.google.com - design is broken
|
browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical
|
<!-- @browser: Firefox 82.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) Gecko/20100101 Firefox/82.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/59848 -->
**URL**: https://cse.google.com/cse?q=cold+cough+treatment+via+ayurveda&sa=Search&ie=UTF-8&cx=partner%2Dpub-9491756922145733%3A4562159575#%9C
**Browser / Version**: Firefox 82.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Other
**Problem type**: Design is broken
**Description**: Items are misaligned
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/10/e197f967-c3c4-4c48-94d6-27668ed6974d.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201008183927</li><li>channel: aurora</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/10/18c77ede-2d14-461e-bfbc-d6c613049d92)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
cse.google.com - design is broken - <!-- @browser: Firefox 82.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) Gecko/20100101 Firefox/82.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/59848 -->
**URL**: https://cse.google.com/cse?q=cold+cough+treatment+via+ayurveda&sa=Search&ie=UTF-8&cx=partner%2Dpub-9491756922145733%3A4562159575#%9C
**Browser / Version**: Firefox 82.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Other
**Problem type**: Design is broken
**Description**: Items are misaligned
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/10/e197f967-c3c4-4c48-94d6-27668ed6974d.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201008183927</li><li>channel: aurora</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/10/18c77ede-2d14-461e-bfbc-d6c613049d92)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
priority
|
cse google com design is broken url browser version firefox operating system windows tested another browser yes other problem type design is broken description items are misaligned steps to reproduce view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel aurora hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 1
|
166,386
| 6,303,907,301
|
IssuesEvent
|
2017-07-21 14:46:15
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[studio] Delete publishes items not published previously
|
bug Priority: High
|
Create a file and delete it right away, shows a publish of that item (why publish something that's never been published).
|
1.0
|
[studio] Delete publishes items not published previously - Create a file and delete it right away, shows a publish of that item (why publish something that's never been published).
|
priority
|
delete publishes items not published previously create a file and delete it right away shows a publish of that item why publish something that s never been published
| 1
|
199,937
| 6,996,271,047
|
IssuesEvent
|
2017-12-15 23:23:24
|
wolganens/edumpampa
|
https://api.github.com/repos/wolganens/edumpampa
|
opened
|
Modificar DNS do domínio edumpampa.mus.br
|
high priority
|
- @wolganens deve enviar dados de DNS do HEROKU;
- Amanda deve alterar dados no registro.br (pesquisar no GMAIL Hostmaster@registro.br)
|
1.0
|
Modificar DNS do domínio edumpampa.mus.br - - @wolganens deve enviar dados de DNS do HEROKU;
- Amanda deve alterar dados no registro.br (pesquisar no GMAIL Hostmaster@registro.br)
|
priority
|
modificar dns do domínio edumpampa mus br wolganens deve enviar dados de dns do heroku amanda deve alterar dados no registro br pesquisar no gmail hostmaster registro br
| 1
|
241,505
| 7,815,946,993
|
IssuesEvent
|
2018-06-13 01:29:02
|
WP-for-Church/Sermon-Manager
|
https://api.github.com/repos/WP-for-Church/Sermon-Manager
|
closed
|
Custom archive page slug no longer works
|
[Blocker] Cannot reproduce [Priority] High [Type] Bug
|
### Expected Behaviour
Setting "Archive Page Slug" should allow me to change the "sermons" part of the URL to "homily" or something else.
### Actual Behaviour
The archive page slug stays the same no matter what I change it to. Links to individual posts change, however - they just lead to a 404 page.
### Steps To Reproduce
1. Navigate to "Settings" under Sermon Manager
1. Change "Archive Page Slug" to any value
1. Navigate to sermon archive page
1. Click on a sermon
### Platform
**Sermon Manager Version:** 2.13.0
**WordPress Version:** 4.9.5
**PHP Version:** 7.1
|
1.0
|
Custom archive page slug no longer works - ### Expected Behaviour
Setting "Archive Page Slug" should allow me to change the "sermons" part of the URL to "homily" or something else.
### Actual Behaviour
The archive page slug stays the same no matter what I change it to. Links to individual posts change, however - they just lead to a 404 page.
### Steps To Reproduce
1. Navigate to "Settings" under Sermon Manager
1. Change "Archive Page Slug" to any value
1. Navigate to sermon archive page
1. Click on a sermon
### Platform
**Sermon Manager Version:** 2.13.0
**WordPress Version:** 4.9.5
**PHP Version:** 7.1
|
priority
|
custom archive page slug no longer works expected behaviour setting archive page slug should allow me to change the sermons part of the url to homily or something else actual behaviour the archive page slug stays the same no matter what i change it to links to individual posts change however they just lead to a page steps to reproduce navigate to settings under sermon manager change archive page slug to any value navigate to sermon archive page click on a sermon platform sermon manager version wordpress version php version
| 1
|
666,302
| 22,349,615,663
|
IssuesEvent
|
2022-06-15 10:50:10
|
codeklasse/codeklasse.de
|
https://api.github.com/repos/codeklasse/codeklasse.de
|
closed
|
Add copy for public sector
|
priority:high
|
I need to add the copy I wrote for interested parties from the public sector, and add a third button for it.
|
1.0
|
Add copy for public sector - I need to add the copy I wrote for interested parties from the public sector, and add a third button for it.
|
priority
|
add copy for public sector i need to add the copy i wrote for interested parties from the public sector and add a third button for it
| 1
|
815,039
| 30,533,950,533
|
IssuesEvent
|
2023-07-19 15:57:25
|
awslabs/aws-dataall
|
https://api.github.com/repos/awslabs/aws-dataall
|
closed
|
Encrypted Secrets in secret manager should be rotated atleast annually and encrypted with a KMS key
|
type: enhancement status: in-review priority: high
|
### Describe the bug
Encrypted Secrets in secret manager should be rotated atleast annually and encrypted with a KMS key. If the rotation is not an option then consider storing the secret under systems manager parameter store.
### How to Reproduce
```
*P.S. Please do not attach files as it's considered a security risk. Add code snippets directly in the message body as much as possible.*
```
### Expected behavior
Encrypted Secrets in secret manager should be rotated atleast annually and encrypted with a KMS key. If the rotation is not an option then consider storing the secret under systems manager parameter store.
### Your project
_No response_
### Screenshots
_No response_
### OS
All
### Python version
3.1
### AWS data.all version
v1.3,v1.4,v1.5
### Additional context
As per security best practice for secrets management the above need to be met.
|
1.0
|
Encrypted Secrets in secret manager should be rotated atleast annually and encrypted with a KMS key - ### Describe the bug
Encrypted Secrets in secret manager should be rotated atleast annually and encrypted with a KMS key. If the rotation is not an option then consider storing the secret under systems manager parameter store.
### How to Reproduce
```
*P.S. Please do not attach files as it's considered a security risk. Add code snippets directly in the message body as much as possible.*
```
### Expected behavior
Encrypted Secrets in secret manager should be rotated atleast annually and encrypted with a KMS key. If the rotation is not an option then consider storing the secret under systems manager parameter store.
### Your project
_No response_
### Screenshots
_No response_
### OS
All
### Python version
3.1
### AWS data.all version
v1.3,v1.4,v1.5
### Additional context
As per security best practice for secrets management the above need to be met.
|
priority
|
encrypted secrets in secret manager should be rotated atleast annually and encrypted with a kms key describe the bug encrypted secrets in secret manager should be rotated atleast annually and encrypted with a kms key if the rotation is not an option then consider storing the secret under systems manager parameter store how to reproduce p s please do not attach files as it s considered a security risk add code snippets directly in the message body as much as possible expected behavior encrypted secrets in secret manager should be rotated atleast annually and encrypted with a kms key if the rotation is not an option then consider storing the secret under systems manager parameter store your project no response screenshots no response os all python version aws data all version additional context as per security best practice for secrets management the above need to be met
| 1
|
382,387
| 11,305,556,683
|
IssuesEvent
|
2020-01-18 06:45:01
|
cop4934-fall19-group32/Project-32
|
https://api.github.com/repos/cop4934-fall19-group32/Project-32
|
closed
|
UI DragNDrop Refactor
|
Priority:High Puzzles UI
|
The DragNDrop behavior of our instruction buttons is currently very fragile. It is difficult to extend behaviors for different instructions, and difficult to manage all the flags in the editor.
- [x] Refine DragNDrop behavior inheritance hierarchy
- [x] Create a more generic base script (no destruction on OnEndDrag)
~~- [ ] Create a subclass for generic instructions (input, output)~~
~~- [ ] Make this subclass implement the self-destruction on invalid drag behavior~~
- [x] Reduce the amount of manual set-up to maintain the system
- [x] Consider a method that eliminates the need for item slots
- [x] Consider having UI buttons scan the scene hierarchy to extract necessary references, as opposed to setting them manually in the editor
|
1.0
|
UI DragNDrop Refactor - The DragNDrop behavior of our instruction buttons is currently very fragile. It is difficult to extend behaviors for different instructions, and difficult to manage all the flags in the editor.
- [x] Refine DragNDrop behavior inheritance hierarchy
- [x] Create a more generic base script (no destruction on OnEndDrag)
~~- [ ] Create a subclass for generic instructions (input, output)~~
~~- [ ] Make this subclass implement the self-destruction on invalid drag behavior~~
- [x] Reduce the amount of manual set-up to maintain the system
- [x] Consider a method that eliminates the need for item slots
- [x] Consider having UI buttons scan the scene hierarchy to extract necessary references, as opposed to setting them manually in the editor
|
priority
|
ui dragndrop refactor the dragndrop behavior of our instruction buttons is currently very fragile it is difficult to extend behaviors for different instructions and difficult to manage all the flags in the editor refine dragndrop behavior inheritance hierarchy create a more generic base script no destruction on onenddrag create a subclass for generic instructions input output make this subclass implement the self destruction on invalid drag behavior reduce the amount of manual set up to maintain the system consider a method that eliminates the need for item slots consider having ui buttons scan the scene hierarchy to extract necessary references as opposed to setting them manually in the editor
| 1
|
214,713
| 7,276,144,001
|
IssuesEvent
|
2018-02-21 15:36:03
|
ballerina-lang/ballerina
|
https://api.github.com/repos/ballerina-lang/ballerina
|
closed
|
Positions are not highlighted for enum references
|
Priority/High Severity/Major Type/Bug component/language-server
|
**Description:**
Position calculation for the enum nodes are wrong from the location where the cursor currently is. This is because of a issue with the enum's position calculation from the ballerina core.
**Related Issues:**
https://github.com/ballerina-lang/ballerina/issues/4582
|
1.0
|
Positions are not highlighted for enum references - **Description:**
Position calculation for the enum nodes are wrong from the location where the cursor currently is. This is because of a issue with the enum's position calculation from the ballerina core.
**Related Issues:**
https://github.com/ballerina-lang/ballerina/issues/4582
|
priority
|
positions are not highlighted for enum references description position calculation for the enum nodes are wrong from the location where the cursor currently is this is because of a issue with the enum s position calculation from the ballerina core related issues
| 1
|
604,064
| 18,676,087,932
|
IssuesEvent
|
2021-10-31 15:33:58
|
AY2122S1-CS2103T-T11-1/tp
|
https://api.github.com/repos/AY2122S1-CS2103T-T11-1/tp
|
closed
|
[PE-D] Error message does not match the error when editing the email of a student
|
bug priority.High
|

`dawd+++a@dade` does fit the requirements of a valid email address, as stated in the given error message. However, an error still appears for this, and does not allow the user to edit the email address
<!--session: 1635494181576-c474c997-e998-4578-a94e-773d881fc99d-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: JunWei3112/ped#10
|
1.0
|
[PE-D] Error message does not match the error when editing the email of a student - 
`dawd+++a@dade` does fit the requirements of a valid email address, as stated in the given error message. However, an error still appears for this, and does not allow the user to edit the email address
<!--session: 1635494181576-c474c997-e998-4578-a94e-773d881fc99d-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: JunWei3112/ped#10
|
priority
|
error message does not match the error when editing the email of a student dawd a dade does fit the requirements of a valid email address as stated in the given error message however an error still appears for this and does not allow the user to edit the email address labels severity medium type functionalitybug original ped
| 1
|
426,695
| 12,378,061,305
|
IssuesEvent
|
2020-05-19 10:02:04
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
[ISSUE]scim2 patch user is returning wrong error message
|
Affected/5.8.0 Complexity/Medium Component/SCIM Priority/High Type/Bug WUM
|
**Describe the issue:**
When password history feature enabled SCIM Patch request to update the user's password giving the following response which has a generic error message.
`{"schemas":["urn:ietf:params:scim:api:messages:2.0:Error"],"detail":"Error while updating attributes of user: nilasini","status":"500"}`
But int he console could able to see a detailed error message
Caused by: org.wso2.carbon.user.core.UserStoreException: This password has been used in recent history. Please choose a different password
**How to reproduce:**
1. Enable password history feature
2. Add a user
3. Send a SCIM patch request to update the password exceeding the password history count
**Expected behavior:**
SCIM response should have a detailed error message like "This password has been used in recent history. Please choose a different password"
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: IS 5.8.0
- OS: Linux
- Database: H2
- Userstore: LDAP
---
### Optional Fields
**Related issues:**
<!-- Any related issues from this/other repositories-->
**Suggested labels:**
<!-- Only to be used by non-members -->
|
1.0
|
[ISSUE]scim2 patch user is returning wrong error message - **Describe the issue:**
When password history feature enabled SCIM Patch request to update the user's password giving the following response which has a generic error message.
`{"schemas":["urn:ietf:params:scim:api:messages:2.0:Error"],"detail":"Error while updating attributes of user: nilasini","status":"500"}`
But int he console could able to see a detailed error message
Caused by: org.wso2.carbon.user.core.UserStoreException: This password has been used in recent history. Please choose a different password
**How to reproduce:**
1. Enable password history feature
2. Add a user
3. Send a SCIM patch request to update the password exceeding the password history count
**Expected behavior:**
SCIM response should have a detailed error message like "This password has been used in recent history. Please choose a different password"
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: IS 5.8.0
- OS: Linux
- Database: H2
- Userstore: LDAP
---
### Optional Fields
**Related issues:**
<!-- Any related issues from this/other repositories-->
**Suggested labels:**
<!-- Only to be used by non-members -->
|
priority
|
patch user is returning wrong error message describe the issue when password history feature enabled scim patch request to update the user s password giving the following response which has a generic error message schemas detail error while updating attributes of user nilasini status but int he console could able to see a detailed error message caused by org carbon user core userstoreexception this password has been used in recent history please choose a different password how to reproduce enable password history feature add a user send a scim patch request to update the password exceeding the password history count expected behavior scim response should have a detailed error message like this password has been used in recent history please choose a different password environment information please complete the following information remove any unnecessary fields product version is os linux database userstore ldap optional fields related issues suggested labels
| 1
|
461,273
| 13,227,827,686
|
IssuesEvent
|
2020-08-18 04:26:41
|
discordextremelist/website
|
https://api.github.com/repos/discordextremelist/website
|
opened
|
black text in api docs
|
bug high priority
|
https://discordextremelist.xyz/en-US/docs#references
text is black
should be white
apparently nobody made a github issue about this yet
|
1.0
|
black text in api docs - https://discordextremelist.xyz/en-US/docs#references
text is black
should be white
apparently nobody made a github issue about this yet
|
priority
|
black text in api docs text is black should be white apparently nobody made a github issue about this yet
| 1
|
82,351
| 3,605,743,312
|
IssuesEvent
|
2016-02-04 07:41:14
|
supremefist/cnto-django-roster
|
https://api.github.com/repos/supremefist/cnto-django-roster
|
closed
|
Minimum attendance formula
|
high priority
|
As we've announced we'll reduce events per month from 12 to 8. We will also no longer distinguish Training from non-training events, all events are counted the same.
Could you please change minimum monthly event attendance requirement to any 2 events?
|
1.0
|
Minimum attendance formula - As we've announced we'll reduce events per month from 12 to 8. We will also no longer distinguish Training from non-training events, all events are counted the same.
Could you please change minimum monthly event attendance requirement to any 2 events?
|
priority
|
minimum attendance formula as we ve announced we ll reduce events per month from to we will also no longer distinguish training from non training events all events are counted the same could you please change minimum monthly event attendance requirement to any events
| 1
|
303,536
| 9,308,132,892
|
IssuesEvent
|
2019-03-25 13:57:09
|
codeforbtv/green-up-app
|
https://api.github.com/repos/codeforbtv/green-up-app
|
closed
|
Ask users for permission to use their email for marketing purposes
|
Priority: High Type: Enhancement
|
Handle the following 2 use cases:
- existing users that log back in - prompt them asking for permission
- new users registering
|
1.0
|
Ask users for permission to use their email for marketing purposes - Handle the following 2 use cases:
- existing users that log back in - prompt them asking for permission
- new users registering
|
priority
|
ask users for permission to use their email for marketing purposes handle the following use cases existing users that log back in prompt them asking for permission new users registering
| 1
|
731,000
| 25,197,664,768
|
IssuesEvent
|
2022-11-12 18:28:25
|
bounswe/bounswe2022group9
|
https://api.github.com/repos/bounswe/bounswe2022group9
|
closed
|
[Backend] Upgrade User Module
|
Priority: High In Progress Backend
|
Deadline: 13.11.2022 21:00
TODO:
- User model should be upgraded in order to show user information on profile page.
- A name field and a birthdate field should be added to user module.
|
1.0
|
[Backend] Upgrade User Module - Deadline: 13.11.2022 21:00
TODO:
- User model should be upgraded in order to show user information on profile page.
- A name field and a birthdate field should be added to user module.
|
priority
|
upgrade user module deadline todo user model should be upgraded in order to show user information on profile page a name field and a birthdate field should be added to user module
| 1
|
118,011
| 4,730,537,983
|
IssuesEvent
|
2016-10-18 21:56:49
|
FeraGroup/FTCVortexScoreCounter
|
https://api.github.com/repos/FeraGroup/FTCVortexScoreCounter
|
closed
|
Reset button not working
|
bug High Priority
|
I downloaded V0.1.2 I ran the program and pressed the A buttons to increase the number of points scored. When I press the reset button to make the score values return to zero, they do not reset. The values remain what they had been before pressing the button.
I have to close the app and relaunch to get the values back to zero.
|
1.0
|
Reset button not working - I downloaded V0.1.2 I ran the program and pressed the A buttons to increase the number of points scored. When I press the reset button to make the score values return to zero, they do not reset. The values remain what they had been before pressing the button.
I have to close the app and relaunch to get the values back to zero.
|
priority
|
reset button not working i downloaded i ran the program and pressed the a buttons to increase the number of points scored when i press the reset button to make the score values return to zero they do not reset the values remain what they had been before pressing the button i have to close the app and relaunch to get the values back to zero
| 1
|
538,358
| 15,767,444,149
|
IssuesEvent
|
2021-03-31 16:05:36
|
deepsourcelabs/good-first-issue
|
https://api.github.com/repos/deepsourcelabs/good-first-issue
|
closed
|
Add details for local development
|
Priority:High Type:Feature stale
|
It is not clear how someone can set GFI up locally and start local development. There should also be fixtures so the user doesn't need to run the population themselves.
|
1.0
|
Add details for local development - It is not clear how someone can set GFI up locally and start local development. There should also be fixtures so the user doesn't need to run the population themselves.
|
priority
|
add details for local development it is not clear how someone can set gfi up locally and start local development there should also be fixtures so the user doesn t need to run the population themselves
| 1
|
318,252
| 9,684,128,295
|
IssuesEvent
|
2019-05-23 13:08:19
|
rapid-music-finder/rapid-shazam
|
https://api.github.com/repos/rapid-music-finder/rapid-shazam
|
closed
|
Album Image
|
Priority: High enhancement
|
Album Image should be displayed in designated area.
Take data from the AudD response.
|
1.0
|
Album Image - Album Image should be displayed in designated area.
Take data from the AudD response.
|
priority
|
album image album image should be displayed in designated area take data from the audd response
| 1
|
561,833
| 16,625,210,538
|
IssuesEvent
|
2021-06-03 08:42:01
|
eclipse-sirius/sirius-components
|
https://api.github.com/repos/eclipse-sirius/sirius-components
|
closed
|
Add support for layout mode (manual or auto) in the View DSL
|
area: backend difficulty: starter 👧 priority: high type: feature request 🤔
|
With https://github.com/eclipse-sirius/sirius-components/issues/503 each diagram description can now decide if it should be layouted automatically or manually. This new feature should be exposed in the View DSL (with a simple boolean/flag).
|
1.0
|
Add support for layout mode (manual or auto) in the View DSL - With https://github.com/eclipse-sirius/sirius-components/issues/503 each diagram description can now decide if it should be layouted automatically or manually. This new feature should be exposed in the View DSL (with a simple boolean/flag).
|
priority
|
add support for layout mode manual or auto in the view dsl with each diagram description can now decide if it should be layouted automatically or manually this new feature should be exposed in the view dsl with a simple boolean flag
| 1
|
444,553
| 12,814,256,921
|
IssuesEvent
|
2020-07-04 17:39:16
|
UC-Davis-molecular-computing/scadnano
|
https://api.github.com/repos/UC-Davis-molecular-computing/scadnano
|
closed
|
set up automated deployment from `dev` into a dev site
|
enhancement high priority
|
For example, the dev site could be https://scadnano.org/dev, or https://dev.scadnano.org.
This will help us to keep the stable version more stable with less frequent updates (still involving merging `dev` into `master`), while still allowing people to test new features who do not want to install Dart and run a local scadnano server.
We should also do automated releases on the `dev` branch, so that people using the `dev` branch can see what features have been introduced.
This should be accompanied by a workflow where we make new branches for implementing most new features/bugfixes, which are then merged into `dev` with a pull request.
|
1.0
|
set up automated deployment from `dev` into a dev site - For example, the dev site could be https://scadnano.org/dev, or https://dev.scadnano.org.
This will help us to keep the stable version more stable with less frequent updates (still involving merging `dev` into `master`), while still allowing people to test new features who do not want to install Dart and run a local scadnano server.
We should also do automated releases on the `dev` branch, so that people using the `dev` branch can see what features have been introduced.
This should be accompanied by a workflow where we make new branches for implementing most new features/bugfixes, which are then merged into `dev` with a pull request.
|
priority
|
set up automated deployment from dev into a dev site for example the dev site could be or this will help us to keep the stable version more stable with less frequent updates still involving merging dev into master while still allowing people to test new features who do not want to install dart and run a local scadnano server we should also do automated releases on the dev branch so that people using the dev branch can see what features have been introduced this should be accompanied by a workflow where we make new branches for implementing most new features bugfixes which are then merged into dev with a pull request
| 1
|
171,165
| 6,480,411,563
|
IssuesEvent
|
2017-08-18 13:22:11
|
hypothesis/lti
|
https://api.github.com/repos/hypothesis/lti
|
opened
|
Spike to investigate how the Canvas app can securely pass config to the embedded client
|
Priority: High
|
See also:
* [Canvas users have to manually create and activate Hypothesis accounts](https://github.com/hypothesis/product-backlog/issues/342)
* [Canvas users have to manually create and join Hypothesis groups](Canvas users have to manually create and join Hypothesis groups)
If the Canvas app is going to automatically log the user into the Hypothesis client using (a Hypothesis third-party account based on) their Canvas account, and is also going to automatically make the user a member of and focus a private group for the Canvas course, then **the Canvas app somehow needs to pass that configuration (the user and group) to the Hypothesis client**.
[Canvas users have to manually create and activate Hypothesis accounts](https://github.com/hypothesis/product-backlog/issues/342) said simply "_This can probably be done by the Canvas app rendering a grant token into the document that's being annotated, the same as is done by eLife and the publisher test site_" but actually that doesn't sound like a good idea. eLife pages are trusted. But the Canvas app annotates arbitrary third-party pages. We can't just render a grant token that can be used to access a user's account into a random third-party page such that any third-party JavaScript code on the page could steal the grant token.
So we need to come up with a secure way for the client to pass this config.
One important question is: will this require changes to Via / require a new Via replacement because the Canvas app needs Via to pass config to the client for it? Ideally we would implement this without any changes to Via, because then implementing user and group integration for Canvas can be decoupled from replacing Via.
|
1.0
|
Spike to investigate how the Canvas app can securely pass config to the embedded client - See also:
* [Canvas users have to manually create and activate Hypothesis accounts](https://github.com/hypothesis/product-backlog/issues/342)
* [Canvas users have to manually create and join Hypothesis groups](Canvas users have to manually create and join Hypothesis groups)
If the Canvas app is going to automatically log the user into the Hypothesis client using (a Hypothesis third-party account based on) their Canvas account, and is also going to automatically make the user a member of and focus a private group for the Canvas course, then **the Canvas app somehow needs to pass that configuration (the user and group) to the Hypothesis client**.
[Canvas users have to manually create and activate Hypothesis accounts](https://github.com/hypothesis/product-backlog/issues/342) said simply "_This can probably be done by the Canvas app rendering a grant token into the document that's being annotated, the same as is done by eLife and the publisher test site_" but actually that doesn't sound like a good idea. eLife pages are trusted. But the Canvas app annotates arbitrary third-party pages. We can't just render a grant token that can be used to access a user's account into a random third-party page such that any third-party JavaScript code on the page could steal the grant token.
So we need to come up with a secure way for the client to pass this config.
One important question is: will this require changes to Via / require a new Via replacement because the Canvas app needs Via to pass config to the client for it? Ideally we would implement this without any changes to Via, because then implementing user and group integration for Canvas can be decoupled from replacing Via.
|
priority
|
spike to investigate how the canvas app can securely pass config to the embedded client see also canvas users have to manually create and join hypothesis groups if the canvas app is going to automatically log the user into the hypothesis client using a hypothesis third party account based on their canvas account and is also going to automatically make the user a member of and focus a private group for the canvas course then the canvas app somehow needs to pass that configuration the user and group to the hypothesis client said simply this can probably be done by the canvas app rendering a grant token into the document that s being annotated the same as is done by elife and the publisher test site but actually that doesn t sound like a good idea elife pages are trusted but the canvas app annotates arbitrary third party pages we can t just render a grant token that can be used to access a user s account into a random third party page such that any third party javascript code on the page could steal the grant token so we need to come up with a secure way for the client to pass this config one important question is will this require changes to via require a new via replacement because the canvas app needs via to pass config to the client for it ideally we would implement this without any changes to via because then implementing user and group integration for canvas can be decoupled from replacing via
| 1
|
371,890
| 10,987,143,583
|
IssuesEvent
|
2019-12-02 08:35:01
|
projectacrn/acrn-hypervisor
|
https://api.github.com/repos/projectacrn/acrn-hypervisor
|
closed
|
Hypervisor crash when run syz_ic_set_callback_vector.(1.0 Stable)
|
priority: P2-High type: bug
|
1.Environment
[Board]: APL UP2
root@clr-b1b5101306fd4a3a803cf1050b4893f0~ # swupd info
Installed version: 30440
root@clr-19296a3ecf5b4723adce369a5c1807d2~ # uname -a
Linux clr-19296a3ecf5b4723adce369a5c1807d2 4.19.40-quilt-2e5dc0ac-dirty #1 SMP PREEMPT Mon Jul 22 03:38:56 UTC 2019 x86_64 GNU/Linux
root@clr-19296a3ecf5b4723adce369a5c1807d2~ # acrn-dm -v
DM version is: 1.2-unstable-c1b4121e-dirty (daily tag:acrn-2019w29.4-140000p), build by root@2019-07-22 03:45:04
Tools setup wiki: https://wiki.ith.intel.com/display/OTCCWPQA/syzkaller+enabling+on+ACRN
We used Syzkaller ran with hypercall unit tests to do Fuzzing test for ACRN, which ran on SOS and communicate with DM process by socket.
"enable_syscalls":[ "syz_ic_inject_msi", "syz_ic_vm_intr_monitor", "syz_ic_set_irqline","syz_ic_sos_offline_cpu","syz_ic_set_callback_vector","syz_ic_clear_vm_ioreq" ],
2. Reproduce Steps
setup env with wiki: And sync latest ACRN code and fuzzing tool code to your host
apply patch_for_fuzzing_on_dm.txt to devicemodel, and build images
modify acrn_build.sh based your own environment, and run it to rebuild syzkaller tool
flash images, and then make uos autoboot, remove sos password, crashlogctl enable
use acrn.cfg (modify the ip to your own ip) to run syzkaller cases: ./bin/syz-manager -config=acrn.cfg --debug
3. Expected result:
Hypervisor not crashed not hang, and SUT works well
4. Current result:
After run: ./bin/syz-manager -config=acrn.cfg --debug
Hypervisor hang.
|
1.0
|
Hypervisor crash when run syz_ic_set_callback_vector.(1.0 Stable) - 1.Environment
[Board]: APL UP2
root@clr-b1b5101306fd4a3a803cf1050b4893f0~ # swupd info
Installed version: 30440
root@clr-19296a3ecf5b4723adce369a5c1807d2~ # uname -a
Linux clr-19296a3ecf5b4723adce369a5c1807d2 4.19.40-quilt-2e5dc0ac-dirty #1 SMP PREEMPT Mon Jul 22 03:38:56 UTC 2019 x86_64 GNU/Linux
root@clr-19296a3ecf5b4723adce369a5c1807d2~ # acrn-dm -v
DM version is: 1.2-unstable-c1b4121e-dirty (daily tag:acrn-2019w29.4-140000p), build by root@2019-07-22 03:45:04
Tools setup wiki: https://wiki.ith.intel.com/display/OTCCWPQA/syzkaller+enabling+on+ACRN
We used Syzkaller ran with hypercall unit tests to do Fuzzing test for ACRN, which ran on SOS and communicate with DM process by socket.
"enable_syscalls":[ "syz_ic_inject_msi", "syz_ic_vm_intr_monitor", "syz_ic_set_irqline","syz_ic_sos_offline_cpu","syz_ic_set_callback_vector","syz_ic_clear_vm_ioreq" ],
2. Reproduce Steps
setup env with wiki: And sync latest ACRN code and fuzzing tool code to your host
apply patch_for_fuzzing_on_dm.txt to devicemodel, and build images
modify acrn_build.sh based your own environment, and run it to rebuild syzkaller tool
flash images, and then make uos autoboot, remove sos password, crashlogctl enable
use acrn.cfg (modify the ip to your own ip) to run syzkaller cases: ./bin/syz-manager -config=acrn.cfg --debug
3. Expected result:
Hypervisor not crashed not hang, and SUT works well
4. Current result:
After run: ./bin/syz-manager -config=acrn.cfg --debug
Hypervisor hang.
|
priority
|
hypervisor crash when run syz ic set callback vector stable environment apl root clr swupd info installed version root clr uname a linux clr quilt dirty smp preempt mon jul utc gnu linux root clr acrn dm v dm version is unstable dirty daily tag acrn build by root tools setup wiki we used syzkaller ran with hypercall unit tests to do fuzzing test for acrn which ran on sos and communicate with dm process by socket enable syscalls reproduce steps setup env with wiki and sync latest acrn code and fuzzing tool code to your host apply patch for fuzzing on dm txt to devicemodel and build images modify acrn build sh based your own environment and run it to rebuild syzkaller tool flash images and then make uos autoboot remove sos password crashlogctl enable use acrn cfg modify the ip to your own ip to run syzkaller cases bin syz manager config acrn cfg debug expected result hypervisor not crashed not hang and sut works well current result after run bin syz manager config acrn cfg debug hypervisor hang
| 1
|
406,075
| 11,886,680,579
|
IssuesEvent
|
2020-03-27 22:38:43
|
CMPUT301W20T01/boost
|
https://api.github.com/repos/CMPUT301W20T01/boost
|
closed
|
UC 02.03.01 view request start and end on map
|
priority: low risk: high size: 2
|
**Partial User Story:**
_US 07.02.01_
As a driver, I want to view start and end geo-locations on a map for a request.
**Rationale:**
- To see where a rider wants to be picked up and dropped off so the driver can decide if they want to accept the ride request
---
**Notes:**
Display only two points: start and end.
Distinguish between start and end points visually.
|
1.0
|
UC 02.03.01 view request start and end on map - **Partial User Story:**
_US 07.02.01_
As a driver, I want to view start and end geo-locations on a map for a request.
**Rationale:**
- To see where a rider wants to be picked up and dropped off so the driver can decide if they want to accept the ride request
---
**Notes:**
Display only two points: start and end.
Distinguish between start and end points visually.
|
priority
|
uc view request start and end on map partial user story us as a driver i want to view start and end geo locations on a map for a request rationale to see where a rider wants to be picked up and dropped off so the driver can decide if they want to accept the ride request notes display only two points start and end distinguish between start and end points visually
| 1
|
633,886
| 20,269,220,653
|
IssuesEvent
|
2022-02-15 14:50:38
|
owid/covid-19-data
|
https://api.github.com/repos/owid/covid-19-data
|
opened
|
fix(vax,south_korea): spreadsheet format has changed
|
bug vaccinations priority:high
|
The spreadsheet has a new format with quite a few changes, so our current script no longer runs.
cc @minyoh, who previously fixed the script, in case some help is needed with translation :)
|
1.0
|
fix(vax,south_korea): spreadsheet format has changed - The spreadsheet has a new format with quite a few changes, so our current script no longer runs.
cc @minyoh, who previously fixed the script, in case some help is needed with translation :)
|
priority
|
fix vax south korea spreadsheet format has changed the spreadsheet has a new format with quite a few changes so our current script no longer runs cc minyoh who previously fixed the script in case some help is needed with translation
| 1
|
43,880
| 2,893,716,988
|
IssuesEvent
|
2015-06-15 19:26:35
|
SCIInstitute/ShapeWorksStudio
|
https://api.github.com/repos/SCIInstitute/ShapeWorksStudio
|
closed
|
Final release of Studio
|
High Priority IBBM
|
Windows, OSX, Linux. No new features after this. Only bugs important for IBBM.
|
1.0
|
Final release of Studio - Windows, OSX, Linux. No new features after this. Only bugs important for IBBM.
|
priority
|
final release of studio windows osx linux no new features after this only bugs important for ibbm
| 1
|
618,866
| 19,489,303,792
|
IssuesEvent
|
2021-12-27 01:21:35
|
CaptureCoop/SnipSniper
|
https://api.github.com/repos/CaptureCoop/SnipSniper
|
opened
|
Directly open a plain full size image when opening the editor via the tray-icon
|
enhancement Medium Priority High Priority for: Editor
|
Directly open a plain full size image when opening the editor via right click on the tray-icon.
|
2.0
|
Directly open a plain full size image when opening the editor via the tray-icon - Directly open a plain full size image when opening the editor via right click on the tray-icon.
|
priority
|
directly open a plain full size image when opening the editor via the tray icon directly open a plain full size image when opening the editor via right click on the tray icon
| 1
|
88,616
| 3,783,233,538
|
IssuesEvent
|
2016-03-19 01:16:44
|
ParliamentTree/parliamenttree
|
https://api.github.com/repos/ParliamentTree/parliamenttree
|
opened
|
Set up Travis automated tests
|
high-priority up-for-grabs webapp
|
To allow awesome code collaboration, we want to have automated tests here on GitHub. Let's set up Travis. See the setup in [Djangae](https://github.com/potatolondon/djangae) if you need a starting point.
|
1.0
|
Set up Travis automated tests - To allow awesome code collaboration, we want to have automated tests here on GitHub. Let's set up Travis. See the setup in [Djangae](https://github.com/potatolondon/djangae) if you need a starting point.
|
priority
|
set up travis automated tests to allow awesome code collaboration we want to have automated tests here on github let s set up travis see the setup in if you need a starting point
| 1
|
564,413
| 16,725,314,683
|
IssuesEvent
|
2021-06-10 12:20:16
|
bounswe/2021SpringGroup3
|
https://api.github.com/repos/bounswe/2021SpringGroup3
|
closed
|
See Posts by Community functionality should be fixed
|
Component: Frontend Priority: High Status: Completed Type: Bug
|
### There is a bug in see post by community functionality.
The page is not displaying posts. We can only see the header (Posts). There may be problems in retrieving or casting data.
Related Files:
- GetPostsViewController
- posts.html
Needed to be checked and fixed **as soon as possible**.
|
1.0
|
See Posts by Community functionality should be fixed - ### There is a bug in see post by community functionality.
The page is not displaying posts. We can only see the header (Posts). There may be problems in retrieving or casting data.
Related Files:
- GetPostsViewController
- posts.html
Needed to be checked and fixed **as soon as possible**.
|
priority
|
see posts by community functionality should be fixed there is a bug in see post by community functionality the page is not displaying posts we can only see the header posts there may be problems in retrieving or casting data related files getpostsviewcontroller posts html needed to be checked and fixed as soon as possible
| 1
|
289,001
| 8,853,830,407
|
IssuesEvent
|
2019-01-08 22:39:11
|
SpongePowered/Ore
|
https://api.github.com/repos/SpongePowered/Ore
|
closed
|
Creating a new project resets to beginning after "invite members" page
|
component: backend priority: high type: bug
|
Browser & Version: Chrome Canary - Version 69.0.3481.0 (Official Build) canary (64-bit)
Operating System: macOS High Sierra - Version 10.13.6 Beta and Windows 10 Build 17692
Error message (if applicable): While I am uploading a new project, after the invite members page, I get sent to the beginning of project creation again.
From here:
<img width="1493" alt="screen shot 2018-07-04 at 8 09 21 pm" src="https://user-images.githubusercontent.com/18372958/42297651-93f24e6e-7fc6-11e8-9d6f-5f45cb9f9fdf.png">
To here:
<img width="1473" alt="screen shot 2018-07-04 at 8 09 31 pm" src="https://user-images.githubusercontent.com/18372958/42297653-9585b72a-7fc6-11e8-9ea9-d77fc253d28e.png">
Steps to reproduce:
* Go to create a project
* Upload jar and sig
* Fill up necessary information(I chose IchorPowered as owner)
* Go to add member page(either add somebody or don't and click next)
The project I was trying to upload is here - https://github.com/ichorpowered/eggers/tree/v0.1.0
|
1.0
|
Creating a new project resets to beginning after "invite members" page - Browser & Version: Chrome Canary - Version 69.0.3481.0 (Official Build) canary (64-bit)
Operating System: macOS High Sierra - Version 10.13.6 Beta and Windows 10 Build 17692
Error message (if applicable): While I am uploading a new project, after the invite members page, I get sent to the beginning of project creation again.
From here:
<img width="1493" alt="screen shot 2018-07-04 at 8 09 21 pm" src="https://user-images.githubusercontent.com/18372958/42297651-93f24e6e-7fc6-11e8-9d6f-5f45cb9f9fdf.png">
To here:
<img width="1473" alt="screen shot 2018-07-04 at 8 09 31 pm" src="https://user-images.githubusercontent.com/18372958/42297653-9585b72a-7fc6-11e8-9ea9-d77fc253d28e.png">
Steps to reproduce:
* Go to create a project
* Upload jar and sig
* Fill up necessary information(I chose IchorPowered as owner)
* Go to add member page(either add somebody or don't and click next)
The project I was trying to upload is here - https://github.com/ichorpowered/eggers/tree/v0.1.0
|
priority
|
creating a new project resets to beginning after invite members page browser version chrome canary version official build canary bit operating system macos high sierra version beta and windows build error message if applicable while i am uploading a new project after the invite members page i get sent to the beginning of project creation again from here img width alt screen shot at pm src to here img width alt screen shot at pm src steps to reproduce go to create a project upload jar and sig fill up necessary information i chose ichorpowered as owner go to add member page either add somebody or don t and click next the project i was trying to upload is here
| 1
|
515,501
| 14,964,406,369
|
IssuesEvent
|
2021-01-27 11:56:12
|
Scholar-6/brillder
|
https://api.github.com/repos/Scholar-6/brillder
|
closed
|
Search should still be possible from subjects panel page
|
High Level Priority
|
Return only is core published bricks of course
<img width="1025" alt="Screenshot 2021-01-27 at 11 58 07" src="https://user-images.githubusercontent.com/59654112/105982206-46422400-6097-11eb-8a13-b3b6b570d6ff.png">
|
1.0
|
Search should still be possible from subjects panel page - Return only is core published bricks of course
<img width="1025" alt="Screenshot 2021-01-27 at 11 58 07" src="https://user-images.githubusercontent.com/59654112/105982206-46422400-6097-11eb-8a13-b3b6b570d6ff.png">
|
priority
|
search should still be possible from subjects panel page return only is core published bricks of course img width alt screenshot at src
| 1
|
115,481
| 4,675,068,193
|
IssuesEvent
|
2016-10-07 05:38:48
|
bespokeinteractive/maternityapp
|
https://api.github.com/repos/bespokeinteractive/maternityapp
|
opened
|
Input validation for maternity module not done
|
bug High priority
|
Input validation for maternity module not done
Task
-----------
Ensure input validation for maternity module from maternity triage, delivery room
|
1.0
|
Input validation for maternity module not done - Input validation for maternity module not done
Task
-----------
Ensure input validation for maternity module from maternity triage, delivery room
|
priority
|
input validation for maternity module not done input validation for maternity module not done task ensure input validation for maternity module from maternity triage delivery room
| 1
|
287,511
| 8,816,400,889
|
IssuesEvent
|
2018-12-30 10:10:16
|
elysium-project/classic-bug-tracker
|
https://api.github.com/repos/elysium-project/classic-bug-tracker
|
closed
|
keyring/bag does not prevent you from making more then 12 inv slots of keys, newly made keys will start a fresh stack in the hidden 13th slot, player is unable to retrieve them.
|
Confirmed High Priority
|
Hi, yesterday i made arcanite skeleton keys , seen the message about receiving them but they never actually appeared in my bags
|
1.0
|
keyring/bag does not prevent you from making more then 12 inv slots of keys, newly made keys will start a fresh stack in the hidden 13th slot, player is unable to retrieve them. - Hi, yesterday i made arcanite skeleton keys , seen the message about receiving them but they never actually appeared in my bags
|
priority
|
keyring bag does not prevent you from making more then inv slots of keys newly made keys will start a fresh stack in the hidden slot player is unable to retrieve them hi yesterday i made arcanite skeleton keys seen the message about receiving them but they never actually appeared in my bags
| 1
|
682,328
| 23,341,245,500
|
IssuesEvent
|
2022-08-09 14:11:39
|
neo4j/graphql
|
https://api.github.com/repos/neo4j/graphql
|
closed
|
Cypher generation syntax error
|
confirmed bug report high priority
|
**Describe the bug**
Neo4j EE 4.4.5
Neo4j/GraphQL 3.4.0
Customer is using `@cypher` directive in a type. The generated cypher statement when executed produces a syntax error. Visibly you can see that words are concatenated together, such as:
```
WHERE (this_hasFeedItems:`ContentPiece`ANDthis_hasFeedItems:`UNIVERSAL`)
```
You can see there is no space between `AND` and `this`.
Customer upgraded neo4j/graphql to 3.6.1 and problem still exists.
**Type definitions**
```graphql
type ContentPiece @node(additionalLabels: ["UNIVERSAL"]) {
uid: String! @unique
id: Int
}
type Project @node(additionalLabels: ["UNIVERSAL"]) {
uid: String! @unique
id: Int
}
type Community @node(additionalLabels: ["UNIVERSAL"]) {
uid: String! @unique
id: Int
hasContentPieces: [ContentPiece!]!
@relationship(type: "COMMUNITY_CONTENTPIECE_HASCONTENTPIECES", direction: OUT)
hasAssociatedProjects: [Project!]!
@relationship(type: "COMMUNITY_PROJECT_HASASSOCIATEDPROJECTS", direction: OUT)
}
extend type Community {
"""
Used on Community Landing Page
"""
hasFeedItems(limit: Int = 10, pageIndex: Int = 0): [FeedItem!]!
@cypher(
statement: """
Match(this)-[:COMMUNITY_CONTENTPIECE_HASCONTENTPIECES|:COMMUNITY_PROJECT_HASASSOCIATEDPROJECTS]-(pag) return pag SKIP ($limit * $pageIndex) LIMIT $limit
"""
)
}
union FeedItem = ContentPiece | Project
```
**To Reproduce**
Use this query:
``` graphql
query {
communities {
id
hasFeedItems {
... on ContentPiece {
id
}
... on Project {
id
}
}
}
}
```
it produces the following error:
```
Neo4jError: Invalid input 't' (line 2, column 404 (offset: 440))
"RETURN this { .id, hasFeedItems: apoc.coll.flatten([this_hasFeedItems IN apoc.cypher.runFirstColumnMany("Match(this)-[:COMMUNITY_CONTENTPIECE_HASCONTENTPIECES|:COMMUNITY_PROJECT_HASASSOCIATEDPROJECTS]-(pag) return pag SKIP ($limit * $pageIndex) LIMIT $limit", {this: this, auth: $auth, limit: $this_hasFeedItems_limit, pageIndex: $this_hasFeedItems_pageIndex}) WHERE (this_hasFeedItems:`ContentPiece`ANDthis_hasFeedItems:`UNIVERSAL`) OR (this_hasFeedItems:`Project`ANDthis_hasFeedItems:`UNIVERSAL`) | [ this_hasFeedItems IN [this_hasFeedItems] WHERE (this_hasFeedItems:`ContentPiece` AND this_hasFeedItems:`UNIVERSAL`) | this_hasFeedItems { __resolveType: "ContentPiece", .id } ] + [ this_hasFeedItems IN [this_hasFeedItems] WHERE (this_hasFeedItems:`Project` AND this_hasFeedItems:`UNIVERSAL`) | this_hasFeedItems { __resolveType: "Project", .id } ] ]) } as this"
```
Generated cypher:
```
Cypher:
MATCH (this:`Community`:`UNIVERSAL`)
RETURN this { .id, hasFeedItems: apoc.coll.flatten([this_hasFeedItems IN apoc.cypher.runFirstColumnMany("Match(this)-[:COMMUNITY_CONTENTPIECE_HASCONTENTPIECES|:COMMUNITY_PROJECT_HASASSOCIATEDPROJECTS]-(pag) return pag SKIP ($limit * $pageIndex) LIMIT $limit", {this: this, auth: $auth, limit: $this_hasFeedItems_limit, pageIndex: $this_hasFeedItems_pageIndex}) WHERE (this_hasFeedItems:`ContentPiece`ANDthis_hasFeedItems:`UNIVERSAL`) OR (this_hasFeedItems:`Project`ANDthis_hasFeedItems:`UNIVERSAL`) | [ this_hasFeedItems IN [this_hasFeedItems] WHERE (this_hasFeedItems:`ContentPiece` AND this_hasFeedItems:`UNIVERSAL`) | this_hasFeedItems { __resolveType: "ContentPiece", .id } ] + [ this_hasFeedItems IN [this_hasFeedItems] WHERE (this_hasFeedItems:`Project` AND this_hasFeedItems:`UNIVERSAL`) | this_hasFeedItems { __resolveType: "Project", .id } ] ]) } as this
Params:
{
"this_hasFeedItems_limit": {
"low": 10,
"high": 0
},
"this_hasFeedItems_pageIndex": {
"low": 0,
"high": 0
},
"auth": {
"isAuthenticated": false,
"roles": []
}
}
```
**Expected behavior**
No error.
|
1.0
|
Cypher generation syntax error - **Describe the bug**
Neo4j EE 4.4.5
Neo4j/GraphQL 3.4.0
Customer is using `@cypher` directive in a type. The generated cypher statement when executed produces a syntax error. Visibly you can see that words are concatenated together, such as:
```
WHERE (this_hasFeedItems:`ContentPiece`ANDthis_hasFeedItems:`UNIVERSAL`)
```
You can see there is no space between `AND` and `this`.
Customer upgraded neo4j/graphql to 3.6.1 and problem still exists.
**Type definitions**
```graphql
type ContentPiece @node(additionalLabels: ["UNIVERSAL"]) {
uid: String! @unique
id: Int
}
type Project @node(additionalLabels: ["UNIVERSAL"]) {
uid: String! @unique
id: Int
}
type Community @node(additionalLabels: ["UNIVERSAL"]) {
uid: String! @unique
id: Int
hasContentPieces: [ContentPiece!]!
@relationship(type: "COMMUNITY_CONTENTPIECE_HASCONTENTPIECES", direction: OUT)
hasAssociatedProjects: [Project!]!
@relationship(type: "COMMUNITY_PROJECT_HASASSOCIATEDPROJECTS", direction: OUT)
}
extend type Community {
"""
Used on Community Landing Page
"""
hasFeedItems(limit: Int = 10, pageIndex: Int = 0): [FeedItem!]!
@cypher(
statement: """
Match(this)-[:COMMUNITY_CONTENTPIECE_HASCONTENTPIECES|:COMMUNITY_PROJECT_HASASSOCIATEDPROJECTS]-(pag) return pag SKIP ($limit * $pageIndex) LIMIT $limit
"""
)
}
union FeedItem = ContentPiece | Project
```
**To Reproduce**
Use this query:
``` graphql
query {
communities {
id
hasFeedItems {
... on ContentPiece {
id
}
... on Project {
id
}
}
}
}
```
it produces the following error:
```
Neo4jError: Invalid input 't' (line 2, column 404 (offset: 440))
"RETURN this { .id, hasFeedItems: apoc.coll.flatten([this_hasFeedItems IN apoc.cypher.runFirstColumnMany("Match(this)-[:COMMUNITY_CONTENTPIECE_HASCONTENTPIECES|:COMMUNITY_PROJECT_HASASSOCIATEDPROJECTS]-(pag) return pag SKIP ($limit * $pageIndex) LIMIT $limit", {this: this, auth: $auth, limit: $this_hasFeedItems_limit, pageIndex: $this_hasFeedItems_pageIndex}) WHERE (this_hasFeedItems:`ContentPiece`ANDthis_hasFeedItems:`UNIVERSAL`) OR (this_hasFeedItems:`Project`ANDthis_hasFeedItems:`UNIVERSAL`) | [ this_hasFeedItems IN [this_hasFeedItems] WHERE (this_hasFeedItems:`ContentPiece` AND this_hasFeedItems:`UNIVERSAL`) | this_hasFeedItems { __resolveType: "ContentPiece", .id } ] + [ this_hasFeedItems IN [this_hasFeedItems] WHERE (this_hasFeedItems:`Project` AND this_hasFeedItems:`UNIVERSAL`) | this_hasFeedItems { __resolveType: "Project", .id } ] ]) } as this"
```
Generated cypher:
```
Cypher:
MATCH (this:`Community`:`UNIVERSAL`)
RETURN this { .id, hasFeedItems: apoc.coll.flatten([this_hasFeedItems IN apoc.cypher.runFirstColumnMany("Match(this)-[:COMMUNITY_CONTENTPIECE_HASCONTENTPIECES|:COMMUNITY_PROJECT_HASASSOCIATEDPROJECTS]-(pag) return pag SKIP ($limit * $pageIndex) LIMIT $limit", {this: this, auth: $auth, limit: $this_hasFeedItems_limit, pageIndex: $this_hasFeedItems_pageIndex}) WHERE (this_hasFeedItems:`ContentPiece`ANDthis_hasFeedItems:`UNIVERSAL`) OR (this_hasFeedItems:`Project`ANDthis_hasFeedItems:`UNIVERSAL`) | [ this_hasFeedItems IN [this_hasFeedItems] WHERE (this_hasFeedItems:`ContentPiece` AND this_hasFeedItems:`UNIVERSAL`) | this_hasFeedItems { __resolveType: "ContentPiece", .id } ] + [ this_hasFeedItems IN [this_hasFeedItems] WHERE (this_hasFeedItems:`Project` AND this_hasFeedItems:`UNIVERSAL`) | this_hasFeedItems { __resolveType: "Project", .id } ] ]) } as this
Params:
{
"this_hasFeedItems_limit": {
"low": 10,
"high": 0
},
"this_hasFeedItems_pageIndex": {
"low": 0,
"high": 0
},
"auth": {
"isAuthenticated": false,
"roles": []
}
}
```
**Expected behavior**
No error.
|
priority
|
cypher generation syntax error describe the bug ee graphql customer is using cypher directive in a type the generated cypher statement when executed produces a syntax error visibly you can see that words are concatenated together such as where this hasfeeditems contentpiece andthis hasfeeditems universal you can see there is no space between and and this customer upgraded graphql to and problem still exists type definitions graphql type contentpiece node additionallabels uid string unique id int type project node additionallabels uid string unique id int type community node additionallabels uid string unique id int hascontentpieces relationship type community contentpiece hascontentpieces direction out hasassociatedprojects relationship type community project hasassociatedprojects direction out extend type community used on community landing page hasfeeditems limit int pageindex int cypher statement match this pag return pag skip limit pageindex limit limit union feeditem contentpiece project to reproduce use this query graphql query communities id hasfeeditems on contentpiece id on project id it produces the following error invalid input t line column offset return this id hasfeeditems apoc coll flatten pag return pag skip limit pageindex limit limit this this auth auth limit this hasfeeditems limit pageindex this hasfeeditems pageindex where this hasfeeditems contentpiece andthis hasfeeditems universal or this hasfeeditems project andthis hasfeeditems universal where this hasfeeditems contentpiece and this hasfeeditems universal this hasfeeditems resolvetype contentpiece id where this hasfeeditems project and this hasfeeditems universal this hasfeeditems resolvetype project id as this generated cypher cypher match this community universal return this id hasfeeditems apoc coll flatten pag return pag skip limit pageindex limit limit this this auth auth limit this hasfeeditems limit pageindex this hasfeeditems pageindex where this hasfeeditems contentpiece andthis hasfeeditems universal or this hasfeeditems project andthis hasfeeditems universal where this hasfeeditems contentpiece and this hasfeeditems universal this hasfeeditems resolvetype contentpiece id where this hasfeeditems project and this hasfeeditems universal this hasfeeditems resolvetype project id as this params this hasfeeditems limit low high this hasfeeditems pageindex low high auth isauthenticated false roles expected behavior no error
| 1
|
442,808
| 12,750,875,686
|
IssuesEvent
|
2020-06-27 07:28:15
|
dirkwhoffmann/vAmiga
|
https://api.github.com/repos/dirkwhoffmann/vAmiga
|
closed
|
SWAP.W fails
|
Priority-High bug
|
I've started to create new tests with the latest version of cputester.
Here is one that fails: [SWAP.W.adf.zip](https://github.com/dirkwhoffmann/vAmiga/files/4703011/SWAP.W.adf.zip)
UAE: 👍
<img width="721" alt="Bildschirmfoto 2020-05-29 um 18 30 55" src="https://user-images.githubusercontent.com/12561945/83283154-0874bf00-a1db-11ea-95ce-0a256947e7fa.png">
vAmiga: 😬
<img width="968" alt="Bildschirmfoto 2020-05-29 um 18 31 05" src="https://user-images.githubusercontent.com/12561945/83283190-1591ae00-a1db-11ea-8e90-fba5e11af699.png">
Settings for this test:
```
cpu=68000-68010
mode=nop,ext,swap
feature_interrupts=1
feature_sr_mask=0xA000
```
Update: The simpler NOP test also fails:
[NOP.adf.zip](https://github.com/dirkwhoffmann/vAmiga/files/4703069/NOP.adf.zip)
<img width="968" alt="Bildschirmfoto 2020-05-29 um 19 20 34" src="https://user-images.githubusercontent.com/12561945/83287214-90f65e00-a1e1-11ea-8457-e85cc26b0018.png">
Conclusion: There is an issue if an IRQ hits when trace mode is on.
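For reference, the ordering the tests appear to exercise can be sketched as below. This is a hedged sketch in Python, not vAmiga's actual implementation: per the M68000 manual's exception priority groups, a pending trace exception at an instruction boundary is serviced before a pending interrupt, and the interrupt stays pending and is taken afterwards. The function and parameter names are hypothetical.

```python
def service_pending_exceptions(trace_pending: bool, irq_level: int, irq_mask: int) -> list:
    """Return the order in which pending exceptions are serviced at an
    instruction boundary on the 68000 (sketch).

    Trace has higher priority than interrupts within exception group 1,
    so when both are pending, trace is serviced first and the interrupt
    is taken afterwards. Level 7 is non-maskable.
    """
    order = []
    if trace_pending:
        order.append("trace")
    if irq_level == 7 or irq_level > irq_mask:
        order.append("interrupt level %d" % irq_level)
    return order
```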
|
1.0
|
SWAP.W fails - I've started to create new tests with the latest version of cputester.
Here is one that fails: [SWAP.W.adf.zip](https://github.com/dirkwhoffmann/vAmiga/files/4703011/SWAP.W.adf.zip)
UAE: 👍
<img width="721" alt="Bildschirmfoto 2020-05-29 um 18 30 55" src="https://user-images.githubusercontent.com/12561945/83283154-0874bf00-a1db-11ea-95ce-0a256947e7fa.png">
vAmiga: 😬
<img width="968" alt="Bildschirmfoto 2020-05-29 um 18 31 05" src="https://user-images.githubusercontent.com/12561945/83283190-1591ae00-a1db-11ea-8e90-fba5e11af699.png">
Settings for this test:
```
cpu=68000-68010
mode=nop,ext,swap
feature_interrupts=1
feature_sr_mask=0xA000
```
Update: The simpler NOP test also fails:
[NOP.adf.zip](https://github.com/dirkwhoffmann/vAmiga/files/4703069/NOP.adf.zip)
<img width="968" alt="Bildschirmfoto 2020-05-29 um 19 20 34" src="https://user-images.githubusercontent.com/12561945/83287214-90f65e00-a1e1-11ea-8457-e85cc26b0018.png">
Conclusion: There is an issue if an IRQ hits when trace mode is on.
|
priority
|
swap w fails i ve started to create new tests with the latest version of cputester here is a one that fails uae 👍 img width alt bildschirmfoto um src vamiga 😬 img width alt bildschirmfoto um src settings for this test cpu mode nop ext swap feature interrupts feature sr mask update the simpler nop test also fails img width alt bildschirmfoto um src conclusion there is an issue if an irq hits when trace mode is on
| 1
|
442,101
| 12,737,542,575
|
IssuesEvent
|
2020-06-25 18:58:00
|
chocolatey/chocolatey-licensed-issues
|
https://api.github.com/repos/chocolatey/chocolatey-licensed-issues
|
closed
|
Central Management - Web - Do not recreate website w/bindings on upgrade
|
3 - Done Bug CentralManagement Edition - Business Priority_HIGH
|
### What You Are Seeing?
If you've updated website bindings to include SSL/TLS bindings, on upgrade those are wiped out.
### What is Expected?
Those should be left untouched.
### How Did You Get This To Happen? (Steps to Reproduce)
* Install CCM Web
* Add an additional binding to the website in IIS
* Upgrade CCM Web to a newer version
* Note that any adjustments you made are wiped out
## References
* [Internal Ticket](https://gitlab.com/chocolatey/choco-licensed-management-ui/-/issues/419)
* [Zendesk Ticket](https://chocolatey.zendesk.com/agent/tickets/7131)
|
1.0
|
Central Management - Web - Do not recreate website w/bindings on upgrade - ### What You Are Seeing?
If you've updated website bindings to include SSL/TLS bindings, on upgrade those are wiped out.
### What is Expected?
Those should be left untouched.
### How Did You Get This To Happen? (Steps to Reproduce)
* Install CCM Web
* Add an additional binding to the website in IIS
* Upgrade CCM Web to a newer version
* Note that any adjustments you made are wiped out
## References
* [Internal Ticket](https://gitlab.com/chocolatey/choco-licensed-management-ui/-/issues/419)
* [Zendesk Ticket](https://chocolatey.zendesk.com/agent/tickets/7131)
|
priority
|
central management web do not recreate website w bindings on upgrade what you are seeing if you ve updated website bindings to include ssl tls bindings on upgrade those are wiped out what is expected those should be left untouched how did you get this to happen steps to reproduce install ccm web add an additional binding to the website in iis upgrade ccm web to a newer version note that any adjustments you made are wiped out references
| 1
|
741,035
| 25,777,759,912
|
IssuesEvent
|
2022-12-09 13:26:44
|
bounswe/bounswe2022group4
|
https://api.github.com/repos/bounswe/bounswe2022group4
|
closed
|
Frontend: Implement Comment and Comment Box Components
|
Category - To Do Priority - High whom: individual Difficulty - Hard Language - React.js Team - Frontend
|
I need to implement a Comment component that lets users view a single comment, and a CommentBox component that lets users view all comments under a post.
Steps:
1) Make research on expandable components
2) Implement a Comment Component
3) Implement CommentBox Component by rendering all related comments about the post
4) Connect CommentBox Component with related post
5) Implement a UI with a "show comments" button that expands the post and renders its comments
Deadline: 03.12.2022 23.59
Reviewer: @BeratDamar
|
1.0
|
Frontend: Implement Comment and Comment Box Components - I need to implement a Comment component that lets users view a single comment, and a CommentBox component that lets users view all comments under a post.
Steps:
1) Make research on expandable components
2) Implement a Comment Component
3) Implement CommentBox Component by rendering all related comments about the post
4) Connect CommentBox Component with related post
5) Implement a UI with a "show comments" button that expands the post and renders its comments
Deadline: 03.12.2022 23.59
Reviewer: @BeratDamar
|
priority
|
frontend implement comment and comment box components i need to implement a comment that allows user to view single comment and comment box components that allow users to view all comments under a post steps make research on expandable components implement a comment component implement commentbox component by rendering all related comments about the post connect commentbox component with related post implement a ui that allows show comments button in the post expanding the post and rendering the comments deadline reviewer beratdamar
| 1
|
618,624
| 19,477,092,902
|
IssuesEvent
|
2021-12-24 14:54:30
|
bounswe/2021SpringGroup10
|
https://api.github.com/repos/bounswe/2021SpringGroup10
|
opened
|
Backend: Change the Privacy of the Community
|
Type: Enhancement Priority: High Coding: Backend Database
|
An admin should be able to change the privacy value of the community. If the previous privacy value was true, all registered users who had requested a subscription should be assigned as subscribers as a result of the privacy change.
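The described rule can be sketched as follows. This is a minimal Python sketch with a hypothetical data model (a dict with `is_private`, `subscribers`, and `subscription_requests` fields), not the project's actual backend code:

```python
def set_privacy(community: dict, new_private: bool) -> None:
    """Change a community's privacy value.

    If the community was private and becomes public, every registered
    user who had requested a subscription is promoted to subscriber,
    and the pending requests are cleared.
    """
    was_private = community["is_private"]
    community["is_private"] = new_private
    if was_private and not new_private:
        community["subscribers"].update(community["subscription_requests"])
        community["subscription_requests"].clear()
```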
|
1.0
|
Backend: Change the Privacy of the Community - An admin should be able to change the privacy value of the community. If the previous privacy value was true, all registered users who had requested a subscription should be assigned as subscribers as a result of the privacy change.
|
priority
|
backend change the privacy of the community an admin should be able to change the privacy value of the community if the previous privacy value was true all the subscription requester registered users should be assigned as subscribers because of the change of the privacy
| 1
|
793,226
| 27,987,340,470
|
IssuesEvent
|
2023-03-26 20:48:16
|
FTC7393/FtcRobotController
|
https://api.github.com/repos/FTC7393/FtcRobotController
|
closed
|
Coordinate Fetcher and Lever movement for vertical motion
|
enhancement Auto Extremely high priority
|
For some portion of the Lever travel, it would be very helpful to coordinate the motion of the Fetcher so that the Grabber travels in a vertical line. This requires a few steps:
1. Predict the location of the Lever based on the series of commands we give it. This requires measuring the speed as it goes up and down on each side of the "up" position.
2. Compute the "fetcher offset" using the lever length and angle cosine.
3. Add the fetcher offset to the commanded fetcher position (will be a negative number) and send this as the actual fetcher command.
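Steps 2 and 3 can be sketched as below. This is a hedged Python sketch, not the team's robot code: the function names, units, and sign convention are assumptions (the only constraint taken from the issue is that the offset comes from the lever length and the angle cosine, and that the added offset is a negative number):

```python
import math

def fetcher_offset(lever_length: float, lever_angle_rad: float) -> float:
    # Step 2: offset from the lever length and the cosine of its angle.
    # Sign convention per the issue text: the result is negative.
    return -lever_length * math.cos(lever_angle_rad)

def actual_fetcher_command(commanded: float, lever_length: float,
                           lever_angle_rad: float) -> float:
    # Step 3: add the (negative) offset to the commanded fetcher
    # position so the grabber stays on a vertical line.
    return commanded + fetcher_offset(lever_length, lever_angle_rad)
```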
|
1.0
|
Coordinate Fetcher and Lever movement for vertical motion - For some portion of the Lever travel, it would be very helpful to coordinate the motion of the Fetcher so that the Grabber travels in a vertical line. This requires a few steps:
1. Predict the location of the Lever based on the series of commands we give it. This requires measuring the speed as it goes up and down on each side of the "up" position.
2. Compute the "fetcher offset" using the lever length and angle cosine.
3. Add the fetcher offset to the commanded fetcher position (will be a negative number) and send this as the actual fetcher command.
|
priority
|
coordinate fetcher and lever movement for vertical motion for some portion of the lever travel it would be very helpful to coordinate the motion of the fetcher so that the grabber travels in a vertical line this requires a few steps predict the location of the lever based on the series of commands we give it this requires measuring the speed as it goes up and down on each side of the up position compute the fetcher offset using the lever length and angle cosine add the fetcher offset to the commanded fetcher position will be a negative number and send this as the actual fetcher command
| 1
|
126,510
| 4,996,738,769
|
IssuesEvent
|
2016-12-09 14:50:09
|
CommonCreative/vbn-redesign
|
https://api.github.com/repos/CommonCreative/vbn-redesign
|
closed
|
Devcards remains mounted after being initialised
|
High Priority
|
Figure out how to make devcards only show on devcards.html
|
1.0
|
Devcards remains mounted after being initialised - Figure out how to make devcards only show on devcards.html
|
priority
|
devcards remains mounted after being initialised figure out how to make devcards only show on devcards html
| 1
|
273,618
| 8,550,867,937
|
IssuesEvent
|
2018-11-07 16:32:31
|
metasfresh/metasfresh-webui-api
|
https://api.github.com/repos/metasfresh/metasfresh-webui-api
|
closed
|
Label Element filters not working
|
branch:master priority:high type:bug
|
### Is this a bug or feature request?
* Bug
### What is the current behavior?
* If I filter by a label element (e.g. by Attribute in the Business Partner window), I get an error and then a loading screen until I refresh the page.

<details/>
<summary> Stack trace </summary>
de.metas.ui.web.window.datatypes.DataTypes$ValueConversionException: no conversion rule defined to convert the value to target type
Additional parameters:
fieldName: Labels_548674
value:
valueClass: class java.lang.String
targetType: class de.metas.ui.web.window.datatypes.LookupValuesList
widgetType: Labels
lookupDataSource: LookupDataSourceAdapter{de.metas.ui.web.window.model.lookup.LabelsLookup@163390f7}
at de.metas.ui.web.window.datatypes.DataTypes.convertToLookupValuesList(DataTypes.java:551)
at de.metas.ui.web.window.datatypes.DataTypes.convertToValueClass(DataTypes.java:175)
at de.metas.ui.web.document.filter.DocumentFilterParamDescriptor.convertValueToFieldType(DocumentFilterParamDescriptor.java:135)
at de.metas.ui.web.document.filter.DocumentFilterParamDescriptor.convertValueFromJson(DocumentFilterParamDescriptor.java:130)
at de.metas.ui.web.document.filter.json.JSONDocumentFilter.unwrapUsingDescriptor(JSONDocumentFilter.java:118)
at de.metas.ui.web.document.filter.json.JSONDocumentFilter.unwrap(JSONDocumentFilter.java:75)
at de.metas.ui.web.document.filter.json.JSONDocumentFilter.lambda$0(JSONDocumentFilter.java:57)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.Collections$2.tryAdvance(Collections.java:4717)
at java.util.Collections$2.forEachRemaining(Collections.java:4725)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at de.metas.ui.web.document.filter.json.JSONDocumentFilter.unwrapList(JSONDocumentFilter.java:59)
at de.metas.ui.web.view.DefaultView$Builder.setFiltersFromJSON(DefaultView.java:803)
at de.metas.ui.web.view.SqlViewFactory.createView(SqlViewFactory.java:301)
at de.metas.ui.web.view.IViewFactory.filterView(IViewFactory.java:60)
at de.metas.ui.web.view.ViewsRepository.filterView(ViewsRepository.java:284)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
at com.sun.proxy.$Proxy119.filterView(Unknown Source)
at de.metas.ui.web.view.ViewRestController.filterView(ViewRestController.java:183)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:963)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:897)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:661)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.boot.web.filter.ApplicationContextHeaderFilter.doFilterInternal(ApplicationContextHeaderFilter.java:55)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at de.metas.ui.web.config.WebConfig$1.doFilter(WebConfig.java:82)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at de.metas.ui.web.config.ServletLoggingFilter.doFilter(ServletLoggingFilter.java:89)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at de.metas.ui.web.config.CORSFilter.doFilter(CORSFilter.java:79)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.boot.actuate.trace.WebRequestTraceFilter.doFilterInternal(WebRequestTraceFilter.java:110)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:105)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:81)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:167)
at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.boot.actuate.autoconfigure.MetricsFilter.doFilterInternal(MetricsFilter.java:106)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:478)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:799)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:861)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1455)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
</details>
### Which are the steps to reproduce?
* Go to the window Business Partner, select the Attribute filter, enter an attribute from the list, press OK.
### What is the expected or desired behavior?
* There should be no error, and all entries with that attribute should be retrieved.
|
1.0
|
Label Element filters not working - ### Is this a bug or feature request?
* Bug
### What is the current behavior?
* If I filter by a label element (e.g. by Attribute in the Business Partner window), I get an error and then a loading screen until I refresh the page.

<details/>
<summary> Stack trace </summary>
de.metas.ui.web.window.datatypes.DataTypes$ValueConversionException: no conversion rule defined to convert the value to target type
Additional parameters:
fieldName: Labels_548674
value:
valueClass: class java.lang.String
targetType: class de.metas.ui.web.window.datatypes.LookupValuesList
widgetType: Labels
lookupDataSource: LookupDataSourceAdapter{de.metas.ui.web.window.model.lookup.LabelsLookup@163390f7}
at de.metas.ui.web.window.datatypes.DataTypes.convertToLookupValuesList(DataTypes.java:551)
at de.metas.ui.web.window.datatypes.DataTypes.convertToValueClass(DataTypes.java:175)
at de.metas.ui.web.document.filter.DocumentFilterParamDescriptor.convertValueToFieldType(DocumentFilterParamDescriptor.java:135)
at de.metas.ui.web.document.filter.DocumentFilterParamDescriptor.convertValueFromJson(DocumentFilterParamDescriptor.java:130)
at de.metas.ui.web.document.filter.json.JSONDocumentFilter.unwrapUsingDescriptor(JSONDocumentFilter.java:118)
at de.metas.ui.web.document.filter.json.JSONDocumentFilter.unwrap(JSONDocumentFilter.java:75)
at de.metas.ui.web.document.filter.json.JSONDocumentFilter.lambda$0(JSONDocumentFilter.java:57)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.Collections$2.tryAdvance(Collections.java:4717)
at java.util.Collections$2.forEachRemaining(Collections.java:4725)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at de.metas.ui.web.document.filter.json.JSONDocumentFilter.unwrapList(JSONDocumentFilter.java:59)
at de.metas.ui.web.view.DefaultView$Builder.setFiltersFromJSON(DefaultView.java:803)
at de.metas.ui.web.view.SqlViewFactory.createView(SqlViewFactory.java:301)
at de.metas.ui.web.view.IViewFactory.filterView(IViewFactory.java:60)
at de.metas.ui.web.view.ViewsRepository.filterView(ViewsRepository.java:284)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
at com.sun.proxy.$Proxy119.filterView(Unknown Source)
at de.metas.ui.web.view.ViewRestController.filterView(ViewRestController.java:183)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:963)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:897)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:661)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.boot.web.filter.ApplicationContextHeaderFilter.doFilterInternal(ApplicationContextHeaderFilter.java:55)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at de.metas.ui.web.config.WebConfig$1.doFilter(WebConfig.java:82)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at de.metas.ui.web.config.ServletLoggingFilter.doFilter(ServletLoggingFilter.java:89)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at de.metas.ui.web.config.CORSFilter.doFilter(CORSFilter.java:79)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.boot.actuate.trace.WebRequestTraceFilter.doFilterInternal(WebRequestTraceFilter.java:110)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:105)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:81)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:167)
at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.boot.actuate.autoconfigure.MetricsFilter.doFilterInternal(MetricsFilter.java:106)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:478)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:799)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:861)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1455)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
</details>
### What are the steps to reproduce?
* Go to the window Business Partner, select the Attribute filter, enter an attribute from the list, press OK.
### What is the expected or desired behavior?
* There should be no error, and all the entries with that attribute should be retrieved.
|
priority
| 1
|
475,861
| 13,727,079,126
|
IssuesEvent
|
2020-10-04 04:07:22
|
AY2021S1-TIC4001-2/tp
|
https://api.github.com/repos/AY2021S1-TIC4001-2/tp
|
opened
|
As a user I can list all income categories
|
priority.High type.Story
|
... so that I can view the income categories in the app.
|
1.0
|
priority
| 1
|
556,261
| 16,480,042,417
|
IssuesEvent
|
2021-05-24 10:25:57
|
HEPData/hepdata
|
https://api.github.com/repos/HEPData/hepdata
|
closed
|
model: remove participants from HEPSubmission object
|
complexity: medium priority: high type: bug
|
There is a `SubmissionParticipant` object defined in `hepdata/modules/permissions/models.py`. The `HEPSubmission` object in `hepdata/modules/submission/models.py` also defines a `participants` column which links to the `SubmissionParticipant` object:
https://github.com/HEPData/hepdata/blob/2f6526dd2b33bf0f5700bd206dcfb97d2bc81695/hepdata/modules/submission/models.py#L32-L38
https://github.com/HEPData/hepdata/blob/2f6526dd2b33bf0f5700bd206dcfb97d2bc81695/hepdata/modules/submission/models.py#L90-L92
The `SubmissionParticipant` is independent of a version, but `HEPSubmission` depends on a version and the `participants` are not copied when creating a new version:
https://github.com/HEPData/hepdata/blob/2f6526dd2b33bf0f5700bd206dcfb97d2bc81695/hepdata/modules/records/api.py#L355-L359
The code queries both the `SubmissionParticipant` table directly and the `participants` column of the `HEPSubmission` table (for example, for the Dashboard display) leading to some inconsistencies for versions of 2 or greater. It would be simpler to only use the `SubmissionParticipant` table throughout the code and remove the `participants` column of the `HEPSubmission` table. After the code is modified to remove use of the `participants` column, perhaps the DB cleanup could be done at the same time as #139 (?).
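The single-source-of-truth approach proposed above can be sketched with stdlib `sqlite3`. This is a toy schema, not HEPData's actual DDL: the `publication_recid` link column and table names are assumptions for illustration. The point is that every submission version resolves participants through one version-independent table, so nothing needs to be copied when a new version is created.

```python
import sqlite3

# Toy schema (not HEPData's actual one): participants stored once, keyed by
# publication id, instead of copied into a per-version participants column.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE submission (publication_recid INT, version INT)")
con.execute("CREATE TABLE participant (publication_recid INT, user_account TEXT)")
con.execute("INSERT INTO submission VALUES (1, 1), (1, 2)")
con.execute("INSERT INTO participant VALUES (1, 'alice')")

# Every version answers "who are the participants?" with the same query,
# so a new version cannot silently lose participants the way an uncopied
# relationship column can.
for (version,) in con.execute(
        "SELECT version FROM submission WHERE publication_recid = 1"):
    rows = con.execute(
        "SELECT user_account FROM participant WHERE publication_recid = 1"
    ).fetchall()
    print(version, [r[0] for r in rows])
```

With only one query path, the dashboard-style inconsistencies for version 2+ described above cannot arise.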
|
1.0
|
priority
| 1
|
298,842
| 9,201,768,703
|
IssuesEvent
|
2019-03-07 20:31:57
|
blackbaud/skyux-datetime
|
https://api.github.com/repos/blackbaud/skyux-datetime
|
closed
|
Create a custom `SkyDatePipe`
|
Priority: High Type: Enhancement
|
Create a custom pipe to handle localized dates asynchronously (Angular's `DatePipe` only supports synchronous locale setup).
The pipe should also work flawlessly with SKY UX's native `SkyAppLocaleProvider` service.
|
1.0
|
priority
| 1
|
372,688
| 11,019,223,428
|
IssuesEvent
|
2019-12-05 12:12:22
|
itachi1706/SingBuses
|
https://api.github.com/repos/itachi1706/SingBuses
|
closed
|
Remove Pebble support
|
android enhancement pebble priority:high
|
Let's face it. Most of the Pebble stuff has been gone or removed for a while now. It makes no logical sense to retain it in the app, especially because Android Q will remove the storage permissions necessary to store the Pebble software on the SD card so that the Pebble app can read it. Hence let's remove it. Android Wear support is still planned as a replacement. Maybe.
Goodbye Pebble. You were good while you lasted 😭
**TODO**
- [ ] Remove Pebble code from Android app (we can move it to somewhere I guess in a zip file)
- [ ] Remove Pebble C code from repo (store in zip file)
- [ ] Move those zip file to a /pebble folder
- [ ] Remove/Hide the Companion App Install button on the main menu
- [ ] Hide the Companion App selection options in the settings menu
- [ ] Test to make sure nothing breaks further
_Sent from my LG G7 ThinQ using [FastHub](https://play.google.com/store/apps/details?id=com.fastaccess.github)_
|
1.0
|
priority
| 1
|
742,254
| 25,845,455,016
|
IssuesEvent
|
2022-12-13 05:58:14
|
ChaosInitiative/Portal-2-Community-Edition
|
https://api.github.com/repos/ChaosInitiative/Portal-2-Community-Edition
|
closed
|
Coop community map partner soundscripts missing
|
Type: bug Focus: workshop/ugc Component: audio Compat: Portal 2 Priority 2: High Focus: Co-Op
|
Custom music and certain assets don't get loaded properly for a non-host player.
|
1.0
|
priority
| 1
|
342,980
| 10,324,144,888
|
IssuesEvent
|
2019-09-01 06:13:30
|
OpenSRP/opensrp-client-chw
|
https://api.github.com/repos/OpenSRP/opensrp-client-chw
|
closed
|
Change "Communauté" to "Communautés" on login page for Togo, Guinea, DRC, and Chad apps
|
Chad DRC Guinea Togo enhancement high priority
|

|
1.0
|
priority
| 1
|
704,723
| 24,207,315,686
|
IssuesEvent
|
2022-09-25 12:20:11
|
starwolves/space
|
https://api.github.com/repos/starwolves/space
|
closed
|
Add a stun stick
|
planned high priority
|
Add a new combat item that stuns players. It should also get a new physics pipeline query because, unlike fist melee attacks, stun stick attacks swing, so they deal damage in an arc pattern and not in a stab pattern.
|
1.0
|
priority
| 1
|
70,252
| 3,321,613,811
|
IssuesEvent
|
2015-11-09 09:59:41
|
OCHA-DAP/hdx-ckan
|
https://api.github.com/repos/OCHA-DAP/hdx-ckan
|
closed
|
user related database issue with duplicating rows
|
bug Priority-High
|
It seems that sometimes duplicates appear in the database for **validation_tokens** table and for **user_extra** table. Check below:
```
select * from validation_tokens where user_id='77ba0d4e-bc8d-4ae5-b0e2-848feafadadf';
id | user_id | token | valid
--------------------------------------+--------------------------------------+----------------------------------+-------
4a3cbc86-b90a-47d4-8ea6-b845b987d676 | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | TOKEN | f
6ac999b9-1bae-4cf0-93ec-817546f166b2 | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | TOKEN | f
(2 rows)
```
```
select * from user_extra where user_id='77ba0d4e-bc8d-4ae5-b0e2-848feafadadf' order BY key;
id | user_id | key | value
--------------------------------------+--------------------------------------+--------------------------------+-------
547a5361-7a67-4d80-a784-708a16f35f0c | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_details | False
18e44af9-45d2-422c-930c-54b9b4d26866 | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_details | False
d3abbc84-4f0f-41d7-938e-0dc183c3006f | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_first_login | False
99ce53ef-777a-48e0-af69-4f977ed1607f | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_first_login | False
05771dff-174f-4ef0-a898-2e5a2e9c4680 | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_follows | False
1027b781-ad45-49b4-8f15-6a89aaa91650 | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_follows | False
3de05827-617a-4874-9891-31392d2d9cff | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_friends | False
a675f153-7aaf-47fb-9fe7-89755e9b7eb2 | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_friends | False
f5ea82a1-fc55-47ab-879d-e9f85f746466 | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_org | False
5ec22342-9618-41fb-9fb9-f52ec27575d5 | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_org | False
52b6481b-8266-4a72-82e4-440a50e656c9 | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_user_registered | True
cf59714b-1ade-4e39-a1c7-d50a306eec41 | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_user_registered | True
902d4459-df60-4376-9300-ca1ea4965127 | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_user_validated | False
40446cd5-0528-43a6-925c-ff5efb7d385b | 77ba0d4e-bc8d-4ae5-b0e2-848feafadadf | hdx_onboarding_user_validated | False
```
Apart from checking how these duplicates managed to appear, we should also add unique constraints to the SQLAlchemy models / database so that this can't happen again.
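The proposed unique constraint can be sketched in plain SQL via stdlib `sqlite3`. Table and column names follow the issue's query output; this is not the real hdx-ckan DDL or migration, just a demonstration that a composite `UNIQUE (user_id, key)` makes the database itself reject a second `user_extra` row:

```python
import sqlite3

# Sketch of the proposed fix: a composite UNIQUE constraint so a duplicate
# (user_id, key) row in user_extra is rejected at the database level.
# Schema is illustrative only, modeled on the columns shown in the issue.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE user_extra (
        id      TEXT PRIMARY KEY,
        user_id TEXT NOT NULL,
        key     TEXT NOT NULL,
        value   TEXT,
        UNIQUE (user_id, key)
    )
""")
con.execute(
    "INSERT INTO user_extra VALUES ('a', 'u1', 'hdx_onboarding_details', 'False')")
try:
    # Second row for the same (user_id, key) pair: the constraint fires.
    con.execute(
        "INSERT INTO user_extra VALUES ('b', 'u1', 'hdx_onboarding_details', 'False')")
except sqlite3.IntegrityError as exc:
    print("duplicate rejected:", exc)
```

The same constraint would be expressed on the model side (e.g. with SQLAlchemy's `UniqueConstraint` in `__table_args__`) plus an Alembic-style migration for the existing tables; a one-off cleanup of the rows already duplicated would have to run first or the constraint cannot be created.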
|
1.0
|
priority
| 1
|
130,149
| 5,110,289,616
|
IssuesEvent
|
2017-01-05 23:38:24
|
bethlakshmi/GBE2
|
https://api.github.com/repos/bethlakshmi/GBE2
|
closed
|
http://www.burlesque-expo.com/reports/staff_area displays only shows
|
bug Estimation High Priority
|
http://www.burlesque-expo.com/reports/staff_area is supposed to be the place where staff leads can go to get their reports, but it only shows this year's shows. Where are the staff areas listed?
Related to #685
|
1.0
|
priority
| 1
|
504,935
| 14,624,091,512
|
IssuesEvent
|
2020-12-23 05:24:26
|
ucfopen/Obojobo
|
https://api.github.com/repos/ucfopen/Obojobo
|
closed
|
yarn install fails due to missing pinned commit in ims-lti dependency on v11.1.0
|
bug dependencies high priority
|
Some time today, yarn install started failing:
> error Couldn’t find match for “d535b4c4aff8a43775d3d9c1f2069f0593b7779f” in “refs/heads/master,refs/tags/1.0.0,refs/tags/2.0.0,refs/tags/2.0.1,refs/tags/2.1.0,refs/tags/2.1.1,refs/tags/v2.1.2,refs/tags/v2.1.4,refs/tags/v2.1.5,refs/tags/v2.1.6,refs/tags/v3.0.0,refs/tags/v3.0.1,refs/tags/v3.0.2" for “https://github.com/ChristianMurphy/ims-lti.git”.
Github is displaying a warning message when one tries to view the commit here https://github.com/ChristianMurphy/ims-lti/commit/d535b4c4aff8a43775d3d9c1f2069f0593b7779f
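One possible remediation, sketched in Python for illustration: repoint the git dependency from the vanished pinned commit to one of the tags yarn's error message says the remote still advertises. The `ims-lti` dependency string shape is an assumption about how Obojobo's package.json pins it, and `v3.0.2` is simply the highest tag listed in the error above:

```python
import json

# Hypothetical fix sketch: rewrite a git dependency pinned to a commit that
# no longer exists on the remote, pointing it at a published tag instead.
pkg = {"dependencies": {
    "ims-lti": ("git+https://github.com/ChristianMurphy/ims-lti.git"
                "#d535b4c4aff8a43775d3d9c1f2069f0593b7779f"),
}}

dep = pkg["dependencies"]["ims-lti"]
base, _, _dead_pin = dep.partition("#")   # split off the unreachable commit
pkg["dependencies"]["ims-lti"] = base + "#v3.0.2"  # tag from yarn's refs list
print(json.dumps(pkg, indent=2))
```

Pinning to a tag (or forking the repo so the commit stays reachable) avoids this failure mode entirely: a raw commit pin only resolves while the remote still advertises a ref containing it.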
|
1.0
|
priority
| 1
|
802,445
| 28,962,678,104
|
IssuesEvent
|
2023-05-10 04:55:03
|
medic/cht-core
|
https://api.github.com/repos/medic/cht-core
|
closed
|
Haproxy crashes instance during scalability test due to high memory usage
|
Type: Bug Priority: 1 - High
|
**Describe the bug**
I started a scalability test, with 100 concurrent users, and haproxy crashed while the test was running. The behavior was that:
- API returns 500 errors for every request
- haproxy docker container was still running (and not restarting)
- last haproxy log reported all workers had been killed
**To Reproduce**
Steps to reproduce the behavior:
1. Start scalability test against current `master`
2. Watch as scalability test ends with errors
**Expected behavior**
Haproxy should not crash
**Logs**
Beginning of haproxy log - to check that we're indeed on the latest config:
```
Starting enhanced syslogd: rsyslogd.
global
maxconn 150000
spread-checks 5
lua-load-per-thread /usr/local/etc/haproxy/parse_basic.lua
lua-load-per-thread /usr/local/etc/haproxy/parse_cookie.lua
lua-load-per-thread /usr/local/etc/haproxy/replace_password.lua
log stdout len 65535 local2 debug
tune.bufsize 1638400
tune.http.maxhdr 1010
```
API Error:
```
REQ 20d65ffb-3c6a-4290-af76-5bd28d9a5cf3 35.178.234.114 ac2 GET /medic/_changes?style=all_docs&heartbeat=10000&since=0&limit=100 HTTP/1.0
2023-04-04 14:18:54 ERROR: Server error: RequestError: Error: connect ECONNREFUSED 10.0.1.11:5984
at new RequestError (/api/node_modules/request-promise-core/lib/errors.js:14:15)
at Request.plumbing.callback (/api/node_modules/request-promise-core/lib/plumbing.js:87:29)
at Request.RP$callback [as _callback] (/api/node_modules/request-promise-core/lib/plumbing.js:46:31)
at self.callback (/api/node_modules/request/request.js:185:22)
at Request.emit (node:events:513:28)
at Request.onRequestError (/api/node_modules/request/request.js:877:8)
at ClientRequest.emit (node:events:513:28)
at Socket.socketErrorListener (node:_http_client:494:9)
at Socket.emit (node:events:513:28)
at emitErrorNT (node:internal/streams/destroy:157:8) {
name: 'RequestError',
message: 'Error: connect ECONNREFUSED 10.0.1.11:5984',
cause: Error: connect ECONNREFUSED 10.0.1.11:5984
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16) {
[stack]: 'Error: connect ECONNREFUSED 10.0.1.11:5984\n' +
' at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16)',
[message]: 'connect ECONNREFUSED 10.0.1.11:5984',
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '10.0.1.11',
port: 5984
},
error: Error: connect ECONNREFUSED 10.0.1.11:5984
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16) {
[stack]: 'Error: connect ECONNREFUSED 10.0.1.11:5984\n' +
' at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16)',
[message]: 'connect ECONNREFUSED 10.0.1.11:5984',
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '10.0.1.11',
port: 5984
},
}
RES 20d65ffb-3c6a-4290-af76-5bd28d9a5cf3 35.178.234.114 ac2 GET /medic/_changes?style=all_docs&heartbeat=10000&since=0&limit=100 HTTP/1.0 500 - 1.629 ms
```
Haproxy end of log:
```
<150>Apr 4 13:30:20 haproxy[25]: 10.0.1.3,couchdb-3.local,200,1743,6680,0,GET,/medic/org.couchdb.user%3Aac2?,api,medic,'-',495,1546,232,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
<150>Apr 4 13:30:21 haproxy[25]: 10.0.1.3,couchdb-2.local,200,2081,7009,0,GET,/_users/org.couchdb.user%3Aac2?,api,medic,'-',576,2080,313,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
[NOTICE] (1) : haproxy version is 2.6.12-f588462
[NOTICE] (1) : path to executable is /usr/local/sbin/haproxy
[ALERT] (1) : Current worker (25) exited with code 137 (Killed)
[ALERT] (1) : exit-on-failure: killing every processes with SIGTERM
```
Output of `docker ps`:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0b2e118f87cc public.ecr.aws/s5s3h4s7/cht-nginx:4.1.0-alpha "/docker-entrypoint.…" 5 hours ago Up About an hour 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp compose_nginx_1
2f166344d02a public.ecr.aws/s5s3h4s7/cht-api:4.1.0-alpha "/bin/bash /api/dock…" 5 hours ago Up About an hour 5988/tcp compose_api_1
bd47151e7926 public.ecr.aws/s5s3h4s7/cht-sentinel:4.1.0-alpha "/bin/bash /sentinel…" 5 hours ago Up About an hour compose_sentinel_1
27959cce073d public.ecr.aws/medic/cht-haproxy:4.1.0-alpha "/entrypoint.sh hapr…" 5 hours ago Up About an hour 5984/tcp compose_haproxy_1
53f9ec2c028a public.ecr.aws/s5s3h4s7/cht-haproxy-healthcheck:4.1.0-alpha "/bin/sh -c \"/app/ch…" 5 hours ago Up About an hour compose_healthcheck_1
```
**Environment**
- Instance: scalability test instance
- App: haproxy
- Version: alpha ~4.2.0
**Additional context**
Manually restarting the haproxy container fixes the issue.
I don't have any insight into what the underlying issue is; I'll keep running tests to see how frequently this occurs.
In the worst case scenario, we should find a way to signal to the container to restart because the process was terminated.
It looks like this is being caused by haproxy allocating resources for each opened connection; those resources do not get freed, eventually bloating haproxy's memory usage to 17 GB, which is more than the test instance has allocated in total.
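As a stopgap for the worst-case scenario described above, a small watchdog could detect the killed-worker condition from the log and trigger a container restart. This is a sketch only; the log path and the `docker restart` target are assumptions, not part of the original report:

```shell
#!/bin/sh
# Hypothetical watchdog sketch. The log file location and container name are
# assumptions; only the "[ALERT] ... exited with code 137 (Killed)" log line
# is taken from the report above.

worker_killed() {
  # true (exit 0) if the log contains a killed-worker alert
  grep -q 'exited with code 137' "$1"
}

restart_if_killed() {
  # $1: haproxy log file, $2: container name
  if worker_killed "$1"; then
    echo "restarting $2"   # a real watchdog would run: docker restart "$2"
  fi
}
```

In practice the same effect could be had with a `restart: on-failure` policy in the compose file, provided the master process exits when the worker is killed (which the `exit-on-failure` alert suggests it does).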
|
1.0
|
Haproxy crashes instance during scalability test due to high memory usage - **Describe the bug**
I started a scalability test, with 100 concurrent users, and haproxy crashed while the test was running. The behavior was that:
- api returns 500 errors for every request
- haproxy docker container was still running (and not restarting)
- last haproxy log reported all workers had been killed
**To Reproduce**
Steps to reproduce the behavior:
1. Start scalability test against current `master`
2. Watch as scalability test ends with errors
**Expected behavior**
Haproxy should not crash
**Logs**
Beginning of haproxy log - to check that we're indeed on the latest config:
```
Starting enhanced syslogd: rsyslogd.
global
maxconn 150000
spread-checks 5
lua-load-per-thread /usr/local/etc/haproxy/parse_basic.lua
lua-load-per-thread /usr/local/etc/haproxy/parse_cookie.lua
lua-load-per-thread /usr/local/etc/haproxy/replace_password.lua
log stdout len 65535 local2 debug
tune.bufsize 1638400
tune.http.maxhdr 1010
```
API Error:
```
REQ 20d65ffb-3c6a-4290-af76-5bd28d9a5cf3 35.178.234.114 ac2 GET /medic/_changes?style=all_docs&heartbeat=10000&since=0&limit=100 HTTP/1.0
2023-04-04 14:18:54 ERROR: Server error: RequestError: Error: connect ECONNREFUSED 10.0.1.11:5984
at new RequestError (/api/node_modules/request-promise-core/lib/errors.js:14:15)
at Request.plumbing.callback (/api/node_modules/request-promise-core/lib/plumbing.js:87:29)
at Request.RP$callback [as _callback] (/api/node_modules/request-promise-core/lib/plumbing.js:46:31)
at self.callback (/api/node_modules/request/request.js:185:22)
at Request.emit (node:events:513:28)
at Request.onRequestError (/api/node_modules/request/request.js:877:8)
at ClientRequest.emit (node:events:513:28)
at Socket.socketErrorListener (node:_http_client:494:9)
at Socket.emit (node:events:513:28)
at emitErrorNT (node:internal/streams/destroy:157:8) {
name: 'RequestError',
message: 'Error: connect ECONNREFUSED 10.0.1.11:5984',
cause: Error: connect ECONNREFUSED 10.0.1.11:5984
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16) {
[stack]: 'Error: connect ECONNREFUSED 10.0.1.11:5984\n' +
' at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16)',
[message]: 'connect ECONNREFUSED 10.0.1.11:5984',
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '10.0.1.11',
port: 5984
},
error: Error: connect ECONNREFUSED 10.0.1.11:5984
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16) {
[stack]: 'Error: connect ECONNREFUSED 10.0.1.11:5984\n' +
' at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16)',
[message]: 'connect ECONNREFUSED 10.0.1.11:5984',
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '10.0.1.11',
port: 5984
},
}
RES 20d65ffb-3c6a-4290-af76-5bd28d9a5cf3 35.178.234.114 ac2 GET /medic/_changes?style=all_docs&heartbeat=10000&since=0&limit=100 HTTP/1.0 500 - 1.629 ms
```
Haproxy end of log:
```
<150>Apr 4 13:30:20 haproxy[25]: 10.0.1.3,couchdb-3.local,200,1743,6680,0,GET,/medic/org.couchdb.user%3Aac2?,api,medic,'-',495,1546,232,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
<150>Apr 4 13:30:21 haproxy[25]: 10.0.1.3,couchdb-2.local,200,2081,7009,0,GET,/_users/org.couchdb.user%3Aac2?,api,medic,'-',576,2080,313,'node-fetch/1.0 (+https://github.com/bitinn/node-fetch)'
[NOTICE] (1) : haproxy version is 2.6.12-f588462
[NOTICE] (1) : path to executable is /usr/local/sbin/haproxy
[ALERT] (1) : Current worker (25) exited with code 137 (Killed)
[ALERT] (1) : exit-on-failure: killing every processes with SIGTERM
```
Output of `docker ps`:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0b2e118f87cc public.ecr.aws/s5s3h4s7/cht-nginx:4.1.0-alpha "/docker-entrypoint.…" 5 hours ago Up About an hour 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp compose_nginx_1
2f166344d02a public.ecr.aws/s5s3h4s7/cht-api:4.1.0-alpha "/bin/bash /api/dock…" 5 hours ago Up About an hour 5988/tcp compose_api_1
bd47151e7926 public.ecr.aws/s5s3h4s7/cht-sentinel:4.1.0-alpha "/bin/bash /sentinel…" 5 hours ago Up About an hour compose_sentinel_1
27959cce073d public.ecr.aws/medic/cht-haproxy:4.1.0-alpha "/entrypoint.sh hapr…" 5 hours ago Up About an hour 5984/tcp compose_haproxy_1
53f9ec2c028a public.ecr.aws/s5s3h4s7/cht-haproxy-healthcheck:4.1.0-alpha "/bin/sh -c \"/app/ch…" 5 hours ago Up About an hour compose_healthcheck_1
```
**Environment**
- Instance: scalability test instance
- App: haproxy
- Version: alpha ~4.2.0
**Additional context**
Manually restarting the haproxy container fixes the issue.
I don't have any insight into what the underlying issue is; I'll keep running tests to see how frequently this occurs.
In the worst case scenario, we should find a way to signal to the container to restart because the process was terminated.
It looks like this is being caused by haproxy allocating resources for each opened connection; those resources do not get freed, eventually bloating haproxy's memory usage to 17 GB, which is more than the test instance has allocated in total.
|
priority
|
haproxy crashes instance during scalability test due to high memory usage describe the bug i started a scalability test with concurrent users and haproxy crashed while the test was running the behavior was that api returns errors for every request haproxy docker container was still running and not restarting last haproxy log reported all workers had been killed to reproduce steps to reproduce the behavior start scalability test against current master watch as scalability test ends with errors expected behavior haproxy should not crash logs beginning of haproxy log to check that we re indeed on the latest config starting enhanced syslogd rsyslogd global maxconn spread checks lua load per thread usr local etc haproxy parse basic lua lua load per thread usr local etc haproxy parse cookie lua lua load per thread usr local etc haproxy replace password lua log stdout len debug tune bufsize tune http maxhdr api error req get medic changes style all docs heartbeat since limit http error server error requesterror error connect econnrefused at new requesterror api node modules request promise core lib errors js at request plumbing callback api node modules request promise core lib plumbing js at request rp callback api node modules request promise core lib plumbing js at self callback api node modules request request js at request emit node events at request onrequesterror api node modules request request js at clientrequest emit node events at socket socketerrorlistener node http client at socket emit node events at emiterrornt node internal streams destroy name requesterror message error connect econnrefused cause error connect econnrefused at tcpconnectwrap afterconnect node net error connect econnrefused n at tcpconnectwrap afterconnect node net connect econnrefused errno code econnrefused syscall connect address port error error connect econnrefused at tcpconnectwrap afterconnect node net error connect econnrefused n at tcpconnectwrap afterconnect node net connect 
econnrefused errno code econnrefused syscall connect address port res get medic changes style all docs heartbeat since limit http ms haproxy end of log apr haproxy couchdb local get medic org couchdb user api medic node fetch apr haproxy couchdb local get users org couchdb user api medic node fetch haproxy version is path to executable is usr local sbin haproxy current worker exited with code killed exit on failure killing every processes with sigterm output of docker ps container id image command created status ports names public ecr aws cht nginx alpha docker entrypoint … hours ago up about an hour tcp tcp tcp tcp compose nginx public ecr aws cht api alpha bin bash api dock… hours ago up about an hour tcp compose api public ecr aws cht sentinel alpha bin bash sentinel… hours ago up about an hour compose sentinel public ecr aws medic cht haproxy alpha entrypoint sh hapr… hours ago up about an hour tcp compose haproxy public ecr aws cht haproxy healthcheck alpha bin sh c app ch… hours ago up about an hour compose healthcheck environment instance scalability test instance app haproxy version alpha additional context manually restarting the haproxy container fixes the issue i don t have any insight into what the underlying issue is i ll keep running tests to see how frequently this occurs in the worst case scenario we should find a way to signal to the container to restart because the process was terminated it looks like this is being caused by haproxy allocating resources for each opened connection and those resources do not get freed ending up bloating haproxy memory usage to which is more than the test instance has allocated in total
| 1
|
530,073
| 15,415,271,710
|
IssuesEvent
|
2021-03-05 02:12:58
|
domialex/Sidekick
|
https://api.github.com/repos/domialex/Sidekick
|
closed
|
[Items] 'X Added Skill is a Jewel Socket' is not shown upon price checking
|
Priority: High Status: Available Type: Bug
|
Sample item. The affix is rather important
```
Rarity: Rare
Oblivion Heart
Medium Cluster Jewel
--------
Item Level: 70
--------
Adds 5 Passive Skills (enchant)
Added Small Passive Skills grant: +4% to Chaos Damage over Time Multiplier (enchant)
--------
1 Added Passive Skill is Brewed for Potency
1 Added Passive Skill is Circling Oblivion
1 Added Passive Skill is a Jewel Socket
--------
Place into an allocated Medium or Large Jewel Socket on the Passive Skill Tree. Added passives do not interact with jewel radiuses. Right click to remove from the Socket.
--------
```

|
1.0
|
[Items] 'X Added Skill is a Jewel Socket' is not shown upon price checking - Sample item. The affix is rather important
```
Rarity: Rare
Oblivion Heart
Medium Cluster Jewel
--------
Item Level: 70
--------
Adds 5 Passive Skills (enchant)
Added Small Passive Skills grant: +4% to Chaos Damage over Time Multiplier (enchant)
--------
1 Added Passive Skill is Brewed for Potency
1 Added Passive Skill is Circling Oblivion
1 Added Passive Skill is a Jewel Socket
--------
Place into an allocated Medium or Large Jewel Socket on the Passive Skill Tree. Added passives do not interact with jewel radiuses. Right click to remove from the Socket.
--------
```

|
priority
|
x added skill is a jewel socket is not shown upon price checking sample item the affix is rather important rarity rare oblivion heart medium cluster jewel item level adds passive skills enchant added small passive skills grant to chaos damage over time multiplier enchant added passive skill is brewed for potency added passive skill is circling oblivion added passive skill is a jewel socket place into an allocated medium or large jewel socket on the passive skill tree added passives do not interact with jewel radiuses right click to remove from the socket
| 1
|
314,664
| 9,601,194,842
|
IssuesEvent
|
2019-05-10 11:27:33
|
FundacionParaguaya/MentorApp
|
https://api.github.com/repos/FundacionParaguaya/MentorApp
|
closed
|
Unit Tests - Review and Improve
|
high priority
|
This includes better branch and line coverage of unit tests. There are components without any tests.
|
1.0
|
Unit Tests - Review and Improve - This includes better branch and line coverage of unit tests. There are components without any tests.
|
priority
|
unit tests review and improve this includes better branch and line coverage of unit tests there are components without any tests
| 1
|
220,985
| 7,372,666,360
|
IssuesEvent
|
2018-03-13 15:17:51
|
VanilleBid/weekly-saleor
|
https://api.github.com/repos/VanilleBid/weekly-saleor
|
opened
|
Be able to write and add notes to customer
|
Dashboard Highest Priority
|
Staff members must be able to write and add notes about a customer from the customer profile.
|
1.0
|
Be able to write and add notes to customer - Staff members must be able to write and add notes about a customer from the customer profile.
|
priority
|
be able to write and add notes to customer staff members must be able to write and add notes about a customer from the customer profile
| 1
|
830,753
| 32,022,685,404
|
IssuesEvent
|
2023-09-22 06:27:17
|
McBaws/comp
|
https://api.github.com/repos/McBaws/comp
|
opened
|
Optimize Dedupe
|
bug high priority
|
Optimize deduping of frames, randoms in particular; it's super slow right now. The script is probably using a stupid algorithm for deduping the frames, or there's a really high "distance between frames" requirement for randoms.
|
1.0
|
Optimize Dedupe - Optimize deduping of frames, randoms in particular; it's super slow right now. The script is probably using a stupid algorithm for deduping the frames, or there's a really high "distance between frames" requirement for randoms.
|
priority
|
optimize dedupe optimize deduping of frames but randoms in particular its super slow rn script s probably using a stupid algorithm for deduping the frames or there s a really high distance between frames requirement for randoms
| 1
|
268,046
| 8,402,580,323
|
IssuesEvent
|
2018-10-11 07:12:29
|
Icinga/icinga2
|
https://api.github.com/repos/Icinga/icinga2
|
closed
|
Not all Endpoints can't reconnect due to "Client TLS handshake failed" error after "reload or restart"
|
Cluster bug high-priority needs-feedback
|
Not all Endpoints can't reconnect due to "Client TLS handshake failed" error after "reload or restart"
After upgrading to Icinga-2.9.0 and 2.9.1 we ran into a huge problem with reconnecting to our endpoints.
Do a "systemctl restart icinga2", a "systemctl reload icinga2" or a reload in Icingaweb2 and Director-1.4.3.
The following is happening:
```
[2018-08-02 15:29:29 +0200] information/ExternalCommandListener: 'command' stopped.
[2018-08-02 15:31:10 +0200] information/WorkQueue: #10 (DaemonCommand::Run) items: 0, rate: 0/s (0/min 0/5min 0/15min);
[2018-08-02 15:31:12 +0200] information/ApiListener: 'api' started.
[2018-08-02 15:31:12 +0200] information/ApiListener: Copying 1 zone configuration files for zone 'mon-icinga2-02.nst.example.org' to '/var/lib/icinga2/api/zones/mon-icinga2-02.nst.example.org'.
[2018-08-02 15:31:12 +0200] information/ApiListener: Applying configuration file update for path '/var/lib/icinga2/api/zones/mon-icinga2-02.nst.example.org' (0 Bytes). Received timestamp '2018-08-02 15:31:12 +0200' (1533216672.088238), Current timestamp '2018-08-02 13:32:44 +0200' (1533209564.472608).
[2018-08-02 15:31:12 +0200] information/ApiListener: Copying 1 zone configuration files for zone 'mon-icinga2-03.nst.example.org' to '/var/lib/icinga2/api/zones/mon-icinga2-03.nst.example.org'.
[2018-08-02 15:31:12 +0200] information/ApiListener: Applying configuration file update for path '/var/lib/icinga2/api/zones/mon-icinga2-03.nst.example.org' (0 Bytes). Received timestamp '2018-08-02 15:31:12 +0200' (1533216672.097991), Current timestamp '2018-08-02 13:32:44 +0200' (1533209564.484660).
[2018-08-02 15:31:12 +0200] information/ApiListener: Copying 10 zone configuration files for zone 'mon-icinga2-01.nst.example.org' to '/var/lib/icinga2/api/zones/mon-icinga-01.nst.example.org'.
[2018-08-02 15:31:12 +0200] information/ApiListener: Applying configuration file update for path '/var/lib/icinga2/api/zones/mon-icinga-01.example.org' (0 Bytes). Received timestamp '2018-08-02 15:31:12 +0200' (1533216672.178659), Current timestamp '2018-08-02 13:32:44 +0200' (1533209564.565199).
[2018-08-02 15:31:12 +0200] information/ApiListener: Copying 12 zone configuration files for zone 'director-global' to '/var/lib/icinga2/api/zones/director-global'.
[2018-08-02 15:31:12 +0200] information/ApiListener: Applying configuration file update for path '/var/lib/icinga2/api/zones/director-global' (0 Bytes). Received timestamp '2018-08-02 15:31:12 +0200' (1533216672.184592), Current timestamp '2018-08-02 13:32:44 +0200' (1533209564.571902).
[2018-08-02 15:31:12 +0200] information/ApiListener: Adding new listener on port '5665'
[2018-08-02 15:31:12 +0200] information/ApiListener: Reconnecting to endpoint 'system1.example.org' via host '10.0.1.104' and port '5665'
[2018-08-02 15:31:12 +0200] information/ApiListener: Reconnecting to endpoint 'system2.example.org' via host '10.0.2.104' and port '5665'
[2018-08-02 15:31:12 +0200] information/ApiListener: Reconnecting to endpoint 'system3.example.org' via host '10.0.3.104' and port '5665'
...
[2018-08-02 15:31:14 +0200] information/ApiListener: Reconnecting to endpoint 'system1134.example.org' via host '10.5.1.101' and port '5665'
...
```
Between 15:31:12 and 15:31:14 a reconnect is triggered (and logged) for all 6897 endpoints.
For a few endpoints this is successful: the connection is established and config files are synced and updated.
```
[2018-08-02 15:31:22 +0200] information/ApiListener: Finished sending config file updates for endpoint 'system5.example.org' in zone 'system5.example.org'.
[2018-08-02 15:31:22 +0200] information/ApiListener: Syncing runtime objects to endpoint 'system5.example.org'.
```
Suddenly the client TLS handshake failures appear:
```
[2018-08-02 15:31:22 +0200] critical/ApiListener: Client TLS handshake failed (to [10.3.2.103]:5665)
Context:
(0) Handling new API client connection
[2018-08-02 15:31:22 +0200] critical/ApiListener: Client TLS handshake failed (to [10.1.27.154]:5665)
Context:
(0) Handling new API client connection
[2018-08-02 15:31:22 +0200] critical/ApiListener: Client TLS handshake failed (to [10.245.10.103]:5665)
Context:
(0) Handling new API client connection
...
[2018-08-02 15:31:22 +0200] critical/ApiListener: Client TLS handshake failed (to [10.127.11.99]:5665)
Context:
(0) Handling new API client connection
```
On client side (10.127.11.99) the following will occur in the logfile
```
...
[2018-08-02 15:31:23 +0200] critical/ApiListener: Client TLS handshake failed (from [10.195.1.10]:33143)
Context:
(0) Handling new API client connection
[2018-08-02 15:31:31 +0200] information/JsonRpcConnection: No messages for identity 'mon-icinga2-01.nst.example.org' have been received in the last 60 seconds.
[2018-08-02 15:31:31 +0200] warning/JsonRpcConnection: API client disconnected for identity 'mon-icinga2-01.nst.example.org'
[2018-08-02 15:31:31 +0200] warning/JsonRpcConnection: API client disconnected for identity 'mon-icinga2-01.nst.example.org'
[2018-08-02 15:31:31 +0200] warning/ApiListener: Removing API client for endpoint 'mon-icinga2-01.nst.example.org'. 0 API clients left.
...
[2018-08-02 15:32:24 +0200] critical/ApiListener: Client TLS handshake failed (from [10.195.1.10]:39162)
Context:
(0) Handling new API client connection
[2018-08-02 15:32:44 +0200] information/ConfigObject: Dumping program state to file '/var/lib/icinga2/icinga2.state'
[2018-08-02 15:33:25 +0200] critical/ApiListener: Client TLS handshake failed (from [10.195.1.10]:44709)
Context:
(0) Handling new API client connection
[2018-08-02 15:34:25 +0200] information/ApiListener: New client connection for identity 'mon-icinga2-01.nst.example.org' from [10.195.1.10]:51952
[2018-08-02 15:34:30 +0200] warning/TlsStream: TLS stream was disconnected.
[2018-08-02 15:34:30 +0200] warning/ApiListener: No data received on new API connection for identity 'mon-icinga2-01.nst.example.org'. Ensure that the remote endpoints are properly configured in a cluster setup.
Context:
(0) Handling new API client connection
[2018-08-02 15:35:13 +0200] information/WorkQueue: #6 (ApiListener, SyncQueue) items: 0, rate: 0/s (0/min 0/5min 0/15min);
```
Also the API becomes unresponsive now and you can't query anything. System and database load isn't higher than in 2.8.4 at this time.
A few minutes later, some systems succeed in reconnecting and syncing, others do not.
```
[2018-08-02 15:45:39 +0200] information/ApiListener: Finished reconnecting to endpoint 'system479.example.org' via host '10.52.217.105' and port '5665'
[2018-08-02 15:45:39 +0200] information/ApiListener: Finished reconnecting to endpoint 'system325.example.org' via host '10.71.10.158' and port '5665'
[2018-08-02 15:45:39 +0200] critical/ApiListener: Client TLS handshake failed (to [10.5.169.2]:5665)
Context:
(0) Handling new API client connection
[2018-08-02 15:45:39 +0200] critical/ApiListener: Client TLS handshake failed (to [10.11.1.3]:5665)
Context:
(0) Handling new API client connection
[2018-08-02 15:45:39 +0200] critical/ApiListener: Client TLS handshake failed (to [10.8.64.43]:5665)
Context:
(0) Handling new API client connection
[2018-08-02 15:45:39 +0200] information/ApiListener: Finished reconnecting to endpoint 'system123.example.org' via host '10.25.78.109' and port '5665'
[2018-08-02 15:45:39 +0200] information/ApiListener: Finished reconnecting to endpoint 'system5.example.org' via host '10.61.80.159' and port '5665'
[2018-08-02 15:45:39 +0200] information/ApiListener: Finished reconnecting to endpoint 'system12.example.org' via host '10.25.149.110' and port '5665'
```
Another few minutes later, about 1000 systems could do a successful reconnect without any TLS errors, but then TLS errors rise again in the logfile. Within half an hour only ~ 3000 endpoints have reconnected; all others have to deal with the strange TLS handshake failures
```
[2018-08-02 15:31:23 +0200] critical/ApiListener: Client TLS handshake failed (from [10.195.1.10]:33143)
Context:
(0) Handling new API client connection
[2018-08-02 15:31:31 +0200] information/JsonRpcConnection: No messages for identity 'mon-icinga-01.nst.example.org' have been received in the last 60 seconds.
```
With 2.9.0 and 2.9.1 it isn't possible to reconnect all endpoints in a time effective manner.
A Rollback to 2.8.4 fixes this problem immediately. After approximately 5 minutes all endpoints are reconnected.
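To compare reconnect behaviour between 2.8.4 and 2.9.x objectively, the reconnect progress can be tallied from the icinga2 log. A sketch, assuming the log message format quoted above (the log path would be site-specific):

```shell
# Sketch: count how many endpoints have finished reconnecting after a
# reload, based on the 'Finished reconnecting to endpoint' log lines
# quoted in this report.
count_reconnected() {
  # $1: icinga2 log file
  grep -c "Finished reconnecting to endpoint" "$1"
}
```

Sampling this count once a minute after a reload would show whether the curve plateaus around ~3000 endpoints on 2.9.x versus reaching all ~6897 within five minutes on 2.8.4.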
## Expected Behavior
Reconnection in 2.9.0 and 2.9.1 should work as in 2.8.x and before.
## Current Behavior
It's not possible to reconnect ALL endpoints. Only a few (~ 1500 to 2000) could do a successful reconnect. All others have to deal with TLS errors and possible timeouts.
Additionally, a restart or reload leaves a process in <defunct> state
```
root 22070 823 0 15:30 ? 00:00:00 [icinga2] <defunct>
root 22257 823 0 12:02 ? 00:00:00 [icinga2] <defunct>
```
```
[2018-08-02 15:31:31 +0200] information/JsonRpcConnection: No messages for identity 'mon-icinga-01.nst.example.org' have been received in the last 60 seconds.
```
## Possible Solution
Rollback to 2.8.4 at the moment ...
## Steps to Reproduce (for bugs)
Hard to reproduce, because you need at least ~ 7000 endpoints and ~ 140.000 services connected.
## Context
## Your Environment
* Version used (`icinga2 --version`):
Rollback done to 2.8.4-1
```
icinga2 - The Icinga 2 network monitoring daemon (version: r2.8.4-1)
Copyright (c) 2012-2017 Icinga Development Team (https://www.icinga.com/)
License GPLv2+: GNU GPL version 2 or later <http://gnu.org/licenses/gpl2.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Application information:
Installation root: /usr
Sysconf directory: /etc
Run directory: /run
Local state directory: /var
Package data directory: /usr/share/icinga2
State path: /var/lib/icinga2/icinga2.state
Modified attributes path: /var/lib/icinga2/modified-attributes.conf
Objects path: /var/cache/icinga2/icinga2.debug
Vars path: /var/cache/icinga2/icinga2.vars
PID path: /run/icinga2/icinga2.pid
System information:
Platform: Debian GNU/Linux
Platform version: 8 (jessie)
Kernel: Linux
Kernel version: 3.16.0-6-amd64
Architecture: x86_64
Build information:
Compiler: GNU 4.9.2
Build host: 289f70f9cece
```
* Operating System and version:
* Debian 8, x86_64
* Enabled features (`icinga2 feature list`):
* Disabled features: compatlog debuglog elasticsearch gelf graphite livestatus opentsdb perfdata statusdata syslog
* Enabled features: api checker command ido-mysql influxdb mainlog notification
* Icinga Web 2 version and modules (System - About):
* Config validation (`icinga2 daemon -C`):
* If you run multiple Icinga 2 instances, the `zones.conf` file (or `icinga2 object list --type Endpoint` and `icinga2 object list --type Zone`) from all affected nodes.
* icinga2 object list --type Endpoint | grep Object | wc -l
6898
* icinga2 object list --type Zone | grep Object | wc -l
6899
|
1.0
|
Not all Endpoints can't reconnect due to "Client TLS handshake failed" error after "reload or restart" - Not all Endpoints can't reconnect due to "Client TLS handshake failed" error after "reload or restart"
After upgrading to Icinga-2.9.0 and 2.9.1 we ran into a huge problem with reconnecting to our endpoints.
Do a "systemctl restart icinga2", a "systemctl reload icinga2" or a reload in Icingaweb2 and Director-1.4.3.
The following is happening:
```
[2018-08-02 15:29:29 +0200] information/ExternalCommandListener: 'command' stopped.
[2018-08-02 15:31:10 +0200] information/WorkQueue: #10 (DaemonCommand::Run) items: 0, rate: 0/s (0/min 0/5min 0/15min);
[2018-08-02 15:31:12 +0200] information/ApiListener: 'api' started.
[2018-08-02 15:31:12 +0200] information/ApiListener: Copying 1 zone configuration files for zone 'mon-icinga2-02.nst.example.org' to '/var/lib/icinga2/api/zones/mon-icinga2-02.nst.example.org'.
[2018-08-02 15:31:12 +0200] information/ApiListener: Applying configuration file update for path '/var/lib/icinga2/api/zones/mon-icinga2-02.nst.example.org' (0 Bytes). Received timestamp '2018-08-02 15:31:12 +0200' (1533216672.088238), Current timestamp '2018-08-02 13:32:44 +0200' (1533209564.472608).
[2018-08-02 15:31:12 +0200] information/ApiListener: Copying 1 zone configuration files for zone 'mon-icinga2-03.nst.example.org' to '/var/lib/icinga2/api/zones/mon-icinga2-03.nst.example.org'.
[2018-08-02 15:31:12 +0200] information/ApiListener: Applying configuration file update for path '/var/lib/icinga2/api/zones/mon-icinga2-03.nst.example.org' (0 Bytes). Received timestamp '2018-08-02 15:31:12 +0200' (1533216672.097991), Current timestamp '2018-08-02 13:32:44 +0200' (1533209564.484660).
[2018-08-02 15:31:12 +0200] information/ApiListener: Copying 10 zone configuration files for zone 'mon-icinga2-01.nst.example.org' to '/var/lib/icinga2/api/zones/mon-icinga-01.nst.example.org'.
[2018-08-02 15:31:12 +0200] information/ApiListener: Applying configuration file update for path '/var/lib/icinga2/api/zones/mon-icinga-01.example.org' (0 Bytes). Received timestamp '2018-08-02 15:31:12 +0200' (1533216672.178659), Current timestamp '2018-08-02 13:32:44 +0200' (1533209564.565199).
[2018-08-02 15:31:12 +0200] information/ApiListener: Copying 12 zone configuration files for zone 'director-global' to '/var/lib/icinga2/api/zones/director-global'.
[2018-08-02 15:31:12 +0200] information/ApiListener: Applying configuration file update for path '/var/lib/icinga2/api/zones/director-global' (0 Bytes). Received timestamp '2018-08-02 15:31:12 +0200' (1533216672.184592), Current timestamp '2018-08-02 13:32:44 +0200' (1533209564.571902).
[2018-08-02 15:31:12 +0200] information/ApiListener: Adding new listener on port '5665'
[2018-08-02 15:31:12 +0200] information/ApiListener: Reconnecting to endpoint 'system1.example.org' via host '10.0.1.104' and port '5665'
[2018-08-02 15:31:12 +0200] information/ApiListener: Reconnecting to endpoint 'system2.example.org' via host '10.0.2.104' and port '5665'
[2018-08-02 15:31:12 +0200] information/ApiListener: Reconnecting to endpoint 'system3.example.org' via host '10.0.3.104' and port '5665'
...
[2018-08-02 15:31:14 +0200] information/ApiListener: Reconnecting to endpoint 'system1134.example.org' via host '10.5.1.101' and port '5665'
...
```
Between 15:31:12 and 15:31:14 a reconnect is triggered (and logged) for all 6897 endpoints.
For a few endpoints this is successful: the connection is established and config files are synced and updated.
```
[2018-08-02 15:31:22 +0200] information/ApiListener: Finished sending config file updates for endpoint 'system5.example.org' in zone 'system5.example.org'.
[2018-08-02 15:31:22 +0200] information/ApiListener: Syncing runtime objects to endpoint 'system5.example.org'.
```
Suddenly the client TLS handshake failures appear:
```
[2018-08-02 15:31:22 +0200] critical/ApiListener: Client TLS handshake failed (to [10.3.2.103]:5665)
Context:
(0) Handling new API client connection
[2018-08-02 15:31:22 +0200] critical/ApiListener: Client TLS handshake failed (to [10.1.27.154]:5665)
Context:
(0) Handling new API client connection
[2018-08-02 15:31:22 +0200] critical/ApiListener: Client TLS handshake failed (to [10.245.10.103]:5665)
Context:
(0) Handling new API client connection
...
[2018-08-02 15:31:22 +0200] critical/ApiListener: Client TLS handshake failed (to [10.127.11.99]:5665)
Context:
(0) Handling new API client connection
```
On the client side (10.127.11.99) the following appears in the logfile:
```
...
[2018-08-02 15:31:23 +0200] critical/ApiListener: Client TLS handshake failed (from [10.195.1.10]:33143)
Context:
(0) Handling new API client connection
[2018-08-02 15:31:31 +0200] information/JsonRpcConnection: No messages for identity 'mon-icinga2-01.nst.example.org' have been received in the last 60 seconds.
[2018-08-02 15:31:31 +0200] warning/JsonRpcConnection: API client disconnected for identity 'mon-icinga2-01.nst.example.org'
[2018-08-02 15:31:31 +0200] warning/JsonRpcConnection: API client disconnected for identity 'mon-icinga2-01.nst.example.org'
[2018-08-02 15:31:31 +0200] warning/ApiListener: Removing API client for endpoint 'mon-icinga2-01.nst.example.org'. 0 API clients left.
...
[2018-08-02 15:32:24 +0200] critical/ApiListener: Client TLS handshake failed (from [10.195.1.10]:39162)
Context:
(0) Handling new API client connection
[2018-08-02 15:32:44 +0200] information/ConfigObject: Dumping program state to file '/var/lib/icinga2/icinga2.state'
[2018-08-02 15:33:25 +0200] critical/ApiListener: Client TLS handshake failed (from [10.195.1.10]:44709)
Context:
(0) Handling new API client connection
[2018-08-02 15:34:25 +0200] information/ApiListener: New client connection for identity 'mon-icinga2-01.nst.example.org' from [10.195.1.10]:51952
[2018-08-02 15:34:30 +0200] warning/TlsStream: TLS stream was disconnected.
[2018-08-02 15:34:30 +0200] warning/ApiListener: No data received on new API connection for identity 'mon-icinga2-01.nst.example.org'. Ensure that the remote endpoints are properly configured in a cluster setup.
Context:
(0) Handling new API client connection
[2018-08-02 15:35:13 +0200] information/WorkQueue: #6 (ApiListener, SyncQueue) items: 0, rate: 0/s (0/min 0/5min 0/15min);
```
Also the API becomes unresponsive now and you can't query anything. System and database load isn't higher than on 2.8.4 at this time.
A few minutes later, some systems succeed in reconnecting and syncing, others do not.
```
[2018-08-02 15:45:39 +0200] information/ApiListener: Finished reconnecting to endpoint 'system479.example.org' via host '10.52.217.105' and port '5665'
[2018-08-02 15:45:39 +0200] information/ApiListener: Finished reconnecting to endpoint 'system325.example.org' via host '10.71.10.158' and port '5665'
[2018-08-02 15:45:39 +0200] critical/ApiListener: Client TLS handshake failed (to [10.5.169.2]:5665)
Context:
(0) Handling new API client connection
[2018-08-02 15:45:39 +0200] critical/ApiListener: Client TLS handshake failed (to [10.11.1.3]:5665)
Context:
(0) Handling new API client connection
[2018-08-02 15:45:39 +0200] critical/ApiListener: Client TLS handshake failed (to [10.8.64.43]:5665)
Context:
(0) Handling new API client connection
[2018-08-02 15:45:39 +0200] information/ApiListener: Finished reconnecting to endpoint 'system123.example.org' via host '10.25.78.109' and port '5665'
[2018-08-02 15:45:39 +0200] information/ApiListener: Finished reconnecting to endpoint 'system5.example.org' via host '10.61.80.159' and port '5665'
[2018-08-02 15:45:39 +0200] information/ApiListener: Finished reconnecting to endpoint 'system12.example.org' via host '10.25.149.110' and port '5665'
```
Another few minutes later, about 1000 systems could do a successful reconnect without any TLS errors, but then the TLS errors in the logfile rise again. After half an hour only ~3000 endpoints have reconnected; all others have to deal with the strange TLS handshake failures
```
[2018-08-02 15:31:23 +0200] critical/ApiListener: Client TLS handshake failed (from [10.195.1.10]:33143)
Context:
(0) Handling new API client connection
[2018-08-02 15:31:31 +0200] information/JsonRpcConnection: No messages for identity 'mon-icinga-01.nst.example.org' have been received in the last 60 seconds.
```
With 2.9.0 and 2.9.1 it isn't possible to reconnect all endpoints in a time-effective manner.
A rollback to 2.8.4 fixes this problem immediately: after approximately 5 minutes all endpoints are reconnected.
## Expected Behavior
Reconnection in 2.9.0 and 2.9.1 should work as in 2.8.x and before.
## Current Behavior
It's not possible to reconnect ALL endpoints. Only a few (~1500 to 2000) manage a successful reconnect. All others have to deal with TLS errors and possible timeouts.
Additionally a restart or reload creates a process in <defunct> state
```
root 22070 823 0 15:30 ? 00:00:00 [icinga2] <defunct>
root 22257 823 0 12:02 ? 00:00:00 [icinga2] <defunct>
```
```
[2018-08-02 15:31:31 +0200] information/JsonRpcConnection: No messages for identity 'mon-icinga-01.nst.example.org' have been received in the last 60 seconds.
```
## Possible Solution
Rollback to 2.8.4 at the moment ...
## Steps to Reproduce (for bugs)
Hard to reproduce, because you need at least ~7000 endpoints and ~140,000 connected services.
## Context
## Your Environment
* Version used (`icinga2 --version`):
Rollback done to 2.8.4-1
```
icinga2 - The Icinga 2 network monitoring daemon (version: r2.8.4-1)
Copyright (c) 2012-2017 Icinga Development Team (https://www.icinga.com/)
License GPLv2+: GNU GPL version 2 or later <http://gnu.org/licenses/gpl2.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Application information:
Installation root: /usr
Sysconf directory: /etc
Run directory: /run
Local state directory: /var
Package data directory: /usr/share/icinga2
State path: /var/lib/icinga2/icinga2.state
Modified attributes path: /var/lib/icinga2/modified-attributes.conf
Objects path: /var/cache/icinga2/icinga2.debug
Vars path: /var/cache/icinga2/icinga2.vars
PID path: /run/icinga2/icinga2.pid
System information:
Platform: Debian GNU/Linux
Platform version: 8 (jessie)
Kernel: Linux
Kernel version: 3.16.0-6-amd64
Architecture: x86_64
Build information:
Compiler: GNU 4.9.2
Build host: 289f70f9cece
```
* Operating System and version:
* Debian 8, x86_64
* Enabled features (`icinga2 feature list`):
* Disabled features: compatlog debuglog elasticsearch gelf graphite livestatus opentsdb perfdata statusdata syslog
* Enabled features: api checker command ido-mysql influxdb mainlog notification
* Icinga Web 2 version and modules (System - About):
* Config validation (`icinga2 daemon -C`):
* If you run multiple Icinga 2 instances, the `zones.conf` file (or `icinga2 object list --type Endpoint` and `icinga2 object list --type Zone`) from all affected nodes.
* icinga2 object list --type Zone | grep Object | wc -l
6898
* icinga2 object list --type Zone | grep Object | wc -l
6899
|
priority
|
Not all endpoints can reconnect due to "Client TLS handshake failed" error after reload or restart
| 1
|
675,225
| 23,085,114,155
|
IssuesEvent
|
2022-07-26 10:40:20
|
ElixirTeSS/TeSS
|
https://api.github.com/repos/ElixirTeSS/TeSS
|
closed
|
ELIXIR AAI Deprecation
|
bug priority-high
|
ELIXIR AAI is going to be deprecated.
There is going to be a call with more specific details but for now we have 12 months or about until the end of the year.
| 1
|
79,413
| 3,535,539,533
|
IssuesEvent
|
2016-01-16 16:01:34
|
pupil-labs/pupil
|
https://api.github.com/repos/pupil-labs/pupil
|
closed
|
3D Eye Model: model lifecycle
|
dev task priority:high
|
create new model if avg performance slope is negative by (threshold) or performance below THRESHOLD.
replace model if performance is better and maturity is above threshold
if active model performance is back to above good_threshold:
kill other models
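The lifecycle rules above can be sketched as three predicates. This is a hypothetical illustration only: the function names and all threshold values below are made up and do not come from the pupil codebase.

```python
# Hypothetical sketch of the 3D eye model lifecycle described above.
# All thresholds are illustrative placeholders, not pupil's real values.
PERF_THRESHOLD = 0.5       # below this, spawn a new candidate model
SLOPE_THRESHOLD = -0.05    # average performance slope considered "negative"
MATURITY_THRESHOLD = 100   # observations before a candidate may replace the active model
GOOD_THRESHOLD = 0.8       # active model counts as "good" again above this

def should_create_model(avg_slope, performance):
    """Create a new candidate when performance trends down or drops too low."""
    return avg_slope < SLOPE_THRESHOLD or performance < PERF_THRESHOLD

def should_replace(active_perf, candidate_perf, candidate_maturity):
    """Replace the active model when a mature candidate outperforms it."""
    return candidate_perf > active_perf and candidate_maturity > MATURITY_THRESHOLD

def should_kill_others(active_perf):
    """Once the active model is good again, discard the other candidates."""
    return active_perf > GOOD_THRESHOLD
```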
| 1
|
254,257
| 8,071,925,967
|
IssuesEvent
|
2018-08-06 14:33:57
|
gluster/glusterd2
|
https://api.github.com/repos/gluster/glusterd2
|
closed
|
not able to start glusterd2
|
FW: Events GCS-Blocker Gluster 4.2 bug priority: high
|
Getting a data race warning while starting glusterd2
Logs:
/var/run/gluster/glusterd2.socket server=sunrpc source="[server.go:137:sunrpc.(*SunRPC).acceptLoop]" transport=unix
INFO[2018-05-02 14:37:29.220650] Started GlusterD ReST server ip:port="192.168.122.57:24007" source="[rest.go:91:rest.(*GDRest).Serve]"
INFO[2018-05-02 14:37:29.220835] started server address="192.168.122.57:24007" server=sunrpc source="[server.go:137:sunrpc.(*SunRPC).acceptLoop]" transport=tcp
WARNING: DATA RACE
Write at 0x00c4207a0510 by goroutine 135:
github.com/gluster/glusterd2/glusterd2/events.handleEvents()
/home/vagrant/src/github.com/gluster/glusterd2/glusterd2/events/eventhandler.go:98 +0x112
github.com/gluster/glusterd2/glusterd2/events.Register.func1()
/home/vagrant/src/github.com/gluster/glusterd2/glusterd2/events/eventhandler.go:77 +0x4c
Previous read at 0x00c4207a0510 by goroutine 50:
github.com/gluster/glusterd2/glusterd2/events.handleEvents.func1()
/home/vagrant/src/github.com/gluster/glusterd2/glusterd2/events/eventhandler.go:102 +0x38
Goroutine 135 (running) created at:
github.com/gluster/glusterd2/glusterd2/events.Register()
/home/vagrant/src/github.com/gluster/glusterd2/glusterd2/events/eventhandler.go:76 +0xb3
github.com/gluster/glusterd2/glusterd2/events.StartGlobal()
/home/vagrant/src/github.com/gluster/glusterd2/glusterd2/events/global.go:100 +0x67
github.com/gluster/glusterd2/glusterd2/events.Start()
/home/vagrant/src/github.com/gluster/glusterd2/glusterd2/events/events.go:41 +0x2f
main.main()
/home/vagrant/src/github.com/gluster/glusterd2/glusterd2/main.go:105 +0x65b
Goroutine 50 (finished) created at:
github.com/gluster/glusterd2/glusterd2/events.handleEvents()
/home/vagrant/src/github.com/gluster/glusterd2/glusterd2/events/eventhandler.go:101 +0x1d5
github.com/gluster/glusterd2/glusterd2/events.Register.func1()
/home/vagrant/src/github.com/gluster/glusterd2/glusterd2/events/eventhandler.go:77 +0x4c
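The trace above reports an unsynchronized write (eventhandler.go:98) racing with a read from a second goroutine (eventhandler.go:102) on the same shared variable. The usual fix pattern is to guard the shared state with a lock; here is a generic sketch of that pattern, written in Python with hypothetical names for illustration only (the actual glusterd2 fix may look different):

```python
import threading

class EventHandler:
    """Toy stand-in for handler state shared between two concurrent workers."""

    def __init__(self):
        self._lock = threading.Lock()
        self._running = False

    def start(self):
        with self._lock:       # the write is now serialized...
            self._running = True

    def is_running(self):
        with self._lock:       # ...and the read takes the same lock
            return self._running

# One worker writes the flag while another may read it; the lock
# prevents the unsynchronized write/read the race detector flagged.
handler = EventHandler()
worker = threading.Thread(target=handler.start)
worker.start()
worker.join()
```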
| 1
|
121,424
| 4,816,171,657
|
IssuesEvent
|
2016-11-04 09:07:12
|
highcharts/highcharts-editor
|
https://api.github.com/repos/highcharts/highcharts-editor
|
closed
|
resetting a series color shows the default array
|
bug high priority
|
resetting a series color shows the default array of colors instead of one single color:

| 1
|
327,369
| 9,974,721,980
|
IssuesEvent
|
2019-07-09 11:21:11
|
OpenSRP/opensrp-client-reveal
|
https://api.github.com/repos/OpenSRP/opensrp-client-reveal
|
closed
|
Update Family Name Checkbox
|
Priority: High bug
|
Family member registration module >Add Family Member Registration form - We need to update the checkbox to get the surname instead of the first name, reverting it back to the feature that was originally there.
| 1
|
512,235
| 14,890,951,294
|
IssuesEvent
|
2021-01-21 00:01:12
|
danistark1/weatherStationApiSymfony
|
https://api.github.com/repos/danistark1/weatherStationApiSymfony
|
closed
|
Logging: add error logging
|
8 pts Priority: High enhancement
|
- Create a logging table to log application errors.
- Use Monolog logging
- Update all methods to use the new logger
| 1
|
143,975
| 5,533,564,644
|
IssuesEvent
|
2017-03-21 13:39:48
|
CS2103JAN2017-T09-B4/main
|
https://api.github.com/repos/CS2103JAN2017-T09-B4/main
|
closed
|
Activate the application quickly using a keyboard shortcut
|
priority.high type.story
|
So that I can access my task manager conveniently and efficiently
| 1
|
808,136
| 30,034,880,675
|
IssuesEvent
|
2023-06-27 12:07:47
|
asastats/channel
|
https://api.github.com/repos/asastats/channel
|
closed
|
Add Cometa's farming(THC/COOP>PEPE+ALGO) and staking(TENDIES) programs
|
feature high priority addressed
|
By Milesmile in [Discord](https://discord.com/channels/906917846754418770/908054304332603402/1123177115450343454):
> New farms and stake on Cometa. Seems mainly tendies.
THC-COOP LP > PEPE + ALGO Farming program:
Feature link:
https://app.cometa.farm/
Application ID:
1134936223
---
TENDIES > TENDIES Staking program:
Feature link:
https://app.cometa.farm/stake
Application ID:
1134745460
| 1
|
308,718
| 9,442,719,928
|
IssuesEvent
|
2019-04-15 07:37:29
|
canonical-websites/www.ubuntu.com
|
https://api.github.com/repos/canonical-websites/www.ubuntu.com
|
opened
|
Takeovers are not visible on Android
|
Priority: High
|
Two takeovers are not visible as they have white text on a white background:
- Transforming financial services with multi-cloud
- Optimising IoT bandwidth with delta updates
This only appears on Android, in Chrome.
| 1
|
351,138
| 10,513,078,334
|
IssuesEvent
|
2019-09-27 19:35:37
|
canmet-energy/btap_tasks
|
https://api.github.com/repos/canmet-energy/btap_tasks
|
opened
|
BTAPResults: Costing JSON reorg.
|
Costing Priority High
|
### Description
Currently the costing JSON tree is inconsistent. This causes difficulty when using Tableau, PowerBI and Python. This task is to flatten the data structure into a top level that holds building-wide information or single values, and one level below which contains table-row (array) data.
Below is a sample, but use your judgement on how it should be put together. Ideally, data would be organized by space/zone/air_loop/plant_loop tables.
Tables will not have totals or average rows; that data should be kept on the top level if needed.
```json
{
"rs_means_city": "PRINCE GEORGE",
"rs_means_prov": "BRITISH COLUMBIA",
"surface_opaque_table":[],
"surface_fenestration_table":[],
"space_table":[], // This will include space level costs (lighting for example)...items here should be summed up in the thermal zone table.
"thermal_zone_table":[], // This will include thermal zone level costs if any must include multipliers.
"thermal_zone_equipment":[], // this will include zone equipment costs. baseboards, PTACs, etc. Must have name of themal zone
"air_loop_table":[], // This will include all ductwork and other items that make up the airloop. index by system name.
"air_loop_equipment":[], // This will include all the airloop equipment, unit costs, multipliers and totals.
"plant_loop_table":[], // this will include all plant loop information and costing (pipe length, cost, etc)
"plant_loop_equipment_table":[], // this will include plant loop cost items (of type Pumps, boilers, chillers, etc, blanks are okay. Should use OS object name suffix as type. )
"coil_table":[], // this will include all coils.. not sure if this is a plant or zone item.. so keeping it here? If so zone/loop name should be. blanks are ok
"fan_table":[], //We currently only have a supply fan for each air loop.. should this be broken out for when we have more fans in a loop or elsewhere?
"costing_envelope_cost": 377239.91,
"costing_lighting_cost": 377239.91,
"costing_hvac_cost": 377239.91,
"costing_shw_cost": 377239.91
}
```
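The proposed two-level shape (scalars at the top level, row arrays one level down) can be checked or produced mechanically. A small sketch, assuming only the structure shown above; the `split_costing` helper is hypothetical and not part of openstudio-standards:

```python
def split_costing(costing):
    """Partition a costing dict into top-level scalars and table arrays,
    mirroring the two-level structure proposed above."""
    scalars, tables = {}, {}
    for key, value in costing.items():
        if isinstance(value, list):
            tables[key] = value    # table-row (array) data
        else:
            scalars[key] = value   # building-wide / single values
    return scalars, tables

# Tiny subset of the sample JSON above, for illustration.
sample = {
    "rs_means_city": "PRINCE GEORGE",
    "space_table": [],
    "costing_envelope_cost": 377239.91,
}
scalars, tables = split_costing(sample)
```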
### Approach
Create a new_costing hash so that the old and new output formats can be used concurrently until everything is tested and ready to go.
### Testing Plan
### Waiting On
### Repositories Involved
https://github.com/NREL/openstudio-standards/tree/nrcan
25.001 |Cold climate heat pumps for space and hot water heating |Jeremy Sager
|
1.0
|
BTAPResults: Costing JSON reorg. - ### Description
Currently the costing JSON tree is inconsistent. This causes difficulty when using Tableau, PowerBI and Python. This task is to flatten the data structure to consist of a top level that will have building wide information or single data, and one level below which will contain table-row (array) data.
Below is a sample but use your judgement on how it should be put together. Ideally, data would be organized by space/zone/air_loop/plant_loop tables.
Tables will not have totals or average rows.. that data should be kept on the top level if needed.
```json
{
"rs_means_city": "PRINCE GEORGE",
"rs_means_prov": "BRITISH COLUMBIA",
"surface_opaque_table":[],
"surface_fenestration_table":[],
"space_table":[], // This will include space level costs (lighting for example)...items here should be summed up in the thermal zone table.
"thermal_zone_table":[], // This will include thermal zone level costs if any must include multipliers.
"thermal_zone_equipment":[], // this will include zone equipment costs. baseboards, PTACs, etc. Must have name of themal zone
"air_loop_table":[], // This will include all ductwork and other items that make up the airloop. index by system name.
"air_loop_equipment":[], // This will include all the airloop equipment, unit costs, multipliers and totals.
"plant_loop_table":[], // this will include all plant loop information and costing (pipe length, cost, etc)
"plant_loop_equipment_table":[], // this will include plant loop cost items (of type Pumps, boilers, chilllers, etc, blanks are okay. Should use OS object name suffix as type. )
"coil_table":[], // this will include all coils.. not sure if this is a plant or zone item.. so keeping it here? If so zone/loop name should be. blanks are ok
"fan_table":[], //We currently only have a supply fan for each air loop.. should this be broken out for when we have more fans in a loop or elsewhere?
"costing_envelope_cost": 377239.91,
"costing_lighting_cost": 377239.91,
"costing_hvac_cost": 377239.91,
"costing_shw_cost": 377239.91
}
```
### Approach
Create a new_costing hash so we can have the old and new output format can be used concurrently until everything is tested and ready to go.
### Testing Plan
### Waiting On
### Repositories Involved
https://github.com/NREL/openstudio-standards/tree/nrcan
25.001 | Cold climate heat pumps for space and hot water heating | Jeremy Sager
|
priority
|
btapresults costing json reorg description currently the costing json tree is inconsistent this causes difficulty when using tableau powerbi and python this task is to flatten the data structure to consist of a top level that will have building wide information or single data and one level below which will contain table row array data below is a sample but use your judgement on how it should be put together ideally data would be organized by space zone air loop plant loop tables tables will not have totals or average rows that data should be kept on the top level if needed json rs means city prince george rs means prov british columbia surface opaque table surface fenestration table space table this will include space level costs lighting for example items here should be summed up in the thermal zone table thermal zone table this will include thermal zone level costs if any must include multipliers thermal zone equipment this will include zone equipment costs baseboards ptacs etc must have name of themal zone air loop table this will include all ductwork and other items that make up the airloop index by system name air loop equipment this will include all the airloop equipment unit costs multipliers and totals plant loop table this will include all plant loop information and costing pipe length cost etc plant loop equipment table this will include plant loop cost items of type pumps boilers chilllers etc blanks are okay should use os object name suffix as type coil table this will include all coils not sure if this is a plant or zone item so keeping it here if so zone loop name should be blanks are ok fan table we currently only have a supply fan for each air loop should this be broken out for when we have more fans in a loop or elsewhere costing envelope cost costing lighting cost costing hvac cost costing shw cost approach create a new costing hash so we can have the old and new output format can be used concurrently until everything is tested and ready to go 
testing plan waiting on repositories involved cold climate heat pumps for space and hot water heating jeremy sager
| 1
|
212,058
| 7,227,821,802
|
IssuesEvent
|
2018-02-11 01:14:08
|
Templarian/MaterialDesign
|
https://api.github.com/repos/Templarian/MaterialDesign
|
closed
|
Help Icon :: Lifebuoy Ring
|
High Priority Icon Request
|
It occurs to me that there isn't a clear "help" icon that is suitable when linking to a support department, for example. The traditional lifebuoy ring / donut would be a better fit than anything containing a question mark from the current icon set. The question mark icons, I feel, are more context-driven help indicators, rather than an icon indicating general help.
I would suggest the icon of a lifebuoy ring be created with the standard contrasted/striped pattern (it usually exists as a two-tone of red and white when shown in colour), with the optional potential of adding an encompassing string which can either be illustrated over the ring itself (creating a void) or surrounding the ring, sometimes shown as 4 "ears" that loop away from the ring.
|
1.0
|
Help Icon :: Lifebuoy Ring - It occurs to me that there isn't a clear "help" icon that is suitable when linking to a support department, for example. The traditional lifebuoy ring / donut would be a better fit than anything containing a question mark from the current icon set. The question mark icons, I feel, are more context-driven help indicators, rather than an icon indicating general help.
I would suggest the icon of a lifebuoy ring be created with the standard contrasted/striped pattern (it usually exists as a two-tone of red and white when shown in colour), with the optional potential of adding an encompassing string which can either be illustrated over the ring itself (creating a void) or surrounding the ring, sometimes shown as 4 "ears" that loop away from the ring.
|
priority
|
help icon lifebuoy ring it occurs to me that there isn t a clear help icon that is suitable when linking to a support department for example the traditional lifebuoy ring donut would be a better fit than anything containing a question mark from the current icon set the question mark icons i feel are more context driven help indicators rather than an icon indicating general help i would suggest the icon of a lifebuoy ring be created with the standard contrasted striped pattern it usually exists as a two tone of red and white when shown in colour with the optional potential of adding an encompassing string which can either be illustrated over the ring itself creating a void or surrounding the ring sometimes shown as ears that loop away from the ring
| 1
|
185,545
| 6,724,938,853
|
IssuesEvent
|
2017-10-17 01:52:25
|
CatherineEvelyn/wedding-planner
|
https://api.github.com/repos/CatherineEvelyn/wedding-planner
|
opened
|
Create login session
|
priority: 1 high
|
## Problem / motivation
Have the login button on the header toggle between login and logout. If the user is currently logged in, the button will be a logout button; if the user is not currently logged in, the button will be a login button. Also, user-account.html and vendor-account.html need to be password protected (only accessible when the user is logged in).
## Details
* **Role:** Back end.
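The toggle and page protection above can be sketched in a few lines. This is a minimal back-end sketch using a plain dict as a stand-in for a real server session; the function names (`header_button`, `require_login`) and the session key `"user"` are hypothetical:

```python
# Header button toggles on login state; protected pages redirect when the
# session has no logged-in user.

def header_button(session):
    """Return the header button label based on login state."""
    return "Logout" if session.get("user") else "Login"

def require_login(session, page):
    """Serve a protected page only when a user is logged in."""
    if not session.get("user"):
        return "302 Redirect -> /login.html"
    return f"200 OK -> {page}"

session = {}
print(header_button(session))                       # logged out → "Login"
session["user"] = "catherine"
print(header_button(session))                       # logged in → "Logout"
print(require_login(session, "user-account.html"))  # → "200 OK -> user-account.html"
```

In a real deployment the same checks would hang off the framework's session store (e.g. a signed cookie) rather than a dict, but the branching logic is the same.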
|
1.0
|
Create login session - ## Problem / motivation
Have the login button on the header toggle between login and logout. If the user is currently logged in, the button will be a logout button; if the user is not currently logged in, the button will be a login button. Also, user-account.html and vendor-account.html need to be password protected (only accessible when the user is logged in).
## Details
* **Role:** Back end.
|
priority
|
create login session problem motivation have the login button on the header to toggle between login and logout if user is currently logged in the button will be a logout button if user is not currently logged in the button will be a login button also user account html and vendor account html need to be password protected only accessible when user is logged in details role back end
| 1
|
804,285
| 29,482,465,344
|
IssuesEvent
|
2023-06-02 07:08:01
|
opensquare-network/subsquare
|
https://api.github.com/repos/opensquare-network/subsquare
|
closed
|
ReactDOM.render is no longer supported in React 18
|
bug priority:high
|
How to reproduce (test on interlay): http://127.0.0.1:7002/democracy/referendum/53

|
1.0
|
ReactDOM.render is no longer supported in React 18 - How to reproduce, test interlay: http://127.0.0.1:7002/democracy/referendum/53

|
priority
|
reactdom render is no longer supported in react how to reproduce test interlay
| 1
|