Dataset schema (column, dtype, value statistics):

| Column | Dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 distinct value |
| created_at | string | length 19 |
| repo | string | length 5 to 112 |
| repo_url | string | length 34 to 141 |
| action | string | 3 distinct values |
| title | string | length 1 to 855 |
| labels | string | length 4 to 721 |
| body | string | length 1 to 261k |
| index | string | 13 distinct values |
| text_combine | string | length 96 to 261k |
| label | string | 2 distinct values |
| text | string | length 96 to 240k |
| binary_label | int64 | 0 to 1 |
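A minimal sketch (plain Python, no external libraries) of how one record lines up with the declared column types. The field names come from the schema above and the sample values from the first record below; the set of `action` classes is an assumption, and nothing here is an official loader for this dataset.

```python
# Hypothetical sketch: one dataset record as a dict, with checks mirroring
# the declared column types. Sample values are copied from the first record.
row = {
    "id": 26_672_532_099.0,                       # float64
    "type": "IssuesEvent",                        # string, 1 distinct value
    "created_at": "2023-01-26 11:35:18",          # string, always length 19
    "repo": "MattTheLegoman/RealmsInExile",
    "repo_url": "https://api.github.com/repos/MattTheLegoman/RealmsInExile",
    "action": "closed",                           # string, 3 distinct values
    "title": "New Government(s) Issue",
    "labels": "bug priority: high",
    "index": "1.0",                               # string, 13 distinct values
    "label": "priority",                          # string, 2 distinct values
    "binary_label": 1,                            # int64, 0 or 1
}

assert isinstance(row["id"], float)
assert len(row["created_at"]) == 19
# The three action classes are assumed; only "opened" and "closed" appear below.
assert row["action"] in {"opened", "closed", "reopened"}
assert row["binary_label"] in (0, 1)
print("row conforms to schema")
```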

row (Unnamed: 0): 761,224
id: 26,672,532,099
type: IssuesEvent
created_at: 2023-01-26 11:35:18
repo: MattTheLegoman/RealmsInExile
repo_url: https://api.github.com/repos/MattTheLegoman/RealmsInExile
action: closed
title: New Government(s) Issue
labels: bug priority: high
body:
The despotic and orcish governments, although coded correctly, aren't being given out to everyone assigned to them. For instance, only Gundabad receives the orcish government and Abrakahn doesn't receive the despotic government.
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 464,660
id: 13,337,457,486
type: IssuesEvent
created_at: 2020-08-28 09:17:41
repo: zeebe-io/zeebe
repo_url: https://api.github.com/repos/zeebe-io/zeebe
action: closed
title: Run chaos tests on our gke cluster
labels: Priority: High Status: Ready Type: Maintenance
body:
**Description** Currently we run our chaos tests in the infra cluster. This has some requirements and limitations like: * tests should not run to long * we can't easily run/deploy via helm, we have to run the template command to get the yaml files, which then need to be applied to the cluster * elastic is not deployed during the tests * we have no real insights like logs, metrics etc. I would suggest that we try to run our chaos tests still in the jenkins pipeline but deploy them in our google cloud, in a namespace called `zeebe-chaos`. **Pro's** * We can add tests with elastic * We can then use elastic to verify certain things, like all instances are completed or something similar * We are able to root cause issues, since we have access to the logs * We can observe the metrics via grafana **Cons** Right out of my head I have none. Probably we need to find a way to do that in our cluster. But I assume it is enough to store the credentials in our jenkins ?
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 225,955
id: 7,496,651,278
type: IssuesEvent
created_at: 2018-04-08 11:47:48
repo: CS2103JAN2018-F09-B3/main
repo_url: https://api.github.com/repos/CS2103JAN2018-F09-B3/main
action: closed
title: Unable to use coin code to sell coins other than the first one
labels: priority.high type.bug
body:
Inputting any other valid coin code in the sell command will cause CoinBook to sell the first listed coin. Steps: 1. Input "buy Alex Yeoh a/50.0" 2. CoinBook gives a positive result "Bought: Alex Yeoh Amount: 50.0 Price: 1.0 Tags: [friends]" 3. Input "sell Bernice Yu a/5.0" 4. CoinBook gives a positive result "Sold: Alex Yeoh Amount: 45.0 Price: 1.0 Tags: [friends]" which is not correct <sub>[original: nus-cs2103-AY1718S2/pe-round1#306]</sub> Issue created by: @cxingkai
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 516,933
id: 14,991,086,420
type: IssuesEvent
created_at: 2021-01-29 07:43:11
repo: FLEXIcontent/flexicontent-cck
repo_url: https://api.github.com/repos/FLEXIcontent/flexicontent-cck
action: opened
title: Galery script broke form tipping
labels: Priority High bug
body:
i found a strange problem in my demo site if we use gallerific script in item view we can add space in contact form if i switch to elaxtic slide no problem testing : https://flexidemo.com3elles.com/realestate/immobilier-phi-adhoc-toulouse/apartements/item/80-appartement-austria.html go to tab your agent tab
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 759,010
id: 26,577,012,188
type: IssuesEvent
created_at: 2023-01-21 23:48:36
repo: encorelab/ck-board
repo_url: https://api.github.com/repos/encorelab/ck-board
action: opened
title: Task Workspace submit button bug
labels: bug high priority
body:
Students cannot submit work if their web socket connection is interrupted To reproduce: 1. Open the workspace containing an active task 2. Disconnect from wifi 3. Reconnect to wifi 4. Create a comment in the task workspace
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 113,049
id: 4,542,050,556
type: IssuesEvent
created_at: 2016-09-09 19:53:35
repo: bloomberg/bucklescript
repo_url: https://api.github.com/repos/bloomberg/bucklescript
action: closed
title: bspack better error message when file not found
labels: enhancement PRIORITY:HIGH
body:
``` bin>bspack -I ../ocaml/parsing/ -I common -I ext -I syntax -bs-main bspack_main.ml -o bin/bsppx0.ml File "_none_", line 1: Error: I/O error: syntax: No such file or directory ```
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 128,247
id: 5,051,621,481
type: IssuesEvent
created_at: 2016-12-20 22:29:19
repo: larsiusprime/tdrpg-bugs
repo_url: https://api.github.com/repos/larsiusprime/tdrpg-bugs
action: closed
title: Meteor Strike occurs randomly when zapping Blue Crystals, can kill weak party members.
labels: 1 Battle bug DQ CORE Priority HIGH
body:
I've noticed a few times now that long after enemies have moved out of range of my warriors I'll detonate a blue crystal vein next to the enemies and some of my warriors would die even though they are on the opposite side of the screen (like 20 or so squares away). When this happens some of the Healers get stuck repeatedly healing nothing. I've tried recording it but I can't seem to intentionally reproduce it :/ Windows 10 x64 Steam Latest stable recommended version
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 241,729
id: 7,833,793,498
type: IssuesEvent
created_at: 2018-06-16 03:29:10
repo: ThreadNet/ThreadNet
repo_url: https://api.github.com/repos/ThreadNet/ThreadNet
action: closed
title: New layout function for graphs
labels: Due Date: May-1-2018 Highest priority Interactivity
body:
The current layout functions are not very informative. Here is one idea. Along the x-axis, order the nodes by relative time within the threads (so the nodes read from left to right by time). Along the y-axis, order the nodes by frequency, so the most frequent nodes occur at the top, and less frequent nodes appear below them. In this layout, if you read along the top, you will see the most frequent notes in temporal/sequential order.
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 547,953
id: 16,050,143,835
type: IssuesEvent
created_at: 2021-04-22 18:05:12
repo: unfoldingWord/gateway-edit
repo_url: https://api.github.com/repos/unfoldingWord/gateway-edit
action: opened
title: SPIKE []: Determine what's not working on unfoldingWord org
labels: Priority/High
body:
What translation helps/resources aren't working in unfoldingWord org and why? - [ ] TWL - [ ] TW - [ ] TN (legacy) - [ ] TA - [ ] TQ - [ ] ULT - [ ] UST
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 27,504
id: 2,693,453,983
type: IssuesEvent
created_at: 2015-04-01 14:50:25
repo: Connexions/webview
repo_url: https://api.github.com/repos/Connexions/webview
action: closed
title: Editor - Exercise plugin converting letters to numbers
labels: bug High Priority
body:
I derived a copy of http://dev.cnx.org/contents/afe4332a-c97f-4fc4-be27-4e4d384a32d8@7.19:3/Introduction_to_Sociology. The exercises display correctly in the Exercise editor, but the bullet points are converted from lowercase leters to numbers Original Text ![screenshot from 2015-03-31 13 11 56](https://cloud.githubusercontent.com/assets/1394695/6925819/c34ecf7c-d7a7-11e4-9e4a-36a263b4fc7e.png) In Editor ![screenshot from 2015-03-31 13 12 21](https://cloud.githubusercontent.com/assets/1394695/6925824/cc7221a8-d7a7-11e4-89fb-d05f8267444e.png)
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 128,202
id: 5,050,789,167
type: IssuesEvent
created_at: 2016-12-20 19:49:23
repo: fgpv-vpgf/fgpv-vpgf
repo_url: https://api.github.com/repos/fgpv-vpgf/fgpv-vpgf
action: closed
title: Layer Symbology stack can be opened from Layer Menu
labels: addition: change request experience: ux priority: high
body:
- [x] Add menu option (Show legend) to the layers drop down menu (this will add another way of opening the symbology stack) - [x] Fan out icons when hovering over a legend record **Usability Issue:** V4. Unable to find and open the symbology (Type: Design) Inside the legend most testers were not aware that the symbology icons (including animations) were clickable. After assistance some testers were able to open it and asked that an expandable arrow be used to indicate that this part is expandable.
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 193,152
id: 6,881,998,788
type: IssuesEvent
created_at: 2017-11-21 01:15:38
repo: TaoK/PoorMansTSqlFormatter
repo_url: https://api.github.com/repos/TaoK/PoorMansTSqlFormatter
action: closed
title: [noformat][/noformat] tags are broken
labels: priority-high resolved
body:
@StrayChild01 notes that [noformat][/noformat] comment region markers don't work in VS Code, and it turns out they don't work at all in any of the environments, presumably since the big JS-compatibility refactor in 1.6.*!
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 274,532
id: 8,562,796,078
type: IssuesEvent
created_at: 2018-11-09 11:52:23
repo: fedora-infra/bodhi
repo_url: https://api.github.com/repos/fedora-infra/bodhi
action: closed
title: When Bodhi is run in Python 3, production.ini examples do not work due to % character interpolation
labels: Crash High priority
body:
The example ```production.ini``` contains some examples that use ```%s``` to be replaced at run time. This works in Python 2, but in Python 3 you get a traceback: ``` [root@bodhi-backend01 ~][STG]# pshell-3 /etc/bodhi/production.ini Traceback (most recent call last): File "/usr/bin/pshell-3", line 11, in <module> load_entry_point('pyramid==1.9.2', 'console_scripts', 'pshell')() File "/usr/lib/python3.7/site-packages/pyramid/scripts/pshell.py", line 19, in main return command.run() File "/usr/lib/python3.7/site-packages/pyramid/scripts/pshell.py", line 124, in run env = self.bootstrap(config_uri, options=config_vars) File "/usr/lib/python3.7/site-packages/pyramid/paster.py", line 107, in bootstrap app = get_app(config_uri, options=options) File "/usr/lib/python3.7/site-packages/pyramid/paster.py", line 28, in get_app return loader.get_wsgi_app(name, options) File "/usr/lib/python3.7/site-packages/plaster_pastedeploy/__init__.py", line 131, in get_wsgi_app global_conf=defaults) File "/usr/lib/python3.7/site-packages/paste/deploy/loadwsgi.py", line 247, in loadapp return loadobj(APP, uri, name=name, **kw) File "/usr/lib/python3.7/site-packages/paste/deploy/loadwsgi.py", line 271, in loadobj global_conf=global_conf) File "/usr/lib/python3.7/site-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext global_conf=global_conf) File "/usr/lib/python3.7/site-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig return loader.get_context(object_type, name, global_conf) File "/usr/lib/python3.7/site-packages/paste/deploy/loadwsgi.py", line 430, in get_context local_conf[option] = self.parser.get(section, option) File "/usr/lib64/python3.7/configparser.py", line 799, in get d) File "/usr/lib/python3.7/site-packages/paste/deploy/loadwsgi.py", line 95, in before_get value, defaults) File "/usr/lib64/python3.7/configparser.py", line 394, in before_get self._interpolate_some(parser, option, L, value, section, defaults, 1) File "/usr/lib64/python3.7/configparser.py", 
line 444, in _interpolate_some "found: %r" % (rest,)) configparser.InterpolationSyntaxError: Error in file /etc/bodhi/production.ini: '%' must be followed by '%' or '(', found: '%{uid}' ``` The fix is simple - we should adjust the settings to use ```%%``` instead of ```%```, and we should document this at the top of the file.
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 281,076
id: 8,690,474,965
type: IssuesEvent
created_at: 2018-12-03 21:41:22
repo: BendroCorp/bendrocorp-app
repo_url: https://api.github.com/repos/BendroCorp/bendrocorp-app
action: opened
title: Implement training system
labels: api support required effort: high feature priority:normal
body:
Implement a system for training/re-training employees/members. The training system will work with #72 to add badges to a members profile. To-Do: - [ ] Training Course (Container) (title, description, archived, badge) - [ ] Training Item - [ ] Training Item Completion - [ ] Training Item Quiz - [ ] Training Item Quiz Answer - [ ] Training Quiz Completion - [ ] Training Course Completion?
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 194,519
id: 6,895,587,084
type: IssuesEvent
created_at: 2017-11-23 14:24:50
repo: DOAJ/doaj
repo_url: https://api.github.com/repos/DOAJ/doaj
action: opened
title: DNS
labels: high priority
body:
Hello CL We have created that other site bestpracticeguide.doaj.org and we have mapped this subdomain. Could you do the same as you did for the blog? Here you have the instructions https://en.support.wordpress.com/domains/map-subdomain/ I have also submitted a ticket for this. It is quite of urgent as Lars is expecting this to be done and up by next week. Can you help me, please? Thanks
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 251,806
id: 8,027,792,257
type: IssuesEvent
created_at: 2018-07-27 10:19:32
repo: kowala-tech/kcoin
repo_url: https://api.github.com/repos/kowala-tech/kcoin
action: opened
title: Myetherwallet app updates for Ledger
labels: High priority andromeda-launch
body:
- [ ] ledger send not working (console issue; `generate transaction` pane) - [ ] ledger guide (landing page) is wrong - [ ] error message when ledger not connected (currently a bunch of JSON) - [ ] ethplorer.io links (visible on 'choose your address` page) - [ ] kowledgebase link in footer - [ ] Help link in main menu
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 105,414
id: 4,235,515,312
type: IssuesEvent
created_at: 2016-07-05 15:25:50
repo: tulibraries/ojs
repo_url: https://api.github.com/repos/tulibraries/ojs
action: opened
title: Automated DOIs via CrossRef
labels: Priority High
body:
Investigate current practice of DOI download, prior work towards DOI automation, and confirm successful operation of DOI functionality in upgraded OJS.
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 616,521
id: 19,304,796,431
type: IssuesEvent
created_at: 2021-12-13 10:21:27
repo: transport-nantes/tn_web
repo_url: https://api.github.com/repos/transport-nantes/tn_web
action: closed
title: After making a TBv2 item, need more options
labels: 1-priority high
body:
* create new item * get URL as user * view as editor
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 65,675
id: 3,237,371,101
type: IssuesEvent
created_at: 2015-10-14 11:30:42
repo: mantidproject/mantid
repo_url: https://api.github.com/repos/mantidproject/mantid
action: closed
title: plotMD issue
labels: Component: GUI Group: Vates Priority: High
body:
This issue was originally [TRAC 11713](http://trac.mantidproject.org/mantid/ticket/11713) There is no option to change the normalisation once you have made a plot using plotMD. We should have a menu option on open plots of this type to switch between, none, number of events and volume normalisation. - - - - Keywords: vates
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 810,397
id: 30,240,663,806
type: IssuesEvent
created_at: 2023-07-06 13:19:48
repo: calcom/cal.com
repo_url: https://api.github.com/repos/calcom/cal.com
action: closed
title: [CAL-1704] App side menu
labels: ✨ feature High priority 🏰 Organizations
body:
Implement the side menu to support organization switching along with other changes to the layout and items * [x] #8957 <sub>From [SyncLinear.com](https://synclinear.com) | [CAL-1704](https://linear.app/calcom/issue/CAL-1704/app-side-menu)</sub>
index: 1.0
label: priority
binary_label: 1

row (Unnamed: 0): 720,064
id: 24,777,086,271
type: IssuesEvent
created_at: 2022-10-23 21:24:23
repo: radical-cybertools/radical.pilot
repo_url: https://api.github.com/repos/radical-cybertools/radical.pilot
action: closed
title: copy provided TD when creating a worker description in raptor master
labels: topic:execution type:enhancement priority:high comp:agent:executor comp:raptor
body:
Otherwise config gets changed... Note: Fix raptor examples, not RP API layer...
index: 1.0
label: priority
binary_label: 1
195,000
6,901,541,311
IssuesEvent
2017-11-25 08:56:53
goby-lang/goby
https://api.github.com/repos/goby-lang/goby
closed
Bug: unstable argument with `String#to_*` and `Range#each`
bug in progress Parser Priority High
The following causes an invalid ArgumentError:

```ruby
def foo(k)
  b = "4".to_f
  (1..k).each do |x|
    puts b + x
  end
end

foo(10)
#=> ArgumentError: Expect 0 argument. got=1
#=> from /Users/hachi8833/deve/goby_sandbox/47sandbox.gb:2
#=> from /Users/hachi8833/deve/goby_sandbox/47sandbox.gb:15
```

The same errors occur if you change `"4".to_f` to `"4".to_d` or `"4".to_i`. But it is strange that the following works:

- if you add any lines or comments around `b = "4".to_f`

```ruby
def foo(k)
  b = "4".to_f
  # dummy comment
  (1..k).each do |x|
    puts b + x
  end
end

foo(10) #=> (works)
```

```ruby
def foo(k)
  b = "4".to_f
  puts k
  (1..k).each do |x|
    puts b + x
  end
end

foo(10) #=> (works)
```

- if you change `"4".to_f` to `4`

```ruby
def foo(k)
  b = 4
  (1..k).each do |x|
    puts b + x
  end
end

foo(10) #=> (works)
```

## Assumption

The following conditions might be relevant:

1. the method has one or more arguments
2. the first line of the method contains `String#to_*`
3. the second line contains a Range object with the argument and `#each`, like:
   - `(k..10).each` causes the error
   - `(10..k).each` causes the error
   - `puts (k..10)` or `puts (10..k)` do not cause the error
1.0
Bug: unstable argument with `String#to_*` and `Range#each` - The following causes invalid ArgumentError: ```ruby def foo(k) b = "4".to_f (1..k).each do |x| puts b + x end end foo(10) #=> ArgumentError: Expect 0 argument. got=1 #=> from /Users/hachi8833/deve/goby_sandbox/47sandbox.gb:2 #=> from /Users/hachi8833/deve/goby_sandbox/47sandbox.gb:15 ``` Same errors are occured if you change `"4".to_f` to `"4".to_d` or `"4".to_i`. But it is strange that the following works: - if you add any lines or comments around `b = "4".to_f ` ```ruby def foo(k) b = "4".to_f # dummy comment (1..k).each do |x| puts b + x end end foo(10) #=> (works) ``` ```ruby def foo(k) b = "4".to_f puts k (1..k).each do |x| puts b + x end end foo(10) #=> (works) ``` - if you change `"4".to_f` to `4` ```ruby def foo(k) b = 4 (1..k).each do |x| puts b + x end end foo(10) #=> (works) ``` ## Assumption The following condition might be relevant: 1. the method has one or more arguments 2. the first line of the methods contains `String#to_*` 3. the second line contains a Range object with the argument and `#each` like: - `(k..10).each` causes the error - `(10..k).each` causes the error - `puts (k..10)` or `puts (10..k)` do not cause the error
priority
bug unstable argument with string to and range each the following causes invalid argumenterror ruby def foo k b to f k each do x puts b x end end foo argumenterror expect argument got from users deve goby sandbox gb from users deve goby sandbox gb same errors are occured if you change to f to to d or to i but it is strange that the following works if you add any lines or comments around b to f ruby def foo k b to f dummy comment k each do x puts b x end end foo works ruby def foo k b to f puts k k each do x puts b x end end foo works if you change to f to ruby def foo k b k each do x puts b x end end foo works assumption the following condition might be relevant the method has one or more arguments the first line of the methods contains string to the second line contains a range object with the argument and each like k each causes the error k each causes the error puts k or puts k do not cause the error
1
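As a side-check of the goby report above: the identical pattern in MRI Ruby is valid and runs, which supports the suspicion (per the issue's "Parser" label) that this is a goby parser bug rather than invalid user code. This is a sketch only — `map` replaces `puts`/`each` so the result can be asserted:

```ruby
# Same shape as the goby reproduction: a method argument used as a Range
# bound immediately after a String#to_f call on the method's first line.
# MRI Ruby accepts this; goby (at the time of the report) raised
# "ArgumentError: Expect 0 argument. got=1" here.
def foo(k)
  b = "4".to_f
  (1..k).map { |x| b + x }
end

foo(3)  # => [5.0, 6.0, 7.0]
```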
204,327
7,086,957,812
IssuesEvent
2018-01-11 16:19:16
andrewvt/HPTS
https://api.github.com/repos/andrewvt/HPTS
closed
Policy Form UI Enhancements
Awaiting Review High Priority enhancement
1. Can you make the notes field bigger, to reduce scrolling? 2. Have the option of clicking on multiple subject areas at one time, reducing the number of clicks.
1.0
Policy Form UI Enhancements - 1. Can you make the notes field bigger, to reduce scrolling? 2. Have the option of clicking on multiple subject areas at one time, reducing the number of clicks.
priority
policy form ui enhancements can you make the notes field bigger reduce the scrolling down have the option of clicking on multiple subject areas at one time reducing the number of clicks
1
531,141
15,441,733,635
IssuesEvent
2021-03-08 06:29:52
azjezz/psl
https://api.github.com/repos/azjezz/psl
closed
Filesystem component
Priority: High Status: Available Type: Enhancement
Introduce a filesystem component containing a replacement for the following PHP builtin filesystem functions:

- basename ( `Psl\Filesystem\basename` )
- chgrp ( `Psl\Filesystem\chgrp` )
- chmod ( `Psl\Filesystem\chmod` )
- chown ( `Psl\Filesystem\chown` )
- copy ( `Psl\Filesystem\copy` )
- dirname ( `Psl\Filesystem\dirname` )
- file_exists ( `Psl\Filesystem\file_exists` )
- glob ( `Psl\Filesystem\glob` )
- is_dir ( `Psl\Filesystem\is_dir` )
- is_executable ( `Psl\Filesystem\is_executable` )
- is_file ( `Psl\Filesystem\is_file` )
- is_link ( `Psl\Filesystem\is_link` )
- is_readable ( `Psl\Filesystem\is_readable` )
- is_writable ( `Psl\Filesystem\is_writable` )
- lchgrp ( `Psl\Filesystem\link_chgrp` )
- lchown ( `Psl\Filesystem\link_chown` )
- link ( `Psl\Filesystem\link` )
- mkdir ( `Psl\Filesystem\mkdir` )
- realpath ( `Psl\Filesystem\realpath` )
- rename ( `Psl\Filesystem\rename` )
- rmdir ( `Psl\Filesystem\rmdir` )
- symlink ( `Psl\Filesystem\symlink` )
- touch ( `Psl\Filesystem\touch` )
- unlink ( `Psl\Filesystem\unlink` )
1.0
Filesystem component - Introduce a filesystem component containing a replacement for the following PHP builtin filesystem functions: - basename ( `Psl\Filesystem\basename` ) - chgrp ( `Psl\Filesystem\chgrp` ) - chmod ( `Psl\Filesystem\chmod` ) - chown ( `Psl\Filesystem\chown` ) - copy ( `Psl\Filesystem\copy` ) - dirname ( `Psl\Filesystem\dirname` ) - file_exists ( `Psl\Filesystem\file_exists` ) - glob ( `Psl\Filesystem\glob` ) - is_dir ( `Psl\Filesystem\is_dir` ) - is_executable ( `Psl\Filesystem\is_executable` ) - is_file ( `Psl\Filesystem\is_file` ) - is_link ( `Psl\Filesystem\is_link` ) - is_readable ( `Psl\Filesystem\is_readable` ) - is_writable ( `Psl\Filesystem\is_writable` ) - lchgrp ( `Psl\Filesystem\link_chgrp` ) - lchown ( `Psl\Filesystem\link_chown` ) - link ( `Psl\Filesystem\link` ) - mkdir ( `Psl\Filesystem\mkdir` ) - realpath ( `Psl\Filesystem\realpath` ) - rename ( `Psl\Filesystem\rename` ) - rmdir ( `Psl\Filesystem\rmdir` ) - symlink ( `Psl\Filesystem\symlink` ) - touch ( `Psl\Filesystem\touch` ) - unlink ( `Psl\Filesystem\unlink` )
priority
filesystem component introduce a filesystem component containing a replacement for the following phps builtin filesystem functions basename psl filesystem basename chgrp psl filesystem chgrp chmod psl filesystem chmod chown psl filesystem chown copy psl filesystem copy dirname psl filesystem dirname file ​exists psl filesystem file exists glob psl filesystem glob is ​dir psl filesystem is dir is ​executable psl filesystem is executable is ​file psl filesystem is file is ​link psl filesystem is link is ​readable psl filesystem is readable is ​writable psl filesystem is writable lchgrp psl filesystem link chgrp lchown psl filesystem link chown link psl filesystem link mkdir psl filesystem mkdir realpath psl filesystem realpath rename psl filesystem rename rmdir psl filesystem rmdir symlink psl filesystem symlink touch psl filesystem touch unlink psl filesystem unlink
1
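The PSL record above describes a component that is essentially a set of thin, explicitly named wrappers over builtin filesystem calls. As an illustration only — Ruby stdlib standing in for PHP, with a hypothetical `Filesystem` module rather than PSL's actual API — the wrapper shape looks like:

```ruby
require "fileutils"

# Hypothetical module mirroring the naming idea of Psl\Filesystem:
# one small, predictably named function per builtin filesystem operation.
module Filesystem
  def self.basename(path)
    File.basename(path)
  end

  def self.dirname(path)
    File.dirname(path)
  end

  def self.file_exists(path)
    File.exist?(path)
  end

  def self.mkdir(path)
    FileUtils.mkdir_p(path)  # creates intermediate directories too
  end
end
```

The point of such wrappers (in PSL and in this sketch alike) is a single, consistent namespace over functions the base language scatters across several builtins.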
228,720
7,567,037,821
IssuesEvent
2018-04-22 04:34:20
compodoc/compodoc
https://api.github.com/repos/compodoc/compodoc
closed
[BUG] Error: Could not find the node's symbol.
1. Type: Bug Context : Routing Priority: High Status: Accepted Time: ~1 hour
<!-- > Please follow the issue template below for bug reports and queries. > For issue, start the label of the title with [BUG] > For feature requests, start the label of the title with [FEATURE] and explain your use case and ideas clearly below, you can remove sections which are not relevant. --> ##### **Overview of the issue** After upgrading to compodoc 1.1.1 from 1.0.9 I got the `reason: Error: Could not find the node's symbol.` error ##### **Operating System, Node.js, npm, compodoc version(s)** compodoc: 1.1.1 node: 9.8.0 mac: High Sierra ##### **Angular configuration, a `package.json` file in the root folder** Angular: 5.2.9 ##### **Compodoc installed globally or locally ?** locally installed compodoc ##### **Motivation for or Use Case** Cannot generate docs anymore ##### **Reproduce the error** npm script: `./node_modules/.bin/compodoc -p src/tsconfig.app.json -d ./docs -t --theme stripe --disableCoverage --hideGenerator -n \"My Documentation\"` ##### **Related issues** 1.0.9 compodoc version was working well ##### **Suggest a Fix** <!-- if you can't fix the bug yourself, perhaps you can point to what might be causing the problem (line of code or commit) --> <!-- Love compodoc? Please consider supporting our collective: 👉 https://opencollective.com/compodoc/donate -->
1.0
[BUG] Error: Could not find the node's symbol. - <!-- > Please follow the issue template below for bug reports and queries. > For issue, start the label of the title with [BUG] > For feature requests, start the label of the title with [FEATURE] and explain your use case and ideas clearly below, you can remove sections which are not relevant. --> ##### **Overview of the issue** After upgrading to compodoc 1.1.1 from 1.0.9 I got the `reason: Error: Could not find the node's symbol.` error ##### **Operating System, Node.js, npm, compodoc version(s)** compodoc: 1.1.1 node: 9.8.0 mac: High Sierra ##### **Angular configuration, a `package.json` file in the root folder** Angular: 5.2.9 ##### **Compodoc installed globally or locally ?** locally installed compodoc ##### **Motivation for or Use Case** Cannot generate docs anymore ##### **Reproduce the error** npm script: `./node_modules/.bin/compodoc -p src/tsconfig.app.json -d ./docs -t --theme stripe --disableCoverage --hideGenerator -n \"My Documentation\"` ##### **Related issues** 1.0.9 compodoc version was working well ##### **Suggest a Fix** <!-- if you can't fix the bug yourself, perhaps you can point to what might be causing the problem (line of code or commit) --> <!-- Love compodoc? Please consider supporting our collective: 👉 https://opencollective.com/compodoc/donate -->
priority
error could not find the node s symbol please follow the issue template below for bug reports and queries for issue start the label of the title with for feature requests start the label of the title with and explain your use case and ideas clearly below you can remove sections which are not relevant overview of the issue after upgrading to compodoc from i got the reason error could not find the node s symbol error operating system node js npm compodoc version s compodoc node mac high sierra angular configuration a package json file in the root folder angular compodoc installed globally or locally locally installed compodoc motivation for or use case cannot generate docs anymore reproduce the error npm script node modules bin compodoc p src tsconfig app json d docs t theme stripe disablecoverage hidegenerator n my documentation related issues compodoc version was working well suggest a fix if you can t fix the bug yourself perhaps you can point to what might be causing the problem line of code or commit love compodoc please consider supporting our collective 👉
1
521,901
15,144,841,823
IssuesEvent
2021-02-11 02:30:00
SACOOP-PE/SIA-Analitica-PE
https://api.github.com/repos/SACOOP-PE/SIA-Analitica-PE
opened
improve error predicates (urgent)
high priority
[correcciones.docx](https://github.com/SACOOP-PE/SIA-Analitica-PE/files/5962901/correcciones.docx) I am sending the corrections; please take care that what is implemented matches the document that is in Teams. We will have that document reviewed by AL and by Oscar Salhuana on Monday the 15th. @edwin-roque-0723 please pick up this issue with @JumpNShootMan. Priority 1
1.0
improve error predicates (urgent) - [correcciones.docx](https://github.com/SACOOP-PE/SIA-Analitica-PE/files/5962901/correcciones.docx) I am sending the corrections; please take care that what is implemented matches the document that is in Teams. We will have that document reviewed by AL and by Oscar Salhuana on Monday the 15th. @edwin-roque-0723 please pick up this issue with @JumpNShootMan. Priority 1
priority
improve error predicates urgent remito las correcciones please tener cuidado que lo implementado sea lo mismo que el documento que está en el teams dicho documento lo hacemos revisar por al y por oscar salhuana lunes edwin roque please levanta este issue con jumpnshootman prioridad
1
518,822
15,035,443,306
IssuesEvent
2021-02-02 14:09:09
netdata/netdata
https://api.github.com/repos/netdata/netdata
closed
Latest update failure on Docker
area/packaging bug needs triage priority/high
<!-- When creating a bug report please: - Verify first that your issue is not already reported on GitHub. - Test if the latest release and master branch are affected too. --> ##### Bug report summary <!-- Provide a clear and concise description of the bug you're experiencing. --> On startup after new update, the container crashes with this log: ``` Error loading shared library libprotobuf.so.24: No such file or directory (needed by /usr/sbin/netdata) Error relocating /usr/sbin/netdata: __fstat_time64: symbol not found Error relocating /usr/sbin/netdata: __getrusage_time64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal13OnShutdownRunEPFvPKvES3_: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal9ArenaImpl28AllocateAlignedAndAddCleanupEjPFvPvE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf5Arena19CallDestructorHooksEv: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal17AssignDescriptorsEPKNS1_15DescriptorTableEb: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal18EpsCopyInputStream12DoneFallbackEPKci: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf11StringPiece18LogFatalSizeTooBigEjPKc: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf5Arena21AllocateAlignedNoHookEj: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal10VerifyUTF8ENS0_11StringPieceEPKc: symbol not found Error relocating /usr/sbin/netdata: __gmtime64_r: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal16ReadSizeFallbackEPKcj: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal12ExtensionSet17RegisterExtensionEPKNS0_11MessageLiteEihbb: symbol not found Error relocating /usr/sbin/netdata: __nanosleep_time64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal9ArenaImplD1Ev: symbol 
not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal10WireFormat37InternalSerializeUnknownFieldsToArrayERKNS0_15UnknownFieldSetEPhPNS0_2io19EpsCopyOutputStreamE: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf11MessageLite17SerializeToStringEPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal20RepeatedPtrFieldBase7ReserveEi: symbol not found Error relocating /usr/sbin/netdata: __mktime64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal14WireFormatLite16VerifyUtf8StringEPKciNS2_9OperationES4_: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf5Arena17OnArenaAllocationEPKSt9type_infoj: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal15ReadTagFallbackEPKcj: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf8internal9ArenaImpl14SpaceAllocatedEv: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal13ReflectionOps5MergeERKNS0_7MessageEPS3_: symbol not found Error relocating /usr/sbin/netdata: __select_time64: symbol not found Error relocating /usr/sbin/netdata: __pselect_time64: symbol not found Error relocating /usr/sbin/netdata: __utimes_time64: symbol not found Error relocating /usr/sbin/netdata: __gettimeofday_time64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal11InitSCCImplEPNS1_11SCCInfoBaseE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal13VerifyVersionEiiPKc: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf15UnknownFieldSet9MergeFromERKS1_: symbol not found Error relocating /usr/sbin/netdata: __localtime64_r: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal11LogFinisheraSERNS1_10LogMessageE: symbol not found Error relocating /usr/sbin/netdata: 
_ZN6google8protobuf2io19EpsCopyOutputStream30WriteStringMaybeAliasedOutlineEjRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPh: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf15UnknownFieldSet13ClearFallbackEv: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal17UnknownFieldParseEyPNS0_15UnknownFieldSetEPKcPNS1_12ParseContextE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf2io19EpsCopyOutputStream19EnsureSpaceFallbackEPh: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal10LogMessageC1ENS0_8LogLevelEPKci: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf23ShutdownProtobufLibraryEv: symbol not found Error relocating /usr/sbin/netdata: __stat_time64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal20RepeatedPtrFieldBase14InternalExtendEi: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal10LogMessagelsEPKc: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal14AddDescriptorsEPKNS1_15DescriptorTableE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal10LogMessageD1Ev: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal9ArenaImpl4InitEv: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal24ComputeUnknownFieldsSizeERKNS1_16InternalMetadataEjPNS1_10CachedSizeE: symbol not found Error relocating /usr/sbin/netdata: __recvmmsg_time64: symbol not found Error relocating /usr/sbin/netdata: __clock_gettime64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal24InlineGreedyStringParserEPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPKcPNS1_12ParseContextE: symbol not found Error relocating /usr/sbin/netdata: __lstat_time64: symbol not found Error relocating /usr/sbin/netdata: 
_ZN6google8protobuf8internal17VarintParseSlow64EPKcj: symbol not found Error relocating /usr/sbin/netdata: __localtime64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf12FieldOptions16default_instanceEv: symbol not found Error relocating /usr/sbin/netdata: __time64: symbol not found Error relocating /usr/sbin/netdata: descriptor_table_google_2fprotobuf_2fdescriptor_2eproto: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message11GetTypeNameB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message11GetTypeNameB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message11GetTypeNameB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message11GetTypeNameB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message25InitializationErrorStringB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message25InitializationErrorStringB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message25InitializationErrorStringB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message25InitializationErrorStringB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message21CheckTypeAndMergeFromERKNS0_11MessageLiteE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message21CheckTypeAndMergeFromERKNS0_11MessageLiteE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message21CheckTypeAndMergeFromERKNS0_11MessageLiteE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message21CheckTypeAndMergeFromERKNS0_11MessageLiteE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message20DiscardUnknownFieldsEv: symbol not found Error relocating /usr/sbin/netdata: 
_ZN6google8protobuf7Message20DiscardUnknownFieldsEv: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message20DiscardUnknownFieldsEv: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message20DiscardUnknownFieldsEv: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message13SpaceUsedLongEv: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message13SpaceUsedLongEv: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message13SpaceUsedLongEv: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message13SpaceUsedLongEv: symbol not found Error relocating /usr/sbin/netdata: _ZTIN6google8protobuf7MessageE: symbol not found Error relocating /usr/sbin/netdata: _ZTIN6google8protobuf7MessageE: symbol not found Error relocating /usr/sbin/netdata: _ZTIN6google8protobuf7MessageE: symbol not found Error relocating /usr/sbin/netdata: _ZTIN6google8protobuf7MessageE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal26fixed_address_empty_stringB5cxx11E: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal14DestroyMessageEPKv: symbol not found Error relocating /usr/sbin/netdata: _ZTVN6google8protobuf11MessageLiteE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf15UnknownFieldSet16default_instanceEv: symbol not found Error relocating /usr/sbin/netdata: _ZTVN6google8protobuf7MessageE: symbol not found Error relocating /usr/sbin/netdata: __stat_time64: symbol not found Error relocating /usr/sbin/netdata: __fstat_time64: symbol not found Error relocating /usr/sbin/netdata: __lstat_time64: symbol not found ``` ##### OS / Environment <!-- Provide as much information about your environment (which operating system and distribution you're using, if Netdata is running in a container, etc.) as possible to allow us reproduce this bug faster. 
To get this information, execute the following commands based on your operating system:
- uname -a; grep -Hv "^#" /etc/*release # Linux
- uname -a; uname -K # BSD
- uname -a; sw_vers # macOS
Place the output from the command in the code section below. -->
```
$ uname -a; grep -Hv "^#" /etc/*release
Linux MYHOSTNAME 5.4.83-v7l+ #1379 SMP Mon Dec 14 13:11:54 GMT 2020 armv7l GNU/Linux
/etc/os-release:PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
/etc/os-release:NAME="Raspbian GNU/Linux"
/etc/os-release:VERSION_ID="10"
/etc/os-release:VERSION="10 (buster)"
/etc/os-release:VERSION_CODENAME=buster
/etc/os-release:ID=raspbian
/etc/os-release:ID_LIKE=debian
/etc/os-release:HOME_URL="http://www.raspbian.org/"
/etc/os-release:SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
/etc/os-release:BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
```
##### Netdata version
<!-- Provide output of `netdata -V`. If Netdata is running, execute: $(ps aux | grep -E -o "[a-zA-Z/]+netdata ") -V -->
The closest I can give is the image digest since the container fails to properly start:
```
$ docker pull netdata/netdata
Using default tag: latest
latest: Pulling from netdata/netdata
Digest: sha256:e8d5fb55835a62580ed5aabeb518aebbe620030190845572de7d705e71286f63
Status: Image is up to date for netdata/netdata:latest
docker.io/netdata/netdata:latest
```
##### Component Name
<!-- Let us know which component is affected by the bug. Our code is structured according to its component, so the component name is the same as the top level directory of the repository. For example, a bug in the dashboard would be under the web component. -->
The entire Docker container.
##### Steps To Reproduce
<!-- Describe how you found this bug and how we can reproduce it, preferably with a minimal test-case scenario. If you'd like to attach larger files, use gist.github.com and paste in links. -->
The bug appeared before I updated my docker. Even after `apt update && apt full-upgrade`, the bug is still there.
docker-compose.yml:
```
version: "3.8"

services:
  netdata:
    image: netdata/netdata
    container_name: netdata
    restart: "no"
    security_opt:
      - apparmor:unconfined
    cap_add:
      - SYS_PTRACE
    environment:
      - TZ=${TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - netdataconfig:/etc/netdata
      - netdatalib:/var/lib/netdata
      - netdatacache:/var/cache/netdata
      - /etc/os-release:/host/etc/os-release:ro
      - /etc/passwd:/host/etc/passwd:ro
      - /etc/group:/host/etc/group:ro
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
```
##### Expected behavior
<!-- Provide a clear and concise description of what you expected to happen. -->
As before the last update, I expected the container to start and have very well designed graphs :D Thanks for the container guys, I love it despite this bug :)
1.0
Latest update failure on Docker - <!-- When creating a bug report please: - Verify first that your issue is not already reported on GitHub. - Test if the latest release and master branch are affected too. --> ##### Bug report summary <!-- Provide a clear and concise description of the bug you're experiencing. --> On startup after new update, the container crashes with this log: ``` Error loading shared library libprotobuf.so.24: No such file or directory (needed by /usr/sbin/netdata) Error relocating /usr/sbin/netdata: __fstat_time64: symbol not found Error relocating /usr/sbin/netdata: __getrusage_time64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal13OnShutdownRunEPFvPKvES3_: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal9ArenaImpl28AllocateAlignedAndAddCleanupEjPFvPvE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf5Arena19CallDestructorHooksEv: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal17AssignDescriptorsEPKNS1_15DescriptorTableEb: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal18EpsCopyInputStream12DoneFallbackEPKci: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf11StringPiece18LogFatalSizeTooBigEjPKc: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf5Arena21AllocateAlignedNoHookEj: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal10VerifyUTF8ENS0_11StringPieceEPKc: symbol not found Error relocating /usr/sbin/netdata: __gmtime64_r: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal16ReadSizeFallbackEPKcj: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal12ExtensionSet17RegisterExtensionEPKNS0_11MessageLiteEihbb: symbol not found Error relocating /usr/sbin/netdata: __nanosleep_time64: symbol not found Error relocating /usr/sbin/netdata: 
_ZN6google8protobuf8internal9ArenaImplD1Ev: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal10WireFormat37InternalSerializeUnknownFieldsToArrayERKNS0_15UnknownFieldSetEPhPNS0_2io19EpsCopyOutputStreamE: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf11MessageLite17SerializeToStringEPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal20RepeatedPtrFieldBase7ReserveEi: symbol not found Error relocating /usr/sbin/netdata: __mktime64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal14WireFormatLite16VerifyUtf8StringEPKciNS2_9OperationES4_: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf5Arena17OnArenaAllocationEPKSt9type_infoj: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal15ReadTagFallbackEPKcj: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf8internal9ArenaImpl14SpaceAllocatedEv: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal13ReflectionOps5MergeERKNS0_7MessageEPS3_: symbol not found Error relocating /usr/sbin/netdata: __select_time64: symbol not found Error relocating /usr/sbin/netdata: __pselect_time64: symbol not found Error relocating /usr/sbin/netdata: __utimes_time64: symbol not found Error relocating /usr/sbin/netdata: __gettimeofday_time64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal11InitSCCImplEPNS1_11SCCInfoBaseE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal13VerifyVersionEiiPKc: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf15UnknownFieldSet9MergeFromERKS1_: symbol not found Error relocating /usr/sbin/netdata: __localtime64_r: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal11LogFinisheraSERNS1_10LogMessageE: symbol not found Error 
relocating /usr/sbin/netdata: _ZN6google8protobuf2io19EpsCopyOutputStream30WriteStringMaybeAliasedOutlineEjRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPh: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf15UnknownFieldSet13ClearFallbackEv: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal17UnknownFieldParseEyPNS0_15UnknownFieldSetEPKcPNS1_12ParseContextE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf2io19EpsCopyOutputStream19EnsureSpaceFallbackEPh: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal10LogMessageC1ENS0_8LogLevelEPKci: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf23ShutdownProtobufLibraryEv: symbol not found Error relocating /usr/sbin/netdata: __stat_time64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal20RepeatedPtrFieldBase14InternalExtendEi: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal10LogMessagelsEPKc: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal14AddDescriptorsEPKNS1_15DescriptorTableE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal10LogMessageD1Ev: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal9ArenaImpl4InitEv: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal24ComputeUnknownFieldsSizeERKNS1_16InternalMetadataEjPNS1_10CachedSizeE: symbol not found Error relocating /usr/sbin/netdata: __recvmmsg_time64: symbol not found Error relocating /usr/sbin/netdata: __clock_gettime64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal24InlineGreedyStringParserEPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPKcPNS1_12ParseContextE: symbol not found Error relocating /usr/sbin/netdata: __lstat_time64: symbol not found Error relocating /usr/sbin/netdata: 
_ZN6google8protobuf8internal17VarintParseSlow64EPKcj: symbol not found Error relocating /usr/sbin/netdata: __localtime64: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf12FieldOptions16default_instanceEv: symbol not found Error relocating /usr/sbin/netdata: __time64: symbol not found Error relocating /usr/sbin/netdata: descriptor_table_google_2fprotobuf_2fdescriptor_2eproto: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message11GetTypeNameB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message11GetTypeNameB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message11GetTypeNameB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message11GetTypeNameB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message25InitializationErrorStringB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message25InitializationErrorStringB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message25InitializationErrorStringB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message25InitializationErrorStringB5cxx11Ev: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message21CheckTypeAndMergeFromERKNS0_11MessageLiteE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message21CheckTypeAndMergeFromERKNS0_11MessageLiteE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message21CheckTypeAndMergeFromERKNS0_11MessageLiteE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message21CheckTypeAndMergeFromERKNS0_11MessageLiteE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message20DiscardUnknownFieldsEv: symbol not found Error relocating /usr/sbin/netdata: 
_ZN6google8protobuf7Message20DiscardUnknownFieldsEv: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message20DiscardUnknownFieldsEv: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf7Message20DiscardUnknownFieldsEv: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message13SpaceUsedLongEv: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message13SpaceUsedLongEv: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message13SpaceUsedLongEv: symbol not found Error relocating /usr/sbin/netdata: _ZNK6google8protobuf7Message13SpaceUsedLongEv: symbol not found Error relocating /usr/sbin/netdata: _ZTIN6google8protobuf7MessageE: symbol not found Error relocating /usr/sbin/netdata: _ZTIN6google8protobuf7MessageE: symbol not found Error relocating /usr/sbin/netdata: _ZTIN6google8protobuf7MessageE: symbol not found Error relocating /usr/sbin/netdata: _ZTIN6google8protobuf7MessageE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal26fixed_address_empty_stringB5cxx11E: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf8internal14DestroyMessageEPKv: symbol not found Error relocating /usr/sbin/netdata: _ZTVN6google8protobuf11MessageLiteE: symbol not found Error relocating /usr/sbin/netdata: _ZN6google8protobuf15UnknownFieldSet16default_instanceEv: symbol not found Error relocating /usr/sbin/netdata: _ZTVN6google8protobuf7MessageE: symbol not found Error relocating /usr/sbin/netdata: __stat_time64: symbol not found Error relocating /usr/sbin/netdata: __fstat_time64: symbol not found Error relocating /usr/sbin/netdata: __lstat_time64: symbol not found ``` ##### OS / Environment <!-- Provide as much information about your environment (which operating system and distribution you're using, if Netdata is running in a container, etc.) as possible to allow us reproduce this bug faster. 
To get this information, execute the following commands based on your operating system: - uname -a; grep -Hv "^#" /etc/*release # Linux - uname -a; uname -K # BSD - uname -a; sw_vers # macOS Place the output from the command in the code section below. --> ``` $ uname -a; grep -Hv "^#" /etc/*release Linux MYHOSTNAME 5.4.83-v7l+ #1379 SMP Mon Dec 14 13:11:54 GMT 2020 armv7l GNU/Linux /etc/os-release:PRETTY_NAME="Raspbian GNU/Linux 10 (buster)" /etc/os-release:NAME="Raspbian GNU/Linux" /etc/os-release:VERSION_ID="10" /etc/os-release:VERSION="10 (buster)" /etc/os-release:VERSION_CODENAME=buster /etc/os-release:ID=raspbian /etc/os-release:ID_LIKE=debian /etc/os-release:HOME_URL="http://www.raspbian.org/" /etc/os-release:SUPPORT_URL="http://www.raspbian.org/RaspbianForums" /etc/os-release:BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs" ``` ##### Netdata version <!-- Provide output of `netdata -V`. If Netdata is running, execute: $(ps aux | grep -E -o "[a-zA-Z/]+netdata ") -V --> The closest I can give is the image digest since the container fails to properly start: ``` $ docker pull netdata/netdata Using default tag: latest latest: Pulling from netdata/netdata Digest: sha256:e8d5fb55835a62580ed5aabeb518aebbe620030190845572de7d705e71286f63 Status: Image is up to date for netdata/netdata:latest docker.io/netdata/netdata:latest ``` ##### Component Name <!-- Let us know which component is affected by the bug. Our code is structured according to its component, so the component name is the same as the top level directory of the repository. For example, a bug in the dashboard would be under the web component. --> The entire Docker container. ##### Steps To Reproduce <!-- Describe how you found this bug and how we can reproduce it, preferably with a minimal test-case scenario. If you'd like to attach larger files, use gist.github.com and paste in links. --> The bug appeared before I updated my docker. Even after `apt update && apt full-upgrade`, the bug is still there. 
docker-compose.yml: ``` version: "3.8" services: netdata: image: netdata/netdata container_name: netdata restart: "no" security_opt: - apparmor:unconfined cap_add: - SYS_PTRACE environment: - TZ=${TZ} volumes: - /etc/localtime:/etc/localtime:ro - netdataconfig:/etc/netdata - netdatalib:/var/lib/netdata - netdatacache:/var/cache/netdata - /etc/os-release:/host/etc/os-release:ro - /etc/passwd:/host/etc/passwd:ro - /etc/group:/host/etc/group:ro - /proc:/host/proc:ro - /sys:/host/sys:ro - /var/run/docker.sock:/var/run/docker.sock:ro ``` ##### Expected behavior <!-- Provide a clear and concise description of what you expected to happen. --> As before the last update, I expected the container to start and have very well designed graphs :D Thanks for the container guys, I love it despite this bug :)
priority
latest update failure on docker when creating a bug report please verify first that your issue is not already reported on github test if the latest release and master branch are affected too bug report summary on startup after new update the container crashes with this log error loading shared library libprotobuf so no such file or directory needed by usr sbin netdata error relocating usr sbin netdata fstat symbol not found error relocating usr sbin netdata getrusage symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata r symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata nanosleep symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata traitsicesaiceee symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata infoj symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata select symbol not found error relocating usr sbin netdata pselect symbol not found error relocating usr sbin netdata utimes symbol not found error relocating usr sbin netdata gettimeofday symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error 
relocating usr sbin netdata symbol not found error relocating usr sbin netdata r symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata traitsicesaiceeeph symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata stat symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata recvmmsg symbol not found error relocating usr sbin netdata clock symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata lstat symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata instanceev symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata descriptor table google symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata 
symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata address empty symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata instanceev symbol not found error relocating usr sbin netdata symbol not found error relocating usr sbin netdata stat symbol not found error relocating usr sbin netdata fstat symbol not found error relocating usr sbin netdata lstat symbol not found os environment provide as much information about your environment which operating system and distribution you re using if netdata is running in a container etc as possible to allow us reproduce this bug faster to get this information execute the following commands based on your operating system uname a grep hv etc release linux uname a uname k bsd uname a sw vers macos place the output from the command in the code section below uname a grep hv etc release linux myhostname smp mon dec gmt gnu linux etc os release pretty name raspbian gnu linux buster etc os release name raspbian gnu linux etc os release version id etc os release version buster etc os release version codename buster etc os release id raspbian etc os release id like debian etc os release home url etc os release support url etc os release bug report url netdata version provide output of netdata v if netdata is running execute ps aux grep e o netdata v the closest i can give is the image digest 
since the container fails to properly start docker pull netdata netdata using default tag latest latest pulling from netdata netdata digest status image is up to date for netdata netdata latest docker io netdata netdata latest component name let us know which component is affected by the bug our code is structured according to its component so the component name is the same as the top level directory of the repository for example a bug in the dashboard would be under the web component the entire docker container steps to reproduce describe how you found this bug and how we can reproduce it preferably with a minimal test case scenario if you d like to attach larger files use gist github com and paste in links the bug appeared before i updated my docker even after apt update apt full upgrade the bug is still there docker compose yml version services netdata image netdata netdata container name netdata restart no security opt apparmor unconfined cap add sys ptrace environment tz tz volumes etc localtime etc localtime ro netdataconfig etc netdata netdatalib var lib netdata netdatacache var cache netdata etc os release host etc os release ro etc passwd host etc passwd ro etc group host etc group ro proc host proc ro sys host sys ro var run docker sock var run docker sock ro expected behavior as before the last update i expected the container to start and have very well designed graphs d thanks for the container guys i love it despite this bug
1
629,279
20,027,793,238
IssuesEvent
2022-02-01 23:48:32
SahilSawantUSA/Scouterdeck-App
https://api.github.com/repos/SahilSawantUSA/Scouterdeck-App
closed
Add nextron base
Priority: High Type: Feature
Add a boilerplate for a nextron app. Will be used as the framework for the scouterdeck app.
1.0
Add nextron base - Add a boilerplate for a nextron app. Will be used as the framework for the scouterdeck app.
priority
add nextron base add a boilerplate for a nextron app will be used as the framework for the scouterdeck app
1
339,806
10,262,519,947
IssuesEvent
2019-08-22 12:31:26
ushahidi/opendesign
https://api.github.com/repos/ushahidi/opendesign
closed
Designit document feedback for Case Study
Content Highest Priority In progress Website/Hosting
Completing a series of questions in a document from Designit in order to have the project featured as a case study on their website. Document: https://paper.dropbox.com/doc/Ushahidi-Case-study-questions--AiEkiyZaFQ4JhusWGyiZw7sAAg-pxxHEaL4AVoO2JL9KDSb0 Example case study webpage: https://www.designit.com/work/a-premium-self-service-food-and-beverage-experience
1.0
Designit document feedback for Case Study - Completing a series of questions in a document from Designit in orde to have the project featured as a case study on their website. Document: https://paper.dropbox.com/doc/Ushahidi-Case-study-questions--AiEkiyZaFQ4JhusWGyiZw7sAAg-pxxHEaL4AVoO2JL9KDSb0 Example case study webpage: https://www.designit.com/work/a-premium-self-service-food-and-beverage-experience
priority
designit document feedback for case study completing a series of questions in a document from designit in orde to have the project featured as a case study on their website document example case study webpage
1
143,723
5,522,434,837
IssuesEvent
2017-03-19 23:20:46
voteamerica/voteamerica.github.io
https://api.github.com/repos/voteamerica/voteamerica.github.io
closed
Driver was able to submit form without date/time information
high priority
Can we prevent this from the form, or I'll have to enforce it at DB level, but this will result in an error which will need to be handled by the UI.
1.0
Driver was able to submit form without date/time information - Can we prevent this from the form, or I'll have to enforce it at DB level, but this will result in an error which will need to be handled by the UI.
priority
driver was able to submit form without date time information can we prevent this from the form or i ll have to enforce it at db level but this will result in an error which will need to be handled by the ui
1
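The record above asks for form-level enforcement so that a missing date/time never reaches the database constraint. A minimal, hypothetical sketch of that check (the field names `pickup_date` and `pickup_time` are assumptions, not the project's actual schema):

```python
# Hypothetical validation that a ride-request form includes date/time
# information before it is submitted to the database layer.
def validate_ride_request(form: dict) -> list:
    """Return a list of human-readable errors; an empty list means valid."""
    errors = []
    for field in ("pickup_date", "pickup_time"):
        value = form.get(field, "")
        if not str(value).strip():
            errors.append(f"{field} is required")
    return errors
```

Rejecting the submission whenever this list is non-empty avoids surfacing a raw DB-level error that the UI would otherwise have to handle.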
762,087
26,708,253,931
IssuesEvent
2023-01-27 20:22:33
microbiomedata/issues
https://api.github.com/repos/microbiomedata/issues
closed
JGI Plate layout format requirements
enhancement high priority SMALL
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] - Currently, the plate layout required format in the submission portal is letter-number. Is it possible to add "can't be this letter-# combo" restrictions? JGI requires that the corners (A1, A12, H1, H12) be blank. "Plate location (well #): If you have indicated that the sample will be shipped in a plate, list the well location (ie A4, B5). The corner wells must be blank. For partial plates, fill the plate by columns rather than rows. Leave blank if the sample will be shipped in a tube. For more information on submitting samples in plates, please review the “Plate-based sample requirements” document at http://jgi.doe.gov/user-program-info/pmo-overview/project-materials-submission-overview/." **Describe the solution you'd like** A clear and concise description of what you want to happen. -Add additional plate layout formatting requirements **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Acceptance Criteria** Create a checklist or scenario-based acceptance criteria, from the users perspective, that answers the following: - What filling out the plate layout location in the template, A1 will flag as a formatting error - Add the above cells to the "Guidance" field for the column - Make plate layout / well # optional & not required (or add NA) (could submit via tube) Who will use this feature/enhancement? -JGI submitting users When will they use it? - When submitting samples to JGI and using NMDC template How will they use it? How will they test it to make sure it's working? - Test by doing validation Is the request achievable? During one sprint? - Yes What is your definition of done for this request? - Plate layout now has more rigorous formatting requirements that better reflect JGI requirements.
1.0
JGI Plate layout format requirements - **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] - Currently, the plate layout required format in the submission portal is letter-number. Is it possible to add "can't be this letter-# combo" restrictions? JGI requires that the corners (A1, A12, H1, H12) be blank. "Plate location (well #): If you have indicated that the sample will be shipped in a plate, list the well location (ie A4, B5). The corner wells must be blank. For partial plates, fill the plate by columns rather than rows. Leave blank if the sample will be shipped in a tube. For more information on submitting samples in plates, please review the “Plate-based sample requirements” document at http://jgi.doe.gov/user-program-info/pmo-overview/project-materials-submission-overview/." **Describe the solution you'd like** A clear and concise description of what you want to happen. -Add additional plate layout formatting requirments **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Acceptance Criteria** Create a checklist or scenario-based acceptance criteria, from the users perspective, that answers the following: - What filling out the plate layout location in the template, A1 will flag as a formatting error - Add the above cells to the "Guidance" field for the column - Make plate layout / well # optional & not required (or add NA) (could submit via tube) Who will use this feature/enhancement? -JGI submitting users When will they use it? - When submitting samples to JGI and using NMDC template How will they use it? How will they test it to make sure it's working? - Test by doing validation Is the request achievable? During one sprint? - Yes What is your definition of done for this request? 
- Plate layout now has more rigorous formatting requirements that better reflect JGI requirements.
priority
jgi plate layout format requirements is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when currently the plate layout required format in the submission portal is letter number is it possible to add can t be this letter combo restrictions jgi requires that the corners be blank plate location well  if you have indicated that the sample will be shipped in a plate list the well location ie the corner wells must be blank for partial plates fill the plate by columns rather than rows leave blank if the sample will be shipped in a tube for more information on submitting samples in plates please review the “plate based sample requirements” document at describe the solution you d like a clear and concise description of what you want to happen add additional plate layout formatting requirments describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered acceptance criteria create a checklist or scenario based acceptance criteria from the users perspective that answers the following what filling out the plate layout location in the template will flag as a formatting error add the above cells to the guidance field for the column make plate layout well optional not required or add na could submit via tube who will use this feature enhancement jgi submitting users when will they use it when submitting samples to jgi and using nmdc template how will they use it how will they test it to make sure it s working test by doing validation is the request achievable during one sprint yes what is your definition of done for this request plate layout now has more rigorous formatting requirements that better reflect jgi requirements
1
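The acceptance criteria above describe a well-position check for a 96-well plate: letter-number format, corners (A1, A12, H1, H12) rejected, and blank allowed for tube shipments. A minimal sketch of that validator (assuming the standard A–H by 1–12 layout the issue implies):

```python
import re

CORNER_WELLS = {"A1", "A12", "H1", "H12"}  # must stay blank per JGI requirements

def validate_well(well: str) -> bool:
    """True if `well` is an acceptable 96-well plate location.

    An empty value is allowed (sample shipped in a tube); the four
    corner wells are rejected even though they match the format.
    """
    if well.strip() == "":
        return True
    # Rows A-H, columns 1-12 (no leading zeros)
    if not re.fullmatch(r"[A-H](?:[1-9]|1[0-2])", well):
        return False
    return well not in CORNER_WELLS
```

This flags A1 as an error while still accepting mid-plate positions such as A4 or B5, matching the guidance text quoted in the issue.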
29,788
2,717,418,716
IssuesEvent
2015-04-11 08:02:37
Araq/Nim
https://api.github.com/repos/Araq/Nim
closed
Compiler segfault (related to ref object)
High Priority type sections
This code segfaults: ```nimrod type GenericNodeObj[T] = ref object obj: T Node* = ref object children*: seq[Node] parent*: Node nodeObj*: GenericNodeObj[int] proc newNode*(nodeObj: GenericNodeObj): Node = result = Node(nodeObj: nodeObj) newSeq(result.children, 10) var genericObj = GenericNodeObj[int]() var myNode = newNode(genericObj) ``` Splitting the `ref object` to `ref` and `object` fixes the issue. ```nimrod type InnerGenericNodeObj[T] = object obj: T GenericNodeObj[T] = ref InnerGenericNodeObj[T] Node* = ref object children*: seq[Node] parent*: Node nodeObj*: GenericNodeObj[int] proc newNode*(nodeObj: GenericNodeObj): Node = result = Node(nodeObj: nodeObj) newSeq(result.children, 10) var genericObj = GenericNodeObj[int]() var myNode = newNode(genericObj) ```
1.0
Compiler segfault (related to ref object) - This code segfaults: ```nimrod type GenericNodeObj[T] = ref object obj: T Node* = ref object children*: seq[Node] parent*: Node nodeObj*: GenericNodeObj[int] proc newNode*(nodeObj: GenericNodeObj): Node = result = Node(nodeObj: nodeObj) newSeq(result.children, 10) var genericObj = GenericNodeObj[int]() var myNode = newNode(genericObj) ``` Splitting the `ref object` to `ref` and `object` fixes the issue. ```nimrod type InnerGenericNodeObj[T] = object obj: T GenericNodeObj[T] = ref InnerGenericNodeObj[T] Node* = ref object children*: seq[Node] parent*: Node nodeObj*: GenericNodeObj[int] proc newNode*(nodeObj: GenericNodeObj): Node = result = Node(nodeObj: nodeObj) newSeq(result.children, 10) var genericObj = GenericNodeObj[int]() var myNode = newNode(genericObj) ```
priority
compiler segfault related to ref object this code segfaults nimrod type genericnodeobj ref object obj t node ref object children seq parent node nodeobj genericnodeobj proc newnode nodeobj genericnodeobj node result node nodeobj nodeobj newseq result children var genericobj genericnodeobj var mynode newnode genericobj splitting the ref object to ref and object fixes the issue nimrod type innergenericnodeobj object obj t genericnodeobj ref innergenericnodeobj node ref object children seq parent node nodeobj genericnodeobj proc newnode nodeobj genericnodeobj node result node nodeobj nodeobj newseq result children var genericobj genericnodeobj var mynode newnode genericobj
1
302,974
9,301,053,401
IssuesEvent
2019-03-23 18:38:42
Cherdyakov/sonification-workstation
https://api.github.com/repos/Cherdyakov/sonification-workstation
closed
Remove value spinboxes from SynthItem parameters, let the mapper input stand in for both.
HIGH Priority feature
No reason for a SpinBox once function mapping is complete. Can just enter a fixed value directly into the map text box. E.g. "440.0"
1.0
Remove value spinboxes from SynthItem parameters, let the mapper input stand in for both. - No reason for a SpinBox once function mapping is complete. Can just enter a fixed value directly into the map text box. E.g. "440.0"
priority
remove value spinboxes from synthitem parameters let the mapper input stand in for both no reason for a spinbox once function mapping is complete can just enter a fixed value directly into the map text box e g
1
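One way to read the request above: the mapper text box should accept either a mapping expression or a fixed numeric literal like "440.0", making the spinbox redundant. A small Python sketch of that dual parse (illustrative only — not the project's actual synthesis code):

```python
def parse_mapping(text: str):
    """Classify mapper input: a numeric literal becomes a fixed value,
    anything else is passed through as a mapping expression."""
    stripped = text.strip()
    try:
        return ("fixed", float(stripped))
    except ValueError:
        return ("map", stripped)
```

With this, entering "440.0" fixes the parameter directly, while entering a column name still routes through the function-mapping path.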
166,818
6,312,356,279
IssuesEvent
2017-07-24 02:58:08
btrussin/UMD_VR
https://api.github.com/repos/btrussin/UMD_VR
closed
Rings with connected nodes should be bright
high priority (pre user study) ready
Rings with connected nodes should be set to the brightest setting. The end result should be a highly visible subgraph with bright edges and bright rings, while unselected nodes and rings are dark and not as salient. Currently, if you select a node and an edge connects to a node on a different ring, the brightness seems to be between the darkest and brightest shade. You can see the difference in brightness by touching a ring with a connected node.
1.0
Rings with connected nodes should be bright - Rings with connected nodes should be set to the brightest setting. The end result should be a highly visible subgraph with bright edges and bright rings, while unselected nodes and rings are dark and not as salient. Currently, if you select a node and an edge connects to a node on a different ring, the brightness seems to be between the darkest and brightest shade. You can see the difference in brightness by touching a ring with a connected node.
priority
rings with connected nodes should be bright rings with connected nodes should be set to the brightest setting the end result should be a highly visible subgraph with bright edges and bright rings while unselected nodes and rings are dark and not as salient currently if you select a node and an edge connects to a node on a different ring the brightness seems to be between the darkest and brightest shade you can see the difference in brightness by touching a ring with a connected node
1
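The intended behavior above — connected rings at full brightness, everything else dim — can be sketched as a simple set test. This is a hypothetical illustration in Python (the function and brightness values are assumptions, not the project's Unity/C# code):

```python
def ring_brightness(ring_nodes: set, selected_neighbors: set) -> float:
    """Full brightness if the ring contains any node connected to the
    current selection; otherwise the dim, unselected level."""
    return 1.0 if ring_nodes & selected_neighbors else 0.2
```

Applying this uniformly would remove the in-between shade the report describes, since a ring is either in the highlighted subgraph or not.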
484,145
13,934,992,567
IssuesEvent
2020-10-22 10:50:16
openshift/odo
https://api.github.com/repos/openshift/odo
closed
"odo create nodejs --downloadSource" and "--starter" creating one extra file on windows platform
area/Windows area/devfile kind/bug priority/High
/kind bug <!-- Welcome! - We kindly ask you to: 1. Fill out the issue template below 2. Use the Google group if you have a question rather than a bug or feature request. The group is at: https://groups.google.com/forum/#!forum/odo-users Thanks for understanding, and for contributing to the project! --> ## What versions of software are you using? **Operating System:** win10 **Output of `odo version`:** master ## How did you run odo exactly? Ran `make test-cmd-devfile-create` on windows. Also Ran the steps manually. 1. created new context dir with experimental mode enable 2. `odo project create testd` then cd <new context> 3. `odo create nodejs --downloadSource` or `odo create nodejs --downloadSource nodejs-web-app` ## Actual behavior On Windows ``` > mkdir -p testano Directory: C:\Users\Admin\go\src\github.com\openshift\odo Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 6/17/2020 8:05 AM testano > odo project create testanother V Project 'testanother' is ready for use V New project created and now using project: testanother > cd .\testano\ > odo create nodejs --downloadSource Experimental mode is enabled, use at your own risk Validation V Checking devfile compatibility [0ns] V Creating a devfile component from registry: DefaultDevfileRegistry [4ms] V Validating devfile component [584800ns] Please use `odo push` command to create the component with source deployed > ls Directory: C:\Users\Admin\go\src\github.com\openshift\odo\testano Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 6/17/2020 8:10 AM .odo d----- 6/17/2020 8:10 AM odo-devfiles-nodejs-ex-39b5ec7 -a---- 6/17/2020 8:10 AM 731 devfile.yaml > ls .\odo-devfiles-nodejs-ex-39b5ec7\ Directory: C:\Users\Admin\go\src\github.com\openshift\odo\testano\odo-devfiles-nodejs-ex-39b5ec7 Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 6/17/2020 8:10 AM app -a---- 6/17/2020 8:10 AM 14 .gitignore -a---- 6/17/2020 8:10 AM 14197 LICENSE -a---- 6/17/2020 8:10 AM 
14289 package-lock.json -a---- 6/17/2020 8:10 AM 265 package.json -a---- 6/17/2020 8:10 AM 449 README.md ``` Extra file created here is `odo-devfiles-nodejs-ex-39b5ec7` which leads [`--downloadSource`](https://github.com/openshift/odo/blob/master/tests/integration/devfile/cmd_devfile_create_test.go#L208) and [`--downloadSource=nodejs-web-app`](https://github.com/openshift/odo/blob/master/tests/integration/devfile/cmd_devfile_create_test.go#L228) to fail. All those expected files which are supposed to be under the current directory are actually found under `./odo-devfiles-nodejs-ex-39b5ec7` on windows. Really a weird behavior but somehow it is happening. ## Expected behavior `odo create nodejs --downloadSource` or `--downloadSource=nodejs-web-app` should not create one extra file `odo-devfiles-nodejs-ex-39b5ec7` on windows. ## Any logs, error output, etc? 👀
1.0
"odo create nodejs --downloadSource" and "--starter" creating one extra file on windows platform - /kind bug <!-- Welcome! - We kindly ask you to: 1. Fill out the issue template below 2. Use the Google group if you have a question rather than a bug or feature request. The group is at: https://groups.google.com/forum/#!forum/odo-users Thanks for understanding, and for contributing to the project! --> ## What versions of software are you using? **Operating System:** win10 **Output of `odo version`:** master ## How did you run odo exactly? Ran `make test-cmd-devfile-create` on windows. Also Ran the steps manually. 1. created new context dir with experimental mode enable 2. `odo project create testd` then cd <new context> 3. `odo create nodejs --downloadSource` or `odo create nodejs --downloadSource nodejs-web-app` ## Actual behavior On Windows ``` > mkdir -p testano Directory: C:\Users\Admin\go\src\github.com\openshift\odo Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 6/17/2020 8:05 AM testano > odo project create testanother V Project 'testanother' is ready for use V New project created and now using project: testanother > cd .\testano\ > odo create nodejs --downloadSource Experimental mode is enabled, use at your own risk Validation V Checking devfile compatibility [0ns] V Creating a devfile component from registry: DefaultDevfileRegistry [4ms] V Validating devfile component [584800ns] Please use `odo push` command to create the component with source deployed > ls Directory: C:\Users\Admin\go\src\github.com\openshift\odo\testano Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 6/17/2020 8:10 AM .odo d----- 6/17/2020 8:10 AM odo-devfiles-nodejs-ex-39b5ec7 -a---- 6/17/2020 8:10 AM 731 devfile.yaml > ls .\odo-devfiles-nodejs-ex-39b5ec7\ Directory: C:\Users\Admin\go\src\github.com\openshift\odo\testano\odo-devfiles-nodejs-ex-39b5ec7 Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 6/17/2020 8:10 AM app 
-a---- 6/17/2020 8:10 AM 14 .gitignore -a---- 6/17/2020 8:10 AM 14197 LICENSE -a---- 6/17/2020 8:10 AM 14289 package-lock.json -a---- 6/17/2020 8:10 AM 265 package.json -a---- 6/17/2020 8:10 AM 449 README.md ``` Extra file created here is `odo-devfiles-nodejs-ex-39b5ec7` which leads [`--downloadSource`](https://github.com/openshift/odo/blob/master/tests/integration/devfile/cmd_devfile_create_test.go#L208) and [`--downloadSource=nodejs-web-app`](https://github.com/openshift/odo/blob/master/tests/integration/devfile/cmd_devfile_create_test.go#L228) to fail. All those expected files which are supposed to be under current directory is actually found under `./odo-devfiles-nodejs-ex-39b5ec7` on windows. Really a weird behavior but somehow it is happening. ## Expected behavior `odo create nodejs --downloadSource` or `--downloadSource=nodejs-web-app` should not create one extra file `odo-devfiles-nodejs-ex-39b5ec7` on windows. ## Any logs, error output, etc? 👀
priority
odo create nodejs downloadsource and starter creating one extra file on windows platform kind bug welcome we kindly ask you to fill out the issue template below use the google group if you have a question rather than a bug or feature request the group is at thanks for understanding and for contributing to the project what versions of software are you using operating system output of odo version master how did you run odo exactly ran make test cmd devfile create on windows also ran the steps manually created new context dir with experimental mode enable odo project create testd then cd odo create nodejs downloadsource or odo create nodejs downloadsource nodejs web app actual behavior on windows mkdir p testano directory c users admin go src github com openshift odo mode lastwritetime length name d am testano odo project create testanother v project testanother is ready for use v new project created and now using project testanother cd testano odo create nodejs downloadsource experimental mode is enabled use at your own risk validation v checking devfile compatibility v creating a devfile component from registry defaultdevfileregistry v validating devfile component please use odo push command to create the component with source deployed ls directory c users admin go src github com openshift odo testano mode lastwritetime length name d am odo d am odo devfiles nodejs ex a am devfile yaml ls odo devfiles nodejs ex directory c users admin go src github com openshift odo testano odo devfiles nodejs ex mode lastwritetime length name d am app a am gitignore a am license a am package lock json a am package json a am readme md extra file created here is odo devfiles nodejs ex which leads and to fail all those expected files which are supposed to be under current directory is actually found under odo devfiles nodejs ex on windows really a weird behavior but somehow it is happening expected behavior odo create nodejs downloadsource or downloadsource nodejs web app should not 
create one extra file odo devfiles nodejs ex on windows any logs error output etc 👀
1
103,502
4,174,312,945
IssuesEvent
2016-06-21 13:43:08
myui/hivemall
https://api.github.com/repos/myui/hivemall
opened
Fixed a corner case bug in mf_predict
bug call-for-contribution high-priority hotfix
`mf_predict` UDF should be GenericUDF to cope with this issue. ``` hive> select mf_predict(null,null,0.0,0.0); 0.0 ``` ``` hive> select mf_predict(null,null,0.0,null); FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments 'TOK_NULL': org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method public org.apache.hadoop.io.FloatWritable hivemall.mf.MFPredictionUDF.evaluate(java.util.List,java.util.List,double,double) throws org.apache.hadoop.hive.ql.metadata.HiveException on object hivemall.mf.MFPredictionUDF@16e07bae of class hivemall.mf.MFPredictionUDF with arguments {null, null, 0.0:java.lang.Double, null} of size 4 ```
1.0
Fixed a corner case bug in mf_predict - `mf_predict` UDF should be GenericUDF to cope with this issue. ``` hive> select mf_predict(null,null,0.0,0.0); 0.0 ``` ``` hive> select mf_predict(null,null,0.0,null); FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments 'TOK_NULL': org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method public org.apache.hadoop.io.FloatWritable hivemall.mf.MFPredictionUDF.evaluate(java.util.List,java.util.List,double,double) throws org.apache.hadoop.hive.ql.metadata.HiveException on object hivemall.mf.MFPredictionUDF@16e07bae of class hivemall.mf.MFPredictionUDF with arguments {null, null, 0.0:java.lang.Double, null} of size 4 ```
priority
fixed a corner case bug in mf predict mf predict udf should be genericudf to cope with this issue hive select mf predict null null hive select mf predict null null null failed semanticexception line wrong arguments tok null org apache hadoop hive ql metadata hiveexception unable to execute method public org apache hadoop io floatwritable hivemall mf mfpredictionudf evaluate java util list java util list double double throws org apache hadoop hive ql metadata hiveexception on object hivemall mf mfpredictionudf of class hivemall mf mfpredictionudf with arguments null null java lang double null of size
1
795,061
28,059,661,064
IssuesEvent
2023-03-29 11:48:08
AY2223S2-CS2103T-W14-1/tp
https://api.github.com/repos/AY2223S2-CS2103T-W14-1/tp
closed
[UG] Fix syntax bolding for client, add symbols and other enhancements
type.docs priority.high
Some of the texts are not consistent in terms of their syntax and highlights (expected output not bolded, "You should see the following output if the client is created successfully.", etc) - [x] [UG] -cn client_name The client’s name. Alphanumerical characters only. - [x] Fix Message_not_edited constant in UpdateClientCommand.java - [x] Fix Message_success constant in UpdateClientCommand.java - [x] [UG] Add note on what is a valid year of birth
1.0
[UG] Fix syntax bolding for client, add symbols and other enhancements - Some of the texts are not consistent in terms of their syntax and highlights (expected output not bolded, "You should see the following output if the client is created successfully.", etc) - [x] [UG] -cn client_name The client’s name. Alphanumerical characters only. - [x] Fix Message_not_edited constant in UpdateClientCommand.java - [x] Fix Message_success constant in UpdateClientCommand.java - [x] [UG] Add note on what is a valid year of birth
priority
fix syntax bolding for client add symbols and other enhancements some of the texts are not consistent in terms of their syntax and highlights expected output not bolded you should see the following output if the client is created successfully etc cn client name the client’s name alphanumerical characters only fix message not edited constant in updateclientcommand java fix message success constant in updateclientcommand java add note on what is a valid year of birth
1
430,370
12,452,168,490
IssuesEvent
2020-05-27 11:49:14
SeisSol/SeisSol
https://api.github.com/repos/SeisSol/SeisSol
closed
error in submodules
high priority
submodules/glm/glm/detail/../detail/func_common.inl(631): error: the global scope has no "isnan" return ::isnan(x) != 0; ^ submodules/glm/glm/detail/../detail/func_common.inl(665): error: the global scope has no "isinf" return ::isinf(x); ^ I manually fixed it to be std::isnan and std::isinf and now it works.
1.0
error in submodules - submodules/glm/glm/detail/../detail/func_common.inl(631): error: the global scope has no "isnan" return ::isnan(x) != 0; ^ submodules/glm/glm/detail/../detail/func_common.inl(665): error: the global scope has no "isinf" return ::isinf(x); ^ I manually fixed it to be std::isnan and std::isinf and now it works.
priority
error in submodules submodules glm glm detail detail func common inl error the global scope has no isnan return isnan x submodules glm glm detail detail func common inl error the global scope has no isinf return isinf x i manually fixed it to be std isnan and std isinf and now it works
1
187,251
6,750,448,804
IssuesEvent
2017-10-23 04:56:04
ballerinalang/composer
https://api.github.com/repos/ballerinalang/composer
closed
Structs UI does not show User defined and Array Types
Priority/High Severity/Major
Structs UI does not show User defined and Array Types. Please refer following code and screenshot. We retrieving name, we need to consider User defined types and multiple dimension array types. ``` struct Person { string name; int age; string city; } struct User { string username; string location; int age; Person p; Person [] pArray ; Person [][] pArray2D ; int[] ageArray; } ``` <img width="1092" alt="screen shot 2017-10-20 at 6 44 04 pm" src="https://user-images.githubusercontent.com/2918812/31822239-b3df67d2-b5c5-11e7-81cb-9aeda7ddaca8.png">
1.0
Structs UI does not show User defined and Array Types - Structs UI does not show User defined and Array Types. Please refer following code and screenshot. We retrieving name, we need to consider User defined types and multiple dimension array types. ``` struct Person { string name; int age; string city; } struct User { string username; string location; int age; Person p; Person [] pArray ; Person [][] pArray2D ; int[] ageArray; } ``` <img width="1092" alt="screen shot 2017-10-20 at 6 44 04 pm" src="https://user-images.githubusercontent.com/2918812/31822239-b3df67d2-b5c5-11e7-81cb-9aeda7ddaca8.png">
priority
structs ui does not show user defined and array types structs ui does not show user defined and array types please refer following code and screenshot we retrieving name we need to consider user defined types and multiple dimension array types struct person string name int age string city struct user string username string location int age person p person parray person int agearray img width alt screen shot at pm src
1
180,368
6,648,987,335
IssuesEvent
2017-09-28 11:30:12
OpenSRP/opensrp-server
https://api.github.com/repos/OpenSRP/opensrp-server
closed
ZEIR ID updates do not sync to OMRS from couch
bug High Priority PATH BID Zambia
In ZEIR app, if a user edits the birth registration form and updates the ZEIR ID by scanning a new QR code, the new ZEIR ID reflects in the app and also syncs to couch, but it seems to fail to sync to OMRS and update the ZEIR ID in OMRS.
1.0
ZEIR ID updates do not sync to OMRS from couch - In ZEIR app, if a user edits the birth registration form and updates the ZEIR ID by scanning a new QR code, the new ZEIR ID reflects in the app and also syncs to couch, but it seems to fail to sync to OMRS and update the ZEIR ID in OMRS.
priority
zeir id updates do not sync to omrs from couch in zeir app if a user edits the birth registration form and updates the zeir id by scanning a new qr code the new zeir id reflects in the app and also syncs to couch but it seems to fail to sync to omrs and update the zeir id in omrs
1
822,436
30,872,044,456
IssuesEvent
2023-08-03 12:03:15
woocommerce/woocommerce-gateway-stripe
https://api.github.com/repos/woocommerce/woocommerce-gateway-stripe
reopened
$0/month Subscriptions unable to process on Stripe 7.3
priority: high type: bug
1) 6243181-zen User has the following setup: 1) Variable Subs with switching enabled 2) Switch settings: ![https://d.pr/i/5t5nd5](https://d.pr/i/5t5nd5+) Full Size: https://d.pr/i/5t5nd5 Issue: When we have an active Subscription with $0/month cost (Virtual), upgrading to a paid subscription doesn't work with Stripe: (only occurs when prorating) ![https://d.pr/i/q2kqvL](https://d.pr/i/q2kqvL+) Full Size: https://d.pr/i/q2kqvL > parameter_invalid_integer - payment_method_options[card][mandate_options][amount] This value must be greater than or equal to 1. Able to replicate on my test site. ``` { "capture_method": "automatic", "description": "Woo Site Generator - Order 15", "payment_method_options": { "card": { "mandate_options": { "start_date": "1682753385", "amount_type": "fixed", "interval": "month", "interval_count": "1", "reference": "15", "supported_types": { "0": "india" }, "amount": "0" } } }, "metadata": { "payment_type": "recurring", "order_id": "15", "site_url": "http://localhost:8888/woogenerator/sites/wp-8559227", "customer_email": "xx@woocommerce.com", "save_payment_method": "true", "customer_name": "Test test" }, "setup_future_usage": "off_session", "customer": ["cus_xx"](https://dashboard.stripe.com/test/customers/xx), "currency": "usd", "payment_method": "pm_xx", "amount": "2400", "payment_method_types": { "0": "card" } } ``` 2) 6253081-zen Merchant's prepaid subscriptions ($0/month with a signup fee) aren't being processed. > Sorry, we are unable to process your payment at this time. Please retry later.
1.0
$0/month Subscriptions unable to process on Stripe 7.3 - 1) 6243181-zen User has the following setup: 1) Variable Subs with switching enabled 2) Switch settings: ![https://d.pr/i/5t5nd5](https://d.pr/i/5t5nd5+) Full Size: https://d.pr/i/5t5nd5 Issue: When we have an active Subscription with $0/month cost (Virtual), upgrading to a paid subscription doesn't work with Stripe: (only occurs when prorating) ![https://d.pr/i/q2kqvL](https://d.pr/i/q2kqvL+) Full Size: https://d.pr/i/q2kqvL > parameter_invalid_integer - payment_method_options[card][mandate_options][amount] This value must be greater than or equal to 1. Able to replicate on my test site. ``` { "capture_method": "automatic", "description": "Woo Site Generator - Order 15", "payment_method_options": { "card": { "mandate_options": { "start_date": "1682753385", "amount_type": "fixed", "interval": "month", "interval_count": "1", "reference": "15", "supported_types": { "0": "india" }, "amount": "0" } } }, "metadata": { "payment_type": "recurring", "order_id": "15", "site_url": "http://localhost:8888/woogenerator/sites/wp-8559227", "customer_email": "xx@woocommerce.com", "save_payment_method": "true", "customer_name": "Test test" }, "setup_future_usage": "off_session", "customer": ["cus_xx"](https://dashboard.stripe.com/test/customers/xx), "currency": "usd", "payment_method": "pm_xx", "amount": "2400", "payment_method_types": { "0": "card" } } ``` 2) 6253081-zen Merchant's prepaid subscriptions ($0/month with a signup fee) aren't being processed. > Sorry, we are unable to process your payment at this time. Please retry later.
priority
month subscriptions unable to process on stripe zen user has the following setup variable subs with switching enabled switch settings full size issue when we have an active subscription with month cost virtual upgrading to a paid subscription doesn t work with stripe only occurs when prorating full size parameter invalid integer payment method options this value must be greater than or equal to able to replicate on my test site capture method automatic description woo site generator order payment method options card mandate options start date amount type fixed interval month interval count reference supported types india amount metadata payment type recurring order id site url customer email xx woocommerce com save payment method true customer name test test setup future usage off session customer currency usd payment method pm xx amount payment method types card zen merchant s prepaid subscriptions month with a signup fee aren t being processed sorry we are unable to process your payment at this time please retry later
1
385,763
11,425,344,048
IssuesEvent
2020-02-03 19:38:51
status-im/status-go
https://api.github.com/repos/status-im/status-go
closed
Balance is not updated after sending / receiving transaction and sometimes no transaction is shown in transaction history
bug high priority v1 release
# Problem The balance on multi-account and individual account view is not updated after sending or receiving transaction until relogin. Original issue https://github.com/status-im/status-react/issues/9361 Related issue https://github.com/status-im/status-react/issues/9280
1.0
Balance is not updated after sending / receiving transaction and sometimes no transaction is shown in transaction history - # Problem The balance on multi-account and individual account view is not updated after sending or receiving transaction until relogin. Original issue https://github.com/status-im/status-react/issues/9361 Related issue https://github.com/status-im/status-react/issues/9280
priority
balance is not updated after sending receiving transaction and sometimes no transaction is shown in transaction history problem the balance on multi account and individual account view is not updated after sending or receiving transaction until relogin original issue related issue
1
800,644
28,373,549,531
IssuesEvent
2023-04-12 18:54:47
OpenLiberty/liberty-tools-intellij
https://api.github.com/repos/OpenLiberty/liberty-tools-intellij
closed
Handle Liberty Language Server quick fixes of the Edit type
language client high priority 3
Some quick fixes from Lemminx require the client to edit the text of the file. ``` WorkspaceEdit [ edits = ArrayList ( range = <Range> newText = "some text" ) ] ``` We need code to delete text in the range indicated and insert the new text provided. Note that a manual Undo operation in the editor should also be supported. May be handled by #194 Part of #81 Defined in LSP: https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#workspaceEdit
1.0
Handle Liberty Language Server quick fixes of the Edit type - Some quick fixes from Lemminx require the client to edit the text of the file. ``` WorkspaceEdit [ edits = ArrayList ( range = <Range> newText = "some text" ) ] ``` We need code to delete text in the range indicated and insert the new text provided. Note that a manual Undo operation in the editor should also be supported. May be handled by #194 Part of #81 Defined in LSP: https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#workspaceEdit
priority
handle liberty language server quick fixes of the edit type some quick fixes from lemminx require the client to edit the text of the file workspaceedit edits arraylist range newtext some text we need code to delete text in the range indicated and insert the new text provided note that a manual undo operation in the editor should also be supported may be handled by part of defined in lsp
1
247,776
7,923,554,054
IssuesEvent
2018-07-05 14:23:20
OpenBazaar/openbazaar-go
https://api.github.com/repos/OpenBazaar/openbazaar-go
closed
Return from `POST /ob/releaseescrow` when dispute is missing
bug high priority
Caught by @cpacia https://github.com/OpenBazaar/openbazaar-go/blob/e0ea931aef7b26ff3ecb2b389735d38a4cb8c230/core/completion.go#L281 I don't think a dispute needs to be open for them to release funds after the timeout right? Resolution: Use a `dispute != nil` to guard the dispute-related error checking instead of returning on `dispute == nil`.
1.0
Return from `POST /ob/releaseescrow` when dispute is missing - Caught by @cpacia https://github.com/OpenBazaar/openbazaar-go/blob/e0ea931aef7b26ff3ecb2b389735d38a4cb8c230/core/completion.go#L281 I don't think a dispute needs to be open for them to release funds after the timeout right? Resolution: Use a `dispute != nil` to guard the dispute-related error checking instead of returning on `dispute == nil`.
priority
return from post ob releaseescrow when dispute is missing caught by cpacia i don t think a dispute needs to be open for them to release funds after the timeout right resolution use a dispute nil to guard the dispute related error checking instead of returning on dispute nil
1
302,986
9,301,122,416
IssuesEvent
2019-03-23 19:12:46
richmondrcmp/mobileapp
https://api.github.com/repos/richmondrcmp/mobileapp
opened
Icon badge notification is not appearing on the mobile app's icon
Bug High Priority
When a new general notification comes into the mobile app the alert appears but the badge notification does not a appear with an alert count.
1.0
Icon badge notification is not appearing on the mobile app's icon - When a new general notification comes into the mobile app the alert appears but the badge notification does not a appear with an alert count.
priority
icon badge notification is not appearing on the mobile app s icon when a new general notification comes into the mobile app the alert appears but the badge notification does not a appear with an alert count
1
360,563
10,694,423,693
IssuesEvent
2019-10-23 10:49:41
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
[0.8.2.9 release-preview] System.Exception: Error(s) occurred when loading Plants
Fixed High Priority
I can't reproduce exception when I repeat my actions which will lead me to this error, but i can describe it: I have backup each hour. And the approximate my actions that were with the plants for this hour: 1. I plant 3 Saguaro seed and 5 Johua Seed. 2. Use /fastforward 1 to grow tomatoes and trees 3. Have some relogs to update tomato/trees model after fastforward. 4. Use tractor with Harvester to harvest part of tomato. 5. Use /level 100,100 and use /spawnallplants. Harvest plants. 6. I use Skid Steer and Excavator, but I think it's never mind 7. After that I exit from server, foget to close it. Start new random server from client. Cancel starting new random server, open This server UI and close server (with exit). I tried to reproduce all my actions yesterday but it didn't help. Save and Log Files: [log_190723080923.log](https://github.com/StrangeLoopGames/EcoIssues/files/3420412/log_190723080923.log) https://drive.google.com/file/d/1iVKJSEJ7__ZEcrJijE6SfzIp2wIKP4gr/view?usp=sharing
1.0
[0.8.2.9 release-preview] System.Exception: Error(s) occurred when loading Plants - I can't reproduce exception when I repeat my actions which will lead me to this error, but i can describe it: I have backup each hour. And the approximate my actions that were with the plants for this hour: 1. I plant 3 Saguaro seed and 5 Johua Seed. 2. Use /fastforward 1 to grow tomatoes and trees 3. Have some relogs to update tomato/trees model after fastforward. 4. Use tractor with Harvester to harvest part of tomato. 5. Use /level 100,100 and use /spawnallplants. Harvest plants. 6. I use Skid Steer and Excavator, but I think it's never mind 7. After that I exit from server, foget to close it. Start new random server from client. Cancel starting new random server, open This server UI and close server (with exit). I tried to reproduce all my actions yesterday but it didn't help. Save and Log Files: [log_190723080923.log](https://github.com/StrangeLoopGames/EcoIssues/files/3420412/log_190723080923.log) https://drive.google.com/file/d/1iVKJSEJ7__ZEcrJijE6SfzIp2wIKP4gr/view?usp=sharing
priority
system exception error s occurred when loading plants i can t reproduce exception when i repeat my actions which will lead me to this error but i can describe it i have backup each hour and the approximate my actions that were with the plants for this hour i plant saguaro seed and johua seed use fastforward to grow tomatoes and trees have some relogs to update tomato trees model after fastforward use tractor with harvester to harvest part of tomato use level and use spawnallplants harvest plants i use skid steer and excavator but i think it s never mind after that i exit from server foget to close it start new random server from client cancel starting new random server open this server ui and close server with exit i tried to reproduce all my actions yesterday but it didn t help save and log files
1
587,747
17,630,665,138
IssuesEvent
2021-08-19 07:32:04
woowa-techcamp-2021/store-6
https://api.github.com/repos/woowa-techcamp-2021/store-6
opened
[FE] 검색 컴포넌트
high priority feat
## :hammer: 기능 설명 ### 검색 컴포넌트 구현 ## 📑 완료 조건 - [ ] 검색 컴포넌트 publishing - [ ] click 할 시 최근 검색어 렌더링 - [ ] blur 할 시 최근 검색어 컴포넌트가 보이지 않아야 함. - [ ] enter 했을 때 상태 변경을 할 수 있어야 함 - [ ] 첫 렌더링 시 localStorage에 최근 검색어 값을 가져와야 함 - [ ] 최근 검색어에서 X 를 클릭 시 값을 삭제해야 함. - [ ] 검색 요청 시 value를 localStorage에 저장할 수 있어야 함. ## :thought_balloon: 관련 Backlog > [대분류] - [중분류] - [Backlog 이름] [FE] 상품 목록 페이지 - 검색 컴포넌트
1.0
[FE] 검색 컴포넌트 - ## :hammer: 기능 설명 ### 검색 컴포넌트 구현 ## 📑 완료 조건 - [ ] 검색 컴포넌트 publishing - [ ] click 할 시 최근 검색어 렌더링 - [ ] blur 할 시 최근 검색어 컴포넌트가 보이지 않아야 함. - [ ] enter 했을 때 상태 변경을 할 수 있어야 함 - [ ] 첫 렌더링 시 localStorage에 최근 검색어 값을 가져와야 함 - [ ] 최근 검색어에서 X 를 클릭 시 값을 삭제해야 함. - [ ] 검색 요청 시 value를 localStorage에 저장할 수 있어야 함. ## :thought_balloon: 관련 Backlog > [대분류] - [중분류] - [Backlog 이름] [FE] 상품 목록 페이지 - 검색 컴포넌트
priority
검색 컴포넌트 hammer 기능 설명 검색 컴포넌트 구현 📑 완료 조건 검색 컴포넌트 publishing click 할 시 최근 검색어 렌더링 blur 할 시 최근 검색어 컴포넌트가 보이지 않아야 함 enter 했을 때 상태 변경을 할 수 있어야 함 첫 렌더링 시 localstorage에 최근 검색어 값을 가져와야 함 최근 검색어에서 x 를 클릭 시 값을 삭제해야 함 검색 요청 시 value를 localstorage에 저장할 수 있어야 함 thought balloon 관련 backlog 상품 목록 페이지 검색 컴포넌트
1
747,838
26,100,664,522
IssuesEvent
2022-12-27 06:28:00
bounswe/bounswe2022group4
https://api.github.com/repos/bounswe/bounswe2022group4
closed
Mobile: Doctor Existence Should Be Added
Category - To Do Category - Enhancement Priority - High Status: Completed Difficulty - Medium Language - Kotlin Mobile
### Description: As there is a some kind a health care forum, there must be doctor profiles on mobile app absolutely. ### What to do: - [x] #571 - [x] #572 - [x] #573 ### Deadline 27.11.2022, 12.00(GMT+3)
1.0
Mobile: Doctor Existence Should Be Added - ### Description: As there is a some kind a health care forum, there must be doctor profiles on mobile app absolutely. ### What to do: - [x] #571 - [x] #572 - [x] #573 ### Deadline 27.11.2022, 12.00(GMT+3)
priority
mobile doctor existence should be added description as there is a some kind a health care forum there must be doctor profiles on mobile app absolutely what to do deadline gmt
1
687,638
23,533,795,876
IssuesEvent
2022-08-19 18:09:13
crux-toolkit/crux-toolkit
https://api.github.com/repos/crux-toolkit/crux-toolkit
closed
Upgrade Percolator to version 3.6
High priority Percolator
Here is the list of changes in Percolator 3.6: v3.06 Added extended unit tests. Added code for quicker FDR calculation. Updated urls to xerces and google test Switched compiler to c++11 Added basic pepXML support. Improved testing in Github actions: Made separate stages for unit and system tests for Ubuntu Introduced a test build for centos Moved unit and system tests to release code. Changed the dependency structure to gtest Added command-line argument to switch off reporting to analytics Added possibility to include spectrum file name in tab-delimited input, that is carried over to tab-delimited output. Added possibility to include retention time in tab-delimited input. Discontinued support of the Elude model Removed all retention time modeling and all other DOC features One thing that is not mentioned above is that Lukas added support for a filename column for tab delimited input and output. If percolator detects a column named "filename" in input, it will carry over its content to output files. We should make sure Crux makes use of this.
1.0
Upgrade Percolator to version 3.6 - Here is the list of changes in Percolator 3.6: v3.06 Added extended unit tests. Added code for quicker FDR calculation. Updated urls to xerces and google test Switched compiler to c++11 Added basic pepXML support. Improved testing in Github actions: Made separate stages for unit and system tests for Ubuntu Introduced a test build for centos Moved unit and system tests to release code. Changed the dependency structure to gtest Added command-line argument to switch off reporting to analytics Added possibility to include spectrum file name in tab-delimited input, that is carried over to tab-delimited output. Added possibility to include retention time in tab-delimited input. Discontinued support of the Elude model Removed all retention time modeling and all other DOC features One thing that is not mentioned above is that Lukas added support for a filename column for tab delimited input and output. If percolator detects a column named "filename" in input, it will carry over its content to output files. We should make sure Crux makes use of this.
priority
upgrade percolator to version here is the list of changes in percolator added extended unit tests added code for quicker fdr calculation updated urls to xerces and google test switched compiler to c added basic pepxml support improved testing in github actions made separate stages for unit and system tests for ubuntu introduced a test build for centos moved unit and system tests to release code changed the dependency structure to gtest added command line argument to switch off reporting to analytics added possibility to include spectrum file name in tab delimited input that is carried over to tab delimited output added possibility to include retention time in tab delimited input discontinued support of the elude model removed all retention time modeling and all other doc features one thing that is not mentioned above is that lukas added support for a filename column for tab delimited input and output if percolator detects a column named filename in input it will carry over its content to output files we should make sure crux makes use of this
1
360,349
10,687,610,449
IssuesEvent
2019-10-22 16:28:14
AY1920S1-CS2113-T13-2/main
https://api.github.com/repos/AY1920S1-CS2113-T13-2/main
closed
`delete patient task` bug involving lack of output
priority.High type.Bug
Steps taken: 1. Started up Duke 2. Typed in `delete patient task #3 5` Received: <img width="434" alt="Screen Shot 2019-10-22 at 1 51 12 PM" src="https://user-images.githubusercontent.com/43768091/67261596-fbb91b80-f4d3-11e9-9598-a2fb51f387c0.png">
1.0
`delete patient task` bug involving lack of output - Steps taken: 1. Started up Duke 2. Typed in `delete patient task #3 5` Received: <img width="434" alt="Screen Shot 2019-10-22 at 1 51 12 PM" src="https://user-images.githubusercontent.com/43768091/67261596-fbb91b80-f4d3-11e9-9598-a2fb51f387c0.png">
priority
delete patient task bug involving lack of output steps taken started up duke typed in delete patient task received img width alt screen shot at pm src
1
361,845
10,720,629,207
IssuesEvent
2019-10-26 19:04:48
SIGBlockchain/project_aurum
https://api.github.com/repos/SIGBlockchain/project_aurum
opened
ContainsPublicKey and ContainsWalletAddress (2 funcs)
Level: 1 Priority: High Type: Enhancement
ContainsPublicKey Given a contract and `public.AurumPublicKey`, output true if: - the `Sender` field of the contract is equal to the `Key` field of the `AurumPublicKey` ContainsWalletAddress Given a contract and a `public.AurumPublicKey`, output true if: - the `RecipientPubKeyHash` field of the contract is equal to the `Hash` field of the `AurumPublicKey` For those that are new, the public key is the public, safe version of the private key, and the wallet address is the wallet address is simply another name for the hash of the public key.
1.0
ContainsPublicKey and ContainsWalletAddress (2 funcs) - ContainsPublicKey Given a contract and `public.AurumPublicKey`, output true if: - the `Sender` field of the contract is equal to the `Key` field of the `AurumPublicKey` ContainsWalletAddress Given a contract and a `public.AurumPublicKey`, output true if: - the `RecipientPubKeyHash` field of the contract is equal to the `Hash` field of the `AurumPublicKey` For those that are new, the public key is the public, safe version of the private key, and the wallet address is the wallet address is simply another name for the hash of the public key.
priority
containspublickey and containswalletaddress funcs containspublickey given a contract and public aurumpublickey output true if the sender field of the contract is equal to the key field of the aurumpublickey containswalletaddress given a contract and a public aurumpublickey output true if the recipientpubkeyhash field of the contract is equal to the hash field of the aurumpublickey for those that are new the public key is the public safe version of the private key and the wallet address is the wallet address is simply another name for the hash of the public key
1
742,291
25,848,295,825
IssuesEvent
2022-12-13 08:34:19
TencentBlueKing/bk-iam-saas
https://api.github.com/repos/TencentBlueKing/bk-iam-saas
opened
可用性: 当权限中心不可用如何快速切换/恢复?
Type: Enhancement Priority: High Layer: Product Type: Proposal
从调用方, 例如bk-job/蓝盾/bk-cmdb的角度出发, 使用了 SDK 或者没有用 SDK 此时, 权限中心如果不可用, 是否能有机制可以确保 对应产品使用 最后的快照数据保证服务正常? ------- 涉及 OpenAPI和鉴权 API
1.0
可用性: 当权限中心不可用如何快速切换/恢复? - 从调用方, 例如bk-job/蓝盾/bk-cmdb的角度出发, 使用了 SDK 或者没有用 SDK 此时, 权限中心如果不可用, 是否能有机制可以确保 对应产品使用 最后的快照数据保证服务正常? ------- 涉及 OpenAPI和鉴权 API
priority
可用性 当权限中心不可用如何快速切换 恢复 从调用方 例如bk job 蓝盾 bk cmdb的角度出发 使用了 sdk 或者没有用 sdk 此时 权限中心如果不可用 是否能有机制可以确保 对应产品使用 最后的快照数据保证服务正常 涉及 openapi和鉴权 api
1
799,713
28,312,529,923
IssuesEvent
2023-04-10 16:40:21
CrowdDotDev/crowd.dev
https://api.github.com/repos/CrowdDotDev/crowd.dev
closed
[C-220] Merge members after selecting them in list view
High priority Improvement
from a Discord member: "Hello all! It could be time-saving and useful to be able to merge contact, selecting them in a list and merging them through the "actions" button." **Solution** When two members are selected in the members list, display the action "merge members" in the dropdown. <sub>From [SyncLinear.com](https://synclinear.com) | [C-220](https://linear.app/crowddotdev/issue/C-220/merge-members-after-selecting-them-in-list-view)</sub>
1.0
[C-220] Merge members after selecting them in list view - from a Discord member: "Hello all! It could be time-saving and useful to be able to merge contact, selecting them in a list and merging them through the "actions" button." **Solution** When two members are selected in the members list, display the action "merge members" in the dropdown. <sub>From [SyncLinear.com](https://synclinear.com) | [C-220](https://linear.app/crowddotdev/issue/C-220/merge-members-after-selecting-them-in-list-view)</sub>
priority
merge members after selecting them in list view from a discord member hello all it could be time saving and useful to be able to merge contact selecting them in a list and merging them through the actions button solution when two members are selected in the members list display the action merge members in the dropdown from
1
49,891
3,004,795,122
IssuesEvent
2015-07-26 09:47:22
TechReborn/TechReborn
https://api.github.com/repos/TechReborn/TechReborn
closed
Not accepting EU or RF from Immersive Engineering Cables
bug High priority
Machines won't accept power from this mods cables which provides both EU and RF. An Assembling machine not accepting power (Tested to work with IC2 cables) ![doesntlikepowah 1](https://cloud.githubusercontent.com/assets/7843312/8890115/f17897f2-3337-11e5-9930-c5c8d283565c.png) ![doesntlikepowah 3](https://cloud.githubusercontent.com/assets/7843312/8890114/f1785454-3337-11e5-903b-69999b98f1b8.png) Other mods accepting EU or RF respectively, including Open Computers, Galacticraft, Forestry, Railcraft, Binnies mods, IC2 ![doesntlikepowah 2](https://cloud.githubusercontent.com/assets/7843312/8890113/f1782628-3337-11e5-88ed-6558ab26b565.png)
1.0
Not accepting EU or RF from Immersive Engineering Cables - Machines won't accept power from this mods cables which provides both EU and RF. An Assembling machine not accepting power (Tested to work with IC2 cables) ![doesntlikepowah 1](https://cloud.githubusercontent.com/assets/7843312/8890115/f17897f2-3337-11e5-9930-c5c8d283565c.png) ![doesntlikepowah 3](https://cloud.githubusercontent.com/assets/7843312/8890114/f1785454-3337-11e5-903b-69999b98f1b8.png) Other mods accepting EU or RF respectively, including Open Computers, Galacticraft, Forestry, Railcraft, Binnies mods, IC2 ![doesntlikepowah 2](https://cloud.githubusercontent.com/assets/7843312/8890113/f1782628-3337-11e5-88ed-6558ab26b565.png)
priority
not accepting eu or rf from immersive engineering cables machines won t accept power from this mods cables which provides both eu and rf an assembling machine not accepting power tested to work with cables other mods accepting eu or rf respectively including open computers galacticraft forestry railcraft binnies mods
1
449,137
12,963,922,554
IssuesEvent
2020-07-20 19:35:43
aces/cbrain
https://api.github.com/repos/aces/cbrain
opened
NeuroHub vs CBRAIN navigation
Bug Enhancement NeuroHub Priority: High Refactoring User Interface
Make the navigation between the two site more uniform and remove dead-ends (where a user navigates to CBRAIN but cannot go back) This is a vague issue, to be documented further.
1.0
NeuroHub vs CBRAIN navigation - Make the navigation between the two site more uniform and remove dead-ends (where a user navigates to CBRAIN but cannot go back) This is a vague issue, to be documented further.
priority
neurohub vs cbrain navigation make the navigation between the two site more uniform and remove dead ends where a user navigates to cbrain but cannot go back this is a vague issue to be documented further
1
599,711
18,281,109,926
IssuesEvent
2021-10-05 03:33:58
hrsh7th/nvim-compe
https://api.github.com/repos/hrsh7th/nvim-compe
closed
Replace text on complete
enhancement new-feature high-priority difficulty:high next-version
Hi there. Suppose that a completion is triggered in the middle of a word, and a completion item was selected. Can we add an option such that the inserted text will *replace* everything till the next space or trigger character? The current default is to insert the new completion item. 10x
1.0
Replace text on complete - Hi there. Suppose that a completion is triggered in the middle of a word, and a completion item was selected. Can we add an option such that the inserted text will *replace* everything till the next space or trigger character? The current default is to insert the new completion item. 10x
priority
replace text on complete hi there suppose that a completion is triggered in the middle of a word and a completion item was selected can we add an option such that the inserted text will replace everything till the next space or trigger character the current default is to insert the new completion item
1
33,912
2,773,472,667
IssuesEvent
2015-05-03 17:26:20
openpolis/open_municipio
https://api.github.com/repos/openpolis/open_municipio
closed
tab con incarichi in scheda persona
om-senigallia priority-high UI
nella vista della persona prevedere un selettore con tab tra l'intestazione (con dati sulla persona) e le sezioni seguenti (da "Presenze" in giu'). prevedere un tab per ciascun incarico istituzionale (per ora solo Consigliere e Assessore) ricoperto dalla persona. in futuro possiamo prevedere di estendere ad incarichi di tipo diverso (al momento non ci sono statistiche o dati da mostrare per incarichi di altro tipo). estendere la url della persona per prevedere (in fondo) un "selettore" della carica..
1.0
tab con incarichi in scheda persona - nella vista della persona prevedere un selettore con tab tra l'intestazione (con dati sulla persona) e le sezioni seguenti (da "Presenze" in giu'). prevedere un tab per ciascun incarico istituzionale (per ora solo Consigliere e Assessore) ricoperto dalla persona. in futuro possiamo prevedere di estendere ad incarichi di tipo diverso (al momento non ci sono statistiche o dati da mostrare per incarichi di altro tipo). estendere la url della persona per prevedere (in fondo) un "selettore" della carica..
priority
tab con incarichi in scheda persona nella vista della persona prevedere un selettore con tab tra l intestazione con dati sulla persona e le sezioni seguenti da presenze in giu prevedere un tab per ciascun incarico istituzionale per ora solo consigliere e assessore ricoperto dalla persona in futuro possiamo prevedere di estendere ad incarichi di tipo diverso al momento non ci sono statistiche o dati da mostrare per incarichi di altro tipo estendere la url della persona per prevedere in fondo un selettore della carica
1
402,824
11,825,317,740
IssuesEvent
2020-03-21 12:10:08
elcronos/COVID-19
https://api.github.com/repos/elcronos/COVID-19
opened
Create a lambda function for our server api
backend cloud high priority
Currently, we just have a server with the first version of the model ready to run using docker. We will conserve that code, but you will need to create a new folder and adapt that code to work using AWS Lambda
1.0
Create a lambda function for our server api - Currently, we just have a server with the first version of the model ready to run using docker. We will conserve that code, but you will need to create a new folder and adapt that code to work using AWS Lambda
priority
create a lambda function for our server api currently we just have a server with the first version of the model ready to run using docker we will conserve that code but you will need to create a new folder and adapt that code to work using aws lambda
1
461,174
13,224,653,357
IssuesEvent
2020-08-17 19:34:14
AuthGuard/AuthGuard
https://api.github.com/repos/AuthGuard/AuthGuard
closed
Error codes are null
bug high priority
In most cases, the returned errors contain a null error code ```json { "code": null, "message": "Failed to generate token" } ```
1.0
Error codes are null - In most cases, the returned errors contain a null error code ```json { "code": null, "message": "Failed to generate token" } ```
priority
error codes are null in most cases the returned errors contain a null error code json code null message failed to generate token
1
261,666
8,244,839,858
IssuesEvent
2018-09-11 07:51:13
assistify/Rocket.Chat
https://api.github.com/repos/assistify/Rocket.Chat
closed
Assistify-AI in all channels
Cmp: Threads Priority: High Type: New Feature
# Motivation Until now, we only activated the knowledge-base tab in requests and topics. Since this structuring conversations was not intensively used, many users of Assistify could not experience the added value of Smarti analyzing conversations. With #418, requests and topics are not needed anymore in order to chat with people who know. Consequently, we need to find a way to provide value in ongoing discussions. The first-step AI-part for Smarti has been described in https://github.com/redlink-gmbh/smarti/issues/273 # Requirements - [X] The Smarti-Widget shall be alive on a tab in all room types (except private rooms) - [X] Smarti shall receive all messages in public channels (type `c`) in order to be able to create a team's body of knowledge - [X] Consequently, the AI component has to be able to react on a room type being changed: The information of a room being either publically readable or whether it should not be returned from the knowledge base has to be respectable when returning results. - [X] Theremost not be any dependency towards requests and topic room types, events or properties anymore
1.0
Assistify-AI in all channels - # Motivation Until now, we only activated the knowledge-base tab in requests and topics. Since this structuring conversations was not intensively used, many users of Assistify could not experience the added value of Smarti analyzing conversations. With #418, requests and topics are not needed anymore in order to chat with people who know. Consequently, we need to find a way to provide value in ongoing discussions. The first-step AI-part for Smarti has been described in https://github.com/redlink-gmbh/smarti/issues/273 # Requirements - [X] The Smarti-Widget shall be alive on a tab in all room types (except private rooms) - [X] Smarti shall receive all messages in public channels (type `c`) in order to be able to create a team's body of knowledge - [X] Consequently, the AI component has to be able to react on a room type being changed: The information of a room being either publically readable or whether it should not be returned from the knowledge base has to be respectable when returning results. - [X] Theremost not be any dependency towards requests and topic room types, events or properties anymore
priority
assistify ai in all channels motivation until now we only activated the knowledge base tab in requests and topics since this structuring conversations was not intensively used many users of assistify could not experience the added value of smarti analyzing conversations with requests and topics are not needed anymore in order to chat with people who know consequently we need to find a way to provide value in ongoing discussions the first step ai part for smarti has been described in requirements the smarti widget shall be alive on a tab in all room types except private rooms smarti shall receive all messages in public channels type c in order to be able to create a team s body of knowledge consequently the ai component has to be able to react on a room type being changed the information of a room being either publically readable or whether it should not be returned from the knowledge base has to be respectable when returning results theremost not be any dependency towards requests and topic room types events or properties anymore
1
626,787
19,843,269,747
IssuesEvent
2022-01-21 01:12:46
oilshell/oil
https://api.github.com/repos/oilshell/oil
closed
spec-cpp tests: re-enable redirect case on C-I (hanging)
high-priority devtools c-i pending-release
I'm trying to update the stats, and it's very slow ... Looks like the redirect test is hanging... ``` andy@lenny:~/git/oilshell/oil$ pst -p 20160 bash,20160,andy test/spec-runner.sh all-parallel osh └─bash,20164 test/spec-runner.sh _all-parallel osh └─xargs,20174 -n 1 -P 1 -- test/spec-runner.sh run-cases └─bash,5762 test/spec-runner.sh run-cases redirect └─python2,5766 benchmarks/time_.py --tsv --time-fmt %.2f --output _tmp/spec/cpp/redirect.task.txt -- test/spec-cpp.sh run-with-osh-eval redirect --format html --stats-file _tmp/spec/cpp/redirect.stats.txt --stats-template %(num_cases)d %(osh_num_passed)d %(osh_num_failed)d %(osh_failures_allowed)d %(osh_ALT_delta)d └─time-helper,5767 -d \011 -o _tmp/spec/cpp/redirect.task.txt -x -e -- test/spec-cpp.sh run-with-osh-eval redirect --format html --stats-file _tmp/spec/cpp/redirect.stats.txt --stats-template %(num_cases)d %(osh_num_passed)d %(osh_num_failed)d %(osh_failures_allowed)d %(osh_ALT_delta)d └─bash,5768 test/spec-cpp.sh run-with-osh-eval redirect --format html --stats-file _tmp/spec/cpp/redirect.stats.txt --stats-template %(num_cases)d %(osh_num_passed)d %(osh_num_failed)d %(osh_failures_allowed)d %(osh_ALT_delta)d └─python2,5779 test/sh_spec.py --tmp-env /home/andy/git/oilshell/oil/test/../_tmp/spec-tmp/redirect.test.sh --path-env /home/andy/git/oilshell/oil/test/../spec/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/andy/bin --env-pair LOCALE_ARCHIVE= --env-pair ASAN_OPTIONS=detect_leaks=0 --env-pair REPO_ROOT=/home/andy/git/oilshell/oil/test/.. spec/redirect.test.sh --tsv-output _tmp/spec/cpp/redirect.tsv /home/andy/git/oilshell/oil/bin/osh /home/andy/git/oilshell/oil/bin/osh_eval /home/andy/git/oilshell/oil/_bin/osh_eval.dbg --format html --stats-file _tmp/spec/cpp/redirect.stats.txt --stats-template %(num_cases)d %(osh_num_passed)d %(osh_num_failed)d %(osh_failures_allowed)d %(osh_ALT_delta)d └─osh_eval.dbg,6320 ```
1.0
spec-cpp tests: re-enable redirect case on C-I (hanging) - I'm trying to update the stats, and it's very slow ... Looks like the redirect test is hanging... ``` andy@lenny:~/git/oilshell/oil$ pst -p 20160 bash,20160,andy test/spec-runner.sh all-parallel osh └─bash,20164 test/spec-runner.sh _all-parallel osh └─xargs,20174 -n 1 -P 1 -- test/spec-runner.sh run-cases └─bash,5762 test/spec-runner.sh run-cases redirect └─python2,5766 benchmarks/time_.py --tsv --time-fmt %.2f --output _tmp/spec/cpp/redirect.task.txt -- test/spec-cpp.sh run-with-osh-eval redirect --format html --stats-file _tmp/spec/cpp/redirect.stats.txt --stats-template %(num_cases)d %(osh_num_passed)d %(osh_num_failed)d %(osh_failures_allowed)d %(osh_ALT_delta)d └─time-helper,5767 -d \011 -o _tmp/spec/cpp/redirect.task.txt -x -e -- test/spec-cpp.sh run-with-osh-eval redirect --format html --stats-file _tmp/spec/cpp/redirect.stats.txt --stats-template %(num_cases)d %(osh_num_passed)d %(osh_num_failed)d %(osh_failures_allowed)d %(osh_ALT_delta)d └─bash,5768 test/spec-cpp.sh run-with-osh-eval redirect --format html --stats-file _tmp/spec/cpp/redirect.stats.txt --stats-template %(num_cases)d %(osh_num_passed)d %(osh_num_failed)d %(osh_failures_allowed)d %(osh_ALT_delta)d └─python2,5779 test/sh_spec.py --tmp-env /home/andy/git/oilshell/oil/test/../_tmp/spec-tmp/redirect.test.sh --path-env /home/andy/git/oilshell/oil/test/../spec/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/andy/bin --env-pair LOCALE_ARCHIVE= --env-pair ASAN_OPTIONS=detect_leaks=0 --env-pair REPO_ROOT=/home/andy/git/oilshell/oil/test/.. spec/redirect.test.sh --tsv-output _tmp/spec/cpp/redirect.tsv /home/andy/git/oilshell/oil/bin/osh /home/andy/git/oilshell/oil/bin/osh_eval /home/andy/git/oilshell/oil/_bin/osh_eval.dbg --format html --stats-file _tmp/spec/cpp/redirect.stats.txt --stats-template %(num_cases)d %(osh_num_passed)d %(osh_num_failed)d %(osh_failures_allowed)d %(osh_ALT_delta)d └─osh_eval.dbg,6320 ```
priority
spec cpp tests re enable redirect case on c i hanging i m trying to update the stats and it s very slow looks like the redirect test is hanging andy lenny git oilshell oil pst p bash andy test spec runner sh all parallel osh └─bash test spec runner sh all parallel osh └─xargs n p test spec runner sh run cases └─bash test spec runner sh run cases redirect └─ benchmarks time py tsv time fmt output tmp spec cpp redirect task txt test spec cpp sh run with osh eval redirect format html stats file tmp spec cpp redirect stats txt stats template num cases d osh num passed d osh num failed d osh failures allowed d osh alt delta d └─time helper d o tmp spec cpp redirect task txt x e test spec cpp sh run with osh eval redirect format html stats file tmp spec cpp redirect stats txt stats template num cases d osh num passed d osh num failed d osh failures allowed d osh alt delta d └─bash test spec cpp sh run with osh eval redirect format html stats file tmp spec cpp redirect stats txt stats template num cases d osh num passed d osh num failed d osh failures allowed d osh alt delta d └─ test sh spec py tmp env home andy git oilshell oil test tmp spec tmp redirect test sh path env home andy git oilshell oil test spec bin usr local sbin usr local bin usr sbin usr bin sbin bin usr games usr local games snap bin home andy bin env pair locale archive env pair asan options detect leaks env pair repo root home andy git oilshell oil test spec redirect test sh tsv output tmp spec cpp redirect tsv home andy git oilshell oil bin osh home andy git oilshell oil bin osh eval home andy git oilshell oil bin osh eval dbg format html stats file tmp spec cpp redirect stats txt stats template num cases d osh num passed d osh num failed d osh failures allowed d osh alt delta d └─osh eval dbg
1
742,858
25,873,779,022
IssuesEvent
2022-12-14 05:59:31
wso2/api-manager
https://api.github.com/repos/wso2/api-manager
opened
Error loading keys page after rejecting Application Key Generation Workflow item
Type/Bug Priority/High Component/APIM
### Description After rejecting the application key generation workflow approval item from the Admin portal, a blank page is shown when accessing the respective application keys or credentials page. <img width="1512" alt="Screen Shot 2022-12-14 at 11 26 15 AM" src="https://user-images.githubusercontent.com/8557410/207517828-b0f618f3-60e7-43e5-aff2-7fe27468e6d5.png"> ### Steps to Reproduce 1. Enable the Application Key Generation Workflow according to [1]. 2. Generate keys after creating an Application. 3. Now access the admin portal and reject the approval request. 4. Go back to the devportal application page and try to access the keys page as below. (I have rejected the sandbox request) <img width="1512" alt="Screen Shot 2022-12-14 at 11 28 19 AM" src="https://user-images.githubusercontent.com/8557410/207518122-5572d7a9-66b0-4453-a4f1-a72c2c82d4b1.png"> [1] https://apim.docs.wso2.com/en/latest/consume/manage-application/advanced-topics/adding-an-application-key-generation-workflow/ ### Affected Component APIM ### Version 4.2.0-SNAPSHOT ### Environment Details (with versions) _No response_ ### Relevant Log Output _No response_ ### Related Issues _No response_ ### Suggested Labels _No response_
1.0
Error loading keys page after rejecting Application Key Generation Workflow item - ### Description After rejecting the application key generation workflow approval item from the Admin portal, a blank page is shown when accessing the respective application keys or credentials page. <img width="1512" alt="Screen Shot 2022-12-14 at 11 26 15 AM" src="https://user-images.githubusercontent.com/8557410/207517828-b0f618f3-60e7-43e5-aff2-7fe27468e6d5.png"> ### Steps to Reproduce 1. Enable the Application Key Generation Workflow according to [1]. 2. Generate keys after creating an Application. 3. Now access the admin portal and reject the approval request. 4. Go back to the devportal application page and try to access the keys page as below. (I have rejected the sandbox request) <img width="1512" alt="Screen Shot 2022-12-14 at 11 28 19 AM" src="https://user-images.githubusercontent.com/8557410/207518122-5572d7a9-66b0-4453-a4f1-a72c2c82d4b1.png"> [1] https://apim.docs.wso2.com/en/latest/consume/manage-application/advanced-topics/adding-an-application-key-generation-workflow/ ### Affected Component APIM ### Version 4.2.0-SNAPSHOT ### Environment Details (with versions) _No response_ ### Relevant Log Output _No response_ ### Related Issues _No response_ ### Suggested Labels _No response_
priority
error loading keys page after rejecting application key generation workflow item description after rejecting the application key generation workflow approval item from the admin portal a blank page is shown when accessing the respective application keys or credentials page img width alt screen shot at am src steps to reproduce enable the application key generation workflow according to generate keys after creating an application now access the admin portal and reject the approval request go back to the devportal application page and try to access the keys page as below i have rejected the sandbox request img width alt screen shot at am src affected component apim version snapshot environment details with versions no response relevant log output no response related issues no response suggested labels no response
1
212,862
7,243,455,123
IssuesEvent
2018-02-14 11:48:01
carduque/FeerBoxClient
https://api.github.com/repos/carduque/FeerBoxClient
reopened
Piano buttons
High Priority Type: Enhancement
Fer que si s'apreten tots els butons consecutivament no es processin, m'explico, ideal per fer test que funcionen tots els butons
1.0
Piano buttons - Fer que si s'apreten tots els butons consecutivament no es processin, m'explico, ideal per fer test que funcionen tots els butons
priority
piano buttons fer que si s apreten tots els butons consecutivament no es processin m explico ideal per fer test que funcionen tots els butons
1
830,605
32,018,799,326
IssuesEvent
2023-09-22 01:29:52
steedos/steedos-platform
https://api.github.com/repos/steedos/steedos-platform
closed
[Bug]: 微页面-页面分配的移动端/桌面端配置不生效
bug done priority: High
### Description <img width="1208" alt="image" src="https://github.com/steedos/steedos-platform/assets/26241897/e91dd3c3-de0a-49f7-aac7-a547e27add2d"> ### Steps To Reproduce 重现步骤 配置微页面的页面分配 ### Version 版本 2.5.13-beta.21
1.0
[Bug]: 微页面-页面分配的移动端/桌面端配置不生效 - ### Description <img width="1208" alt="image" src="https://github.com/steedos/steedos-platform/assets/26241897/e91dd3c3-de0a-49f7-aac7-a547e27add2d"> ### Steps To Reproduce 重现步骤 配置微页面的页面分配 ### Version 版本 2.5.13-beta.21
priority
微页面 页面分配的移动端 桌面端配置不生效 description img width alt image src steps to reproduce 重现步骤 配置微页面的页面分配 version 版本 beta
1
741,040
25,777,765,718
IssuesEvent
2022-12-09 13:27:00
bounswe/bounswe2022group4
https://api.github.com/repos/bounswe/bounswe2022group4
closed
Frontend: Delete Post Backend Connection
Category - To Do Priority - High whom: individual Difficulty - Hard Language - React.js Team - Frontend
I have created Post structure before with delete button. Now i need to connect it to backend, in order to make delete button functional. Also, user can only delete its own posts. Steps: 1) Add backend end point for delete post in frontend structure as a async function. 2) Use authentication token for the delete post request 3) Use useEffect hook in order to re-render page in order to view updated Post structure Reviewer: @BeratDamar Deadline: 04.12.2022 23.59
1.0
Frontend: Delete Post Backend Connection - I have created Post structure before with delete button. Now i need to connect it to backend, in order to make delete button functional. Also, user can only delete its own posts. Steps: 1) Add backend end point for delete post in frontend structure as a async function. 2) Use authentication token for the delete post request 3) Use useEffect hook in order to re-render page in order to view updated Post structure Reviewer: @BeratDamar Deadline: 04.12.2022 23.59
priority
frontend delete post backend connection i have created post structure before with delete button now i need to connect it to backend in order to make delete button functional also user can only delete its own posts steps add backend end point for delete post in frontend structure as a async function use authentication token for the delete post request use useeffect hook in order to re render page in order to view updated post structure reviewer beratdamar deadline
1
85,300
3,689,197,098
IssuesEvent
2016-02-25 15:42:08
GRIS-UdeM/Octogris
https://api.github.com/repos/GRIS-UdeM/Octogris
closed
Octogris 3: Random Target: One Way et Return sont inversés
bug high priority
Dans la Trajectoire Random Target, les options One Way et Return sont inversées. One Way fonctionne de façon continue, alors que Return «reset» la trajectoire à la fin de chaque cycle. C'est l'inverse de ce que l'on trouve ailleurs.
1.0
Octogris 3: Random Target: One Way et Return sont inversés - Dans la Trajectoire Random Target, les options One Way et Return sont inversées. One Way fonctionne de façon continue, alors que Return «reset» la trajectoire à la fin de chaque cycle. C'est l'inverse de ce que l'on trouve ailleurs.
priority
octogris random target one way et return sont inversés dans la trajectoire random target les options one way et return sont inversées one way fonctionne de façon continue alors que return «reset» la trajectoire à la fin de chaque cycle c est l inverse de ce que l on trouve ailleurs
1
720,607
24,798,591,903
IssuesEvent
2022-10-24 19:32:44
dogukanarslan/software-development-practice
https://api.github.com/repos/dogukanarslan/software-development-practice
closed
Generate mockup
priority:medium research effort:high
Create an initial mockup to have a better understanding of the upcoming project.
1.0
Generate mockup - Create an initial mockup to have a better understanding of the upcoming project.
priority
generate mockup create an initial mockup to have a better understanding of the upcoming project
1
307,690
9,420,614,335
IssuesEvent
2019-04-11 03:07:06
DIYgod/RSSHub
https://api.github.com/repos/DIYgod/RSSHub
closed
redis 内存碎片率高优化
bug help wanted high priority
### 实际发生了什么? ![](https://i.loli.net/2019/03/27/5c9adfa6f13fa.jpg) 上图为 demo 所用的 redis 运行 7.5 小时后的内存占用情况,mem_fragmentation_ratio = used_memory_rss / used_memory = 1.5,且 used_memory_rss 会持续升高,最后会吃光 vps 全部内存 ### 部署相关信息 | Env | Value | | ------------------ | ------------- | | OS | ubuntu 16.04.10 | | Redis version | 5.0.4 | ### 原因排查 资料说内存碎片率高是对数据的更改频繁,而且数据之间的大小相差很大导致的 需要排查下具体是哪些 key 导致的 ### 解决方案 已尝试开启 activedefrag,无效 设置 maxmemory 也无效,maxmemory 只会控制 used_memory 的值 临时解决方案是定时重启 redis ![](https://i.loli.net/2019/03/27/5c9aef54d2781.jpg)
1.0
redis 内存碎片率高优化 - ### 实际发生了什么? ![](https://i.loli.net/2019/03/27/5c9adfa6f13fa.jpg) 上图为 demo 所用的 redis 运行 7.5 小时后的内存占用情况,mem_fragmentation_ratio = used_memory_rss / used_memory = 1.5,且 used_memory_rss 会持续升高,最后会吃光 vps 全部内存 ### 部署相关信息 | Env | Value | | ------------------ | ------------- | | OS | ubuntu 16.04.10 | | Redis version | 5.0.4 | ### 原因排查 资料说内存碎片率高是对数据的更改频繁,而且数据之间的大小相差很大导致的 需要排查下具体是哪些 key 导致的 ### 解决方案 已尝试开启 activedefrag,无效 设置 maxmemory 也无效,maxmemory 只会控制 used_memory 的值 临时解决方案是定时重启 redis ![](https://i.loli.net/2019/03/27/5c9aef54d2781.jpg)
priority
redis 内存碎片率高优化 实际发生了什么? 上图为 demo 所用的 redis 运行 小时后的内存占用情况,mem fragmentation ratio used memory rss used memory ,且 used memory rss 会持续升高,最后会吃光 vps 全部内存 部署相关信息 env value os ubuntu redis version 原因排查 资料说内存碎片率高是对数据的更改频繁,而且数据之间的大小相差很大导致的 需要排查下具体是哪些 key 导致的 解决方案 已尝试开启 activedefrag,无效 设置 maxmemory 也无效,maxmemory 只会控制 used memory 的值 临时解决方案是定时重启 redis
1
678,183
23,190,606,835
IssuesEvent
2022-08-01 12:20:16
SAP/xsk
https://api.github.com/repos/SAP/xsk
closed
[Migration] Synonym name generation for same table names in different schemas
bug wontfix priority-medium effort-high tooling customer investigation / discussion _Subs_ incomplete
### Background Let's have the following setup: * SCHEMA1 with TABLE_A * SCHEMA2 with TABLE_A * CALC_VIEW1 with `DataSource id="TABLE_A"` and `schemaName="SCHEMA1" columnObjectName="TABLE_A"` * CALC_VIEW2 with `DataSource id="TABLE_A"` and `schemaName="SCHEMA2" columnObjectName="TABLE_A"` then the migration process will change the DataSources for both calcivews to only `resourceUri=TABLE_A` and the synonyms that need to be generated will look like the following ```json { "TABLE_A": { "target": { "object": "TABLE_A", "schema": "SCHEMA1" } }, "TABLE_A": { "target": { "object": "TABLE_A", "schema": "SCHEMA2" } }, } ``` which is invalid and cannot be deployed as it will fail with error saying `duplicate key; the key is already defined at location`. ### Target Choose a new naming pattern for synonyms and resourceUri transformation in calculation views
1.0
[Migration] Synonym name generation for same table names in different schemas - ### Background Let's have the following setup: * SCHEMA1 with TABLE_A * SCHEMA2 with TABLE_A * CALC_VIEW1 with `DataSource id="TABLE_A"` and `schemaName="SCHEMA1" columnObjectName="TABLE_A"` * CALC_VIEW2 with `DataSource id="TABLE_A"` and `schemaName="SCHEMA2" columnObjectName="TABLE_A"` then the migration process will change the DataSources for both calcivews to only `resourceUri=TABLE_A` and the synonyms that need to be generated will look like the following ```json { "TABLE_A": { "target": { "object": "TABLE_A", "schema": "SCHEMA1" } }, "TABLE_A": { "target": { "object": "TABLE_A", "schema": "SCHEMA2" } }, } ``` which is invalid and cannot be deployed as it will fail with error saying `duplicate key; the key is already defined at location`. ### Target Choose a new naming pattern for synonyms and resourceUri transformation in calculation views
priority
synonym name generation for same table names in different schemas background let s have the following setup with table a with table a calc with datasource id table a and schemaname columnobjectname table a calc with datasource id table a and schemaname columnobjectname table a then the migration process will change the datasources for both calcivews to only resourceuri table a and the synonyms that need to be generated will look like the following json table a target object table a schema table a target object table a schema which is invalid and cannot be deployed as it will fail with error saying duplicate key the key is already defined at location target choose a new naming pattern for synonyms and resourceuri transformation in calculation views
1
522,379
15,158,849,446
IssuesEvent
2021-02-12 02:21:04
3rdIteration/btcrecover
https://api.github.com/repos/3rdIteration/btcrecover
closed
Expand/Update BIP39 passphrase support (To work with extra wallet types)
High Priority Should Be Fixed enhancement
Basically all the wallet types supported in seed recover. Plus add relevant unit tests.
1.0
Expand/Update BIP39 passphrase support (To work with extra wallet types) - Basically all the wallet types supported in seed recover. Plus add relevant unit tests.
priority
expand update passphrase support to work with extra wallet types basically all the wallet types supported in seed recover plus add relevant unit tests
1
290,380
8,894,092,089
IssuesEvent
2019-01-16 02:22:48
Automattic/VIP-Coding-Standards
https://api.github.com/repos/Automattic/VIP-Coding-Standards
opened
Hooks logic fails for short array syntax
Priority: High Type: Bug
Changing to short array syntax in `Tests/Hooks/AlwaysReturnInFilterUnitTests.php` and `Tests/Hooks/PreGetPostsUnitTests.php` result in those tests failing. It suggests that some of the processing logic in each sniff is flawed for short array syntaxes. Both long and short array syntax should be retained in the unit tests. ## Environment Use `php -v` and `composer show` to get versions. | Question | Answer | ------------------------| ------- | PHP version | 7.2.0 | PHP_CodeSniffer version | 3.4.0 | VIPCS version | `master` ## Tested Against `master` branch? - [x] I have verified the issue still exists in the `master` branch of VIPCS.
1.0
Hooks logic fails for short array syntax - Changing to short array syntax in `Tests/Hooks/AlwaysReturnInFilterUnitTests.php` and `Tests/Hooks/PreGetPostsUnitTests.php` result in those tests failing. It suggests that some of the processing logic in each sniff is flawed for short array syntaxes. Both long and short array syntax should be retained in the unit tests. ## Environment Use `php -v` and `composer show` to get versions. | Question | Answer | ------------------------| ------- | PHP version | 7.2.0 | PHP_CodeSniffer version | 3.4.0 | VIPCS version | `master` ## Tested Against `master` branch? - [x] I have verified the issue still exists in the `master` branch of VIPCS.
priority
hooks logic fails for short array syntax changing to short array syntax in tests hooks alwaysreturninfilterunittests php and tests hooks pregetpostsunittests php result in those tests failing it suggests that some of the processing logic in each sniff is flawed for short array syntaxes both long and short array syntax should be retained in the unit tests environment use php v and composer show to get versions question answer php version php codesniffer version vipcs version master tested against master branch i have verified the issue still exists in the master branch of vipcs
1
126,711
5,002,709,331
IssuesEvent
2016-12-11 14:57:30
openaq/openaq-fetch
https://api.github.com/repos/openaq/openaq-fetch
closed
Skopje, Macedonia - Data Sources
help wanted high priority new data
Hourly data for each pollutant, using the following configuration shown in this attached image: ![screen shot 2015-08-30 at 9 31 52 pm](https://cloud.githubusercontent.com/assets/13404290/9570641/cbb08fbe-4f5e-11e5-8677-86a13ec6b358.png)
1.0
Skopje, Macedonia - Data Sources - Hourly data for each pollutant, using the following configuration shown in this attached image: ![screen shot 2015-08-30 at 9 31 52 pm](https://cloud.githubusercontent.com/assets/13404290/9570641/cbb08fbe-4f5e-11e5-8677-86a13ec6b358.png)
priority
skopje macedonia data sources hourly data for each pollutant using the following configuration shown in this attached image
1
681,333
23,305,662,507
IssuesEvent
2022-08-08 00:16:50
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
improve warning/error handling in data-context
unification jira-migration fast-follows-2 priority: high stage: review
## **Summary** The file watching & need to explicitly retry on error is intentional but can appear inconsistent. Ensure we have clear internal/external documentation on the stages of the flow here. Error and warning is also confusing and unclear - there's many different fields that can be assigned to. Re: feedback from [~accountid:615b6c0199b4b8006a9e53b2] in bug hunt: - - - Basically, The experience is inconsistent. I'm getting sent three conflicting signals: * Some changes to the config file are noticed immediately. So clearly the file in being watched. * But sometimes I have to click a button to get changes noticed. Why is the watch not noticing solutions as well as problems? [https://cypressio.slack.com/archives/C02MYBT9Y5S/p1649099346926519](https://cypressio.slack.com/archives/C02MYBT9Y5S/p1649099346926519|smart-card) ## **Acceptance Criteria** * Should… * Should also… ### **Resources** Any Notion documents, Google documents, Figma Boards ### **Open Pull Requests** Any PRs needed for review ┆Issue is synchronized with this [Jira Task](https://cypress-io.atlassian.net/browse/UNIFY-1502) by [Unito](https://www.unito.io) ┆author: Tim Griesser ┆friendlyId: UNIFY-1502 ┆priority: High ┆sprint: Fast Follows 2 ┆taskType: Task
1.0
improve warning/error handling in data-context - ## **Summary** The file watching & need to explicitly retry on error is intentional but can appear inconsistent. Ensure we have clear internal/external documentation on the stages of the flow here. Error and warning is also confusing and unclear - there's many different fields that can be assigned to. Re: feedback from [~accountid:615b6c0199b4b8006a9e53b2] in bug hunt: - - - Basically, The experience is inconsistent. I'm getting sent three conflicting signals: * Some changes to the config file are noticed immediately. So clearly the file in being watched. * But sometimes I have to click a button to get changes noticed. Why is the watch not noticing solutions as well as problems? [https://cypressio.slack.com/archives/C02MYBT9Y5S/p1649099346926519](https://cypressio.slack.com/archives/C02MYBT9Y5S/p1649099346926519|smart-card) ## **Acceptance Criteria** * Should… * Should also… ### **Resources** Any Notion documents, Google documents, Figma Boards ### **Open Pull Requests** Any PRs needed for review ┆Issue is synchronized with this [Jira Task](https://cypress-io.atlassian.net/browse/UNIFY-1502) by [Unito](https://www.unito.io) ┆author: Tim Griesser ┆friendlyId: UNIFY-1502 ┆priority: High ┆sprint: Fast Follows 2 ┆taskType: Task
priority
improve warning error handling in data context summary the file watching need to explicitly retry on error is intentional but can appear inconsistent ensure we have clear internal external documentation on the stages of the flow here error and warning is also confusing and unclear there s many different fields that can be assigned to re feedback from in bug hunt basically the experience is inconsistent i m getting sent three conflicting signals some changes to the config file are noticed immediately so clearly the file in being watched but sometimes i have to click a button to get changes noticed why is the watch not noticing solutions as well as problems acceptance criteria should… should also… resources any notion documents google documents figma boards open pull requests any prs needed for review ┆issue is synchronized with this by ┆author tim griesser ┆friendlyid unify ┆priority high ┆sprint fast follows ┆tasktype task
1
590,341
17,776,536,140
IssuesEvent
2021-08-30 20:01:08
ArctosDB/arctos
https://api.github.com/repos/ArctosDB/arctos
closed
Media loading somewhat largeish image files into Arctos
Priority-High (Needed for work) Function-Media Enhancement Help wanted Grant funded (Arctos) Tool - Image Uploader
Issue Documentation is http://handbook.arctosdb.org/how_to/How-to-Use-Issues-in-Arctos.html **Is your feature request related to a problem? Please describe.** Jpegs produced from DSLRs and exported at 300 dpi are approximately 5-12 mb each and are somewhat large - could cause problems for users of lower quality connectivity. **Describe what you're trying to accomplish** Create solution for viewing high resolution images uploaded to Arctos or at separate allocation **Describe the solution you'd like** Find solution for large size images on Arctos or transformation of images uploaded to arctos **Describe alternatives you've considered** Reexport files at a lower dpi Set up guidelines for image size being uploaded to Arctos Create medium size image option in addition to high resolution image option (This is what TORCH TCN is doing with all images (thumbnail, medium, high)) **Additional context** Preparing TACC allocations for uploading a large amount (~34,000) images Zhuang, Mingna Thu 1/7/2021 2:09 PM To: Dusty <dustymc@gmail.com>; ​​I could export them at a lower res. I accidentally did that a little while ago and they ended up being 1.5-2 mb each. They seemed ok, but can't get a ton of detail when zoomed in. will file issue though describing the size. So should I go ahead with Chris in the meantime for a separate allocation? or still need discussion? From: Dusty <dustymc@gmail.com> Sent: Thursday, January 7, 2021 2:01 PM To: Zhuang, Mingna Subject: Re: inquiry about TACC allocations for an Arctos-member project at UTEP Hmm, that sorta seems "huge." Definitely won't break anything, but wouldn't be much fun for someone on a dicey connection either. Maybe an Issue if you don't mind - this sort of thing is going to become routine if we transition to the s3 loader, now's a good time to start looking for a solution. 
On Thu, Jan 7, 2021 at 12:56 PM Zhuang, Mingna <mzhuang@utep.edu> wrote: ​On average, they're 5-12 MB, although mostly 7-8 mb for jpegs and then about 25-27 mb for the RAWs From: Dusty <dustymc@gmail.com> Sent: Thursday, January 7, 2021 1:53 PM To: Zhuang, Mingna Cc: Christopher Jordan Subject: Re: inquiry about TACC allocations for an Arctos-member project at UTEP How big are the JPGs? If they're not huge (whatever that means...) then they're probably OK as-is. On Thu, Jan 7, 2021 at 12:46 PM Zhuang, Mingna <mzhuang@utep.edu> wrote: ​Hi Chris, Sorry about the delay in response. Thanks for the information. I don't think we need image conversion and will just be uploading images to link to Arctos (unless there is something else I should consider Dusty?). If we do get a separate allocation, would we have to renew the allocation every year? Thanks, Vicky **Priority** Please assign a priority-label. Unprioritized issues gets sent into a black hole of despair.
1.0
Media loading somewhat largeish image files into Arctos - Issue Documentation is http://handbook.arctosdb.org/how_to/How-to-Use-Issues-in-Arctos.html **Is your feature request related to a problem? Please describe.** Jpegs produced from DSLRs and exported at 300 dpi are approximately 5-12 mb each and are somewhat large - could cause problems for users of lower quality connectivity. **Describe what you're trying to accomplish** Create solution for viewing high resolution images uploaded to Arctos or at separate allocation **Describe the solution you'd like** Find solution for large size images on Arctos or transformation of images uploaded to arctos **Describe alternatives you've considered** Reexport files at a lower dpi Set up guidelines for image size being uploaded to Arctos Create medium size image option in addition to high resolution image option (This is what TORCH TCN is doing with all images (thumbnail, medium, high)) **Additional context** Preparing TACC allocations for uploading a large amount (~34,000) images Zhuang, Mingna Thu 1/7/2021 2:09 PM To: Dusty <dustymc@gmail.com>; ​​I could export them at a lower res. I accidentally did that a little while ago and they ended up being 1.5-2 mb each. They seemed ok, but can't get a ton of detail when zoomed in. will file issue though describing the size. So should I go ahead with Chris in the meantime for a separate allocation? or still need discussion? From: Dusty <dustymc@gmail.com> Sent: Thursday, January 7, 2021 2:01 PM To: Zhuang, Mingna Subject: Re: inquiry about TACC allocations for an Arctos-member project at UTEP Hmm, that sorta seems "huge." Definitely won't break anything, but wouldn't be much fun for someone on a dicey connection either. Maybe an Issue if you don't mind - this sort of thing is going to become routine if we transition to the s3 loader, now's a good time to start looking for a solution. 
On Thu, Jan 7, 2021 at 12:56 PM Zhuang, Mingna <mzhuang@utep.edu> wrote: ​On average, they're 5-12 MB, although mostly 7-8 mb for jpegs and then about 25-27 mb for the RAWs From: Dusty <dustymc@gmail.com> Sent: Thursday, January 7, 2021 1:53 PM To: Zhuang, Mingna Cc: Christopher Jordan Subject: Re: inquiry about TACC allocations for an Arctos-member project at UTEP How big are the JPGs? If they're not huge (whatever that means...) then they're probably OK as-is. On Thu, Jan 7, 2021 at 12:46 PM Zhuang, Mingna <mzhuang@utep.edu> wrote: ​Hi Chris, Sorry about the delay in response. Thanks for the information. I don't think we need image conversion and will just be uploading images to link to Arctos (unless there is something else I should consider Dusty?). If we do get a separate allocation, would we have to renew the allocation every year? Thanks, Vicky **Priority** Please assign a priority-label. Unprioritized issues gets sent into a black hole of despair.
priority
media loading somewhat largeish image files into arctos issue documentation is is your feature request related to a problem please describe jpegs produced from dslrs and exported at dpi are approximately mb each and are somewhat large could cause problems for users of lower quality connectivity describe what you re trying to accomplish create solution for viewing high resolution images uploaded to arctos or at separate allocation describe the solution you d like find solution for large size images on arctos or transformation of images uploaded to arctos describe alternatives you ve considered reexport files at a lower dpi set up guidelines for image size being uploaded to arctos create medium size image option in addition to high resolution image option this is what torch tcn is doing with all images thumbnail medium high additional context preparing tacc allocations for uploading a large amount images zhuang mingna thu pm to dusty ​​i could export them at a lower res i accidentally did that a little while ago and they ended up being mb each they seemed ok but can t get a ton of detail when zoomed in will file issue though describing the size so should i go ahead with chris in the meantime for a separate allocation or still need discussion from dusty sent thursday january pm to zhuang mingna subject re inquiry about tacc allocations for an arctos member project at utep hmm that sorta seems huge definitely won t break anything but wouldn t be much fun for someone on a dicey connection either maybe an issue if you don t mind this sort of thing is going to become routine if we transition to the loader now s a good time to start looking for a solution on thu jan at pm zhuang mingna wrote ​on average they re mb although mostly mb for jpegs and then about mb for the raws from dusty sent thursday january pm to zhuang mingna cc christopher jordan subject re inquiry about tacc allocations for an arctos member project at utep how big are the jpgs if they re not huge whatever 
that means then they re probably ok as is on thu jan at pm zhuang mingna wrote ​hi chris sorry about the delay in response thanks for the information i don t think we need image conversion and will just be uploading images to link to arctos unless there is something else i should consider dusty if we do get a separate allocation would we have to renew the allocation every year thanks vicky priority please assign a priority label unprioritized issues gets sent into a black hole of despair
1
670,484
22,691,193,293
IssuesEvent
2022-07-04 20:36:18
vertica/spark-connector
https://api.github.com/repos/vertica/spark-connector
closed
[Json Read Support] Infer complex type schema on read
enhancement size: 3 High Priority
## Description To support reading complex types with JSON, nested arrays and struct schema needs to be inferred on reads.
1.0
[Json Read Support] Infer complex type schema on read - ## Description To support reading complex types with JSON, nested arrays and struct schema needs to be inferred on reads.
priority
infer complex type schema on read description to support reading complex types with json nested arrays and struct schema needs to be inferred on reads
1
187,836
6,761,549,268
IssuesEvent
2017-10-25 02:29:56
paleobiodb/classic
https://api.github.com/repos/paleobiodb/classic
closed
Bad link in Reference summary
bug: high priority
See the attached image below. When you click on 4 taxonomic opinions, it should show a clickable list of those 4 opinions. Instead, it shows the 2 taxa entered from this paper. This link needs to be updated. ![meng](https://user-images.githubusercontent.com/19557452/29432796-e8ab8bd6-836a-11e7-8fb7-f1967c95d867.jpeg)
1.0
Bad link in Reference summary - See the attached image below. When you click on 4 taxonomic opinions, it should show a clickable list of those 4 opinions. Instead, it shows the 2 taxa entered from this paper. This link needs to be updated. ![meng](https://user-images.githubusercontent.com/19557452/29432796-e8ab8bd6-836a-11e7-8fb7-f1967c95d867.jpeg)
priority
bad link in reference summary see the attached image below when you click on taxonomic opinions it should show a clickable list of those opinions instead it shows the taxa entered from this paper this link needs to be updated
1
506,015
14,656,564,482
IssuesEvent
2020-12-28 13:42:26
Edgeryders-Participio/realities
https://api.github.com/repos/Edgeryders-Participio/realities
opened
"View details" link for realities from graph view is broken
Priority: High bug ui
Not adjusted to the new url format lol
1.0
"View details" link for realities from graph view is broken - Not adjusted to the new url format lol
priority
view details link for realities from graph view is broken not adjusted to the new url format lol
1
68,344
3,286,531,945
IssuesEvent
2015-10-29 03:28:41
cs2103aug2015-t11-2j/main
https://api.github.com/repos/cs2103aug2015-t11-2j/main
opened
URGENT: Parser Event rewrite
priority.high type.bug
Parser cannot account for tasks that require directions, e.g. "add take bus from Clementi to Orchard"
1.0
URGENT: Parser Event rewrite - Parser cannot account for tasks that require directions, e.g. "add take bus from Clementi to Orchard"
priority
urgent parser event rewrite parser cannot account for tasks that require directions e g add take bus from clementi to orchard
1
426,485
12,373,083,416
IssuesEvent
2020-05-18 21:42:42
returntocorp/semgrep
https://api.github.com/repos/returntocorp/semgrep
closed
Either change or add option to make check IDs independent of config location
priority:high
Right now, check IDs depend on the location of the config. E.g., using the config `r/r2c` from semgrep.live, vs. downloading and running that same config from a local directory "dir", yield two different check ids: `r2c.python....` vs. `dir.r2c.python...` This breaks archiving with the semgrep action, as the archive can not easily be run locally. The ask here is to either: 1. Make check IDs a constant function of config location 2. Add an option to make them so
1.0
Either change or add option to make check IDs independent of config location - Right now, check IDs depend on the location of the config. E.g., using the config `r/r2c` from semgrep.live, vs. downloading and running that same config from a local directory "dir", yield two different check ids: `r2c.python....` vs. `dir.r2c.python...` This breaks archiving with the semgrep action, as the archive can not easily be run locally. The ask here is to either: 1. Make check IDs a constant function of config location 2. Add an option to make them so
priority
either change or add option to make check ids independent of config location right now check ids depend on the location of the config e g using the config r from semgrep live vs downloading and running that same config from a local directory dir yield two different check ids python vs dir python this breaks archiving with the semgrep action as the archive can not easily be run locally the ask here is to either make check ids a constant function of config location add an option to make them so
1
211,409
7,200,885,226
IssuesEvent
2018-02-05 20:35:29
ESCOMP/ctsm
https://api.github.com/repos/ESCOMP/ctsm
closed
Clm40 1850 and HIST compsets are incorrect
priority: high type: bug
This is with latest CLM clm4_5_18_r274. Two of the Clm40 compsets are wrong. Hence, their testing is also incorrect. ``` diff --git a/cime_config/config_compsets.xml b/cime_config/config_compsets.xml index 5a633da..fbed222 100644 --- a/cime_config/config_compsets.xml +++ b/cime_config/config_compsets.xml @@ -274,12 +274,12 @@ <compset> <alias>I1850Clm40SpCruGs</alias> - <lname>2000_DATM%CRU_CLM40%SP_SICE_SOCN_RTM_SGLC_SWAV</lname> + <lname>1850_DATM%CRU_CLM40%SP_SICE_SOCN_RTM_SGLC_SWAV</lname> </compset> <compset> <alias>IHistClm40SpCruGs</alias> - <lname>2000_DATM%CRU_CLM40%SP_SICE_SOCN_RTM_SGLC_SWAV</lname> + <lname>HIST_DATM%CRU_CLM40%SP_SICE_SOCN_RTM_SGLC_SWAV</lname> </compset> ``` This goes back to clm4_5_16_r249 with the reworked compsets and testlist.
1.0
Clm40 1850 and HIST compsets are incorrect - This is with latest CLM clm4_5_18_r274. Two of the Clm40 compsets are wrong. Hence, their testing is also incorrect. ``` diff --git a/cime_config/config_compsets.xml b/cime_config/config_compsets.xml index 5a633da..fbed222 100644 --- a/cime_config/config_compsets.xml +++ b/cime_config/config_compsets.xml @@ -274,12 +274,12 @@ <compset> <alias>I1850Clm40SpCruGs</alias> - <lname>2000_DATM%CRU_CLM40%SP_SICE_SOCN_RTM_SGLC_SWAV</lname> + <lname>1850_DATM%CRU_CLM40%SP_SICE_SOCN_RTM_SGLC_SWAV</lname> </compset> <compset> <alias>IHistClm40SpCruGs</alias> - <lname>2000_DATM%CRU_CLM40%SP_SICE_SOCN_RTM_SGLC_SWAV</lname> + <lname>HIST_DATM%CRU_CLM40%SP_SICE_SOCN_RTM_SGLC_SWAV</lname> </compset> ``` This goes back to clm4_5_16_r249 with the reworked compsets and testlist.
priority
and hist compsets are incorrect this is with latest clm two of the compsets are wrong hence their testing is also incorrect diff git a cime config config compsets xml b cime config config compsets xml index a cime config config compsets xml b cime config config compsets xml datm cru sp sice socn rtm sglc swav datm cru sp sice socn rtm sglc swav datm cru sp sice socn rtm sglc swav hist datm cru sp sice socn rtm sglc swav this goes back to with the reworked compsets and testlist
1
698,270
23,972,037,084
IssuesEvent
2022-09-13 08:33:55
younginnovations/iatipublisher
https://api.github.com/repos/younginnovations/iatipublisher
closed
#57 Bug:Activity>>Document-link issue
type: bug priority: high Frontend
Context - Desktop - Chrome 102.0.5005.61 Precondition - https://stage.iatipublisher.yipl.com.np/ - Username: Ram - Password: 12345678 - for created activity **Issue 1 : When a user attempts to upload more than one URL, an error occurs.** steps Document link >> click add more document link enter more than one URL in edit form. Actual Result - An error occurs Excepted Result - Error shouldn't occur. https://user-images.githubusercontent.com/78422663/180162682-f3895983-5ed4-4700-a04d-59007932b900.mp4 **Issue 2: Inappropriate attribute name** Actual Result ![Screenshot from 2022-07-08 11-40-47](https://user-images.githubusercontent.com/78422663/177926328-1bbdd6a4-3038-493a-8227-ddb2f675eeb0.png) Excepted Result - Attribute Name should be iso-date. - please refer to the IATI document(https://iatistandard.org/en/iati-standard/203/activity-standard/iati-activities/iati-activity/document-link/document-date/) **Issue 3: Repeating the same information** Steps - Document-link >> code - Select the same drop-down option multiple times Actual Result - Page gets saved Excepted Result - A proper validation should be in the form https://user-images.githubusercontent.com/78422663/177928117-05e6e68f-5792-4121-a3b7-755b9a018cb7.mp4 **Issue 4: Document attribute is not available in IATI Standard Document** Steps - Document-link >> document Actual Result ![Screenshot from 2022-07-08 11-59-31](https://user-images.githubusercontent.com/78422663/177928669-b9e81689-5c54-43ae-89b4-793f5748aa18.png) Excepted Result -Please follow the IATI standard documentation( https://iatistandard.org/en/iati-standard/203/activity-standard/iati-activities/iati-activity/document-link/ ![Screenshot from 2022-07-08 12-00-24](https://user-images.githubusercontent.com/78422663/177928911-7f69f6b7-c5a4-4e9e-a9a7-b5e3fca3c3b5.png) **Issue 5:Cannot edit the URL attributes** Steps - Document-link >> edit Actual Result - cannot delete the URL and save it. Excepted Result -Edit functionality should work properly. 
https://user-images.githubusercontent.com/78422663/177962161-ffb8ccf9-e356-47c7-8d28-99c5d071f4cb.mp4 **Issue 6:Tooltip is missing from the attribute in the edit form** Actual Result ![Screenshot from 2022-07-08 15-15-58](https://user-images.githubusercontent.com/78422663/177963167-35823272-c1c7-4628-a3ce-320c52997cc2.png) Expected Result - The tooltip should be present in the edit form. **Issue 7: The tooltip doesn't provide appropriate information** Actual Result https://user-images.githubusercontent.com/78422663/177965530-5e18bc03-7032-472c-b0a4-ce2862e7f765.mp4 Expected Result - A tooltip should provide appropriate information. **Issue 8: An unwanted message for user ** Steps - Document-link >> click save button Actual Result -Unwanted message is displayed Excepted Result -This message should be hidden. https://user-images.githubusercontent.com/78422663/178209794-89dfca84-4ae9-4707-8996-d780e77b650a.mp4
1.0
#57 Bug:Activity>>Document-link issue - Context - Desktop - Chrome 102.0.5005.61 Precondition - https://stage.iatipublisher.yipl.com.np/ - Username: Ram - Password: 12345678 - for created activity **Issue 1 : When a user attempts to upload more than one URL, an error occurs.** steps Document link >> click add more document link enter more than one URL in edit form. Actual Result - An error occurs Excepted Result - Error shouldn't occur. https://user-images.githubusercontent.com/78422663/180162682-f3895983-5ed4-4700-a04d-59007932b900.mp4 **Issue 2: Inappropriate attribute name** Actual Result ![Screenshot from 2022-07-08 11-40-47](https://user-images.githubusercontent.com/78422663/177926328-1bbdd6a4-3038-493a-8227-ddb2f675eeb0.png) Excepted Result - Attribute Name should be iso-date. - please refer to the IATI document(https://iatistandard.org/en/iati-standard/203/activity-standard/iati-activities/iati-activity/document-link/document-date/) **Issue 3: Repeating the same information** Steps - Document-link >> code - Select the same drop-down option multiple times Actual Result - Page gets saved Excepted Result - A proper validation should be in the form https://user-images.githubusercontent.com/78422663/177928117-05e6e68f-5792-4121-a3b7-755b9a018cb7.mp4 **Issue 4: Document attribute is not available in IATI Standard Document** Steps - Document-link >> document Actual Result ![Screenshot from 2022-07-08 11-59-31](https://user-images.githubusercontent.com/78422663/177928669-b9e81689-5c54-43ae-89b4-793f5748aa18.png) Excepted Result -Please follow the IATI standard documentation( https://iatistandard.org/en/iati-standard/203/activity-standard/iati-activities/iati-activity/document-link/ ![Screenshot from 2022-07-08 12-00-24](https://user-images.githubusercontent.com/78422663/177928911-7f69f6b7-c5a4-4e9e-a9a7-b5e3fca3c3b5.png) **Issue 5:Cannot edit the URL attributes** Steps - Document-link >> edit Actual Result - cannot delete the URL and save it. 
Excepted Result -Edit functionality should work properly. https://user-images.githubusercontent.com/78422663/177962161-ffb8ccf9-e356-47c7-8d28-99c5d071f4cb.mp4 **Issue 6:Tooltip is missing from the attribute in the edit form** Actual Result ![Screenshot from 2022-07-08 15-15-58](https://user-images.githubusercontent.com/78422663/177963167-35823272-c1c7-4628-a3ce-320c52997cc2.png) Expected Result - The tooltip should be present in the edit form. **Issue 7: The tooltip doesn't provide appropriate information** Actual Result https://user-images.githubusercontent.com/78422663/177965530-5e18bc03-7032-472c-b0a4-ce2862e7f765.mp4 Expected Result - A tooltip should provide appropriate information. **Issue 8: An unwanted message for user ** Steps - Document-link >> click save button Actual Result -Unwanted message is displayed Excepted Result -This message should be hidden. https://user-images.githubusercontent.com/78422663/178209794-89dfca84-4ae9-4707-8996-d780e77b650a.mp4
priority
bug activity document link issue context desktop chrome precondition username ram password for created activity issue when a user attempts to upload more than one url an error occurs steps document link click add more document link enter more than one url in edit form actual result an error occurs excepted result error shouldn t occur issue inappropriate attribute name actual result excepted result attribute name should be iso date please refer to the iati document issue repeating the same information steps document link code select the same drop down option multiple times actual result page gets saved excepted result a proper validation should be in the form issue document attribute is not available in iati standard document steps document link document actual result excepted result please follow the iati standard documentation issue cannot edit the url attributes steps document link edit actual result cannot delete the url and save it excepted result edit functionality should work properly issue tooltip is missing from the attribute in the edit form actual result expected result the tooltip should be present in the edit form issue the tooltip doesn t provide appropriate information actual result expected result a tooltip should provide appropriate information issue an unwanted message for user steps document link click save button actual result unwanted message is displayed excepted result this message should be hidden
1
661,227
22,044,348,120
IssuesEvent
2022-05-29 20:55:55
leih-lokal/LeihLokalVerwaltung
https://api.github.com/repos/leih-lokal/LeihLokalVerwaltung
closed
Change rental item after saving -> change status of both items
bug priority high
I found a reason why sometimes items are marked as "rented" when they are not: 1. When we create a rental, the item gets marked as "unavailable" 2. When we now see a mistake in the item number and open the rental and change it, the old item stays "unavailable", while the newly inserted item is correctly marked as "unavailable" as well How could we fix this? Does the form have access to its previous content?
1.0
Change rental item after saving -> change status of both items - I found a reason why sometimes items are marked as "rented" when they are not: 1. When we create a rental, the item gets marked as "unavailable" 2. When we now see a mistake in the item number and open the rental and change it, the old item stays "unavailable", while the newly inserted item is correctly marked as "unavailable" as well How could we fix this? Does the form have access to its previous content?
priority
change rental item after saving change status of both items i found a reason why sometimes items are marked as rented when they are not when we create a rental the item get s marked as unavailable when we now see a mistake in the item number and open the rental and change it the old item stays unavailable while the newly inserted item is correctly marked as unavailable as well how could we fix this does the form have access to it s previous content
1
335,346
10,152,315,263
IssuesEvent
2019-08-05 23:11:02
teambit/bit
https://api.github.com/repos/teambit/bit
closed
Workspace configuration's overrides should include imported components
area/config priority/high type/feature
### Description When using the `overrides` feature in a project, the rules defined in it must apply to imported components as well. ### Describe the solution you'd like All override rules from the workspace should apply to imported components that fit the glob-pattern (or specific ID). Additionally, Bit should keep supporting the merge strategy for the override rules when taking to account the rule defined for the imported component (which should act as an override rule for a specific-ID in the workspace config). ### Additional context This feature should only be released when #1863 is ready.
1.0
Workspace configuration's overrides should include imported components - ### Description When using the `overrides` feature in a project, the rules defined in it must apply to imported components as well. ### Describe the solution you'd like All override rules from the workspace should apply to imported components that fit the glob-pattern (or specific ID). Additionally, Bit should keep supporting the merge strategy for the override rules when taking to account the rule defined for the imported component (which should act as an override rule for a specific-ID in the workspace config). ### Additional context This feature should only be released when #1863 is ready.
priority
workspace configuration s overrides should include imported components description when using the overrides feature in a project the rules defined in it must apply to imported components as well describe the solution you d like all override rules from the workspace should apply to imported components that fit the glob pattern or specific id additionally bit should keep supporting the merge strategy for the override rules when taking to account the rule defined for the imported component which should act as an override rule for a specific id in the workspace config additional context this feature should only be released when is ready
1
581,347
17,291,637,451
IssuesEvent
2021-07-24 22:04:39
status-im/status-desktop
https://api.github.com/repos/status-im/status-desktop
reopened
Fetch more messages functionality is gone in public chats
Chat bug high-priority
`Fetch more messages` is missing now. It should be present across the chats (1x1, public chat, group chat, community channel) ![image](https://user-images.githubusercontent.com/82375995/124922141-e070a580-e001-11eb-951d-2e1390f42209.png) <img width="1012" alt="Screenshot 2021-07-08 at 15 34 07" src="https://user-images.githubusercontent.com/82375995/124922282-05651880-e002-11eb-8d44-e86c3fd5f903.png">
1.0
Fetch more messages functionality is gone in public chats - `Fetch more messages` is missing now. It should be present across the chats (1x1, public chat, group chat, community channel) ![image](https://user-images.githubusercontent.com/82375995/124922141-e070a580-e001-11eb-951d-2e1390f42209.png) <img width="1012" alt="Screenshot 2021-07-08 at 15 34 07" src="https://user-images.githubusercontent.com/82375995/124922282-05651880-e002-11eb-8d44-e86c3fd5f903.png">
priority
fetch more messages functionality is gone in public chats fetch more messages is missing now it should be present across the chats public chat group chat community channel img width alt screenshot at src
1
787,140
27,708,068,198
IssuesEvent
2023-03-14 12:35:22
vscentrum/vsc-software-stack
https://api.github.com/repos/vscentrum/vsc-software-stack
closed
KrakenUniq
difficulty: easy C/C++ Perl new priority: high site:ugent
* link to support ticket: [#2023021460001804](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=110217) * website: https://github.com/fbreitwieser/krakenuniq * installation docs: https://github.com/fbreitwieser/krakenuniq#installation + https://github.com/fbreitwieser/krakenuniq/blob/master/install_krakenuniq.sh * toolchain: `...` * easyblock to use: `...` * required dependencies: * [ ] Jellyfish * optional dependencies: * [ ] ... * notes: * only use `install_krakenuniq.sh` to understand how to install `KrakenUniq`, don't use it directly * effort: *(TBD)*
1.0
KrakenUniq - * link to support ticket: [#2023021460001804](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=110217) * website: https://github.com/fbreitwieser/krakenuniq * installation docs: https://github.com/fbreitwieser/krakenuniq#installation + https://github.com/fbreitwieser/krakenuniq/blob/master/install_krakenuniq.sh * toolchain: `...` * easyblock to use: `...` * required dependencies: * [ ] Jellyfish * optional dependencies: * [ ] ... * notes: * only use `install_krakenuniq.sh` to understand how to install `KrakenUniq`, don't use it directly * effort: *(TBD)*
priority
krakenuniq link to support ticket website installation docs toolchain easyblock to use required dependencies jellyfish optional dependencies notes only use install krakenuniq sh to understand how to install krakenuniq don t use it directly effort tbd
1
9,987
2,609,851,996
IssuesEvent
2015-02-26 17:04:58
learningequality/ka-lite
https://api.github.com/repos/learningequality/ka-lite
closed
Manage tab not working for coach user.
bug has PR high priority ux issue
Branch: `develop` Expected behavior: when clicking the manage tab it must active and show manage navigation. Steps to reproduce: 1. log-in as coach 2. navigate to manage tab. ![ka-lite-a](https://cloud.githubusercontent.com/assets/8663934/6387949/a16989bc-bdce-11e4-8543-22c49d9bfc4f.gif)
1.0
Manage tab not working for coach user. - Branch: `develop` Expected behavior: when clicking the manage tab it must active and show manage navigation. Steps to reproduce: 1. log-in as coach 2. navigate to manage tab. ![ka-lite-a](https://cloud.githubusercontent.com/assets/8663934/6387949/a16989bc-bdce-11e4-8543-22c49d9bfc4f.gif)
priority
manage tab not working for coach user branch develop expected behavior when clicking the manage tab it must active and show manage navigation steps to reproduce log in as coach navigate to manage tab
1
245,224
7,882,745,971
IssuesEvent
2018-06-27 00:30:28
steemit/devportal
https://api.github.com/repos/steemit/devportal
closed
DP: Rake: Add Rake tasks for -py -rb -* repos
5 priority/high
Model after #163 **AC** - [x] task for https://github.com/steemit/devportal-tutorials-py - [x] task for https://github.com/steemit/devportal-tutorials-rb - [x] task for updating py, rb, js in one command
1.0
DP: Rake: Add Rake tasks for -py -rb -* repos - Model after #163 **AC** - [x] task for https://github.com/steemit/devportal-tutorials-py - [x] task for https://github.com/steemit/devportal-tutorials-rb - [x] task for updating py, rb, js in one command
priority
dp rake add rake tasks for py rb repos model after ac task for task for task for updating py rb js in one command
1
477,082
13,755,184,023
IssuesEvent
2020-10-06 18:03:29
RoboJackets/robocup-software
https://api.github.com/repos/RoboJackets/robocup-software
closed
Port referee to ROS
area / support exp / master (4) priority / high type / new feature
## Description Remove the current `RefereeModule` and port the functionality to ROS ## Requirements - The referee is able to listen to external commands when configured with external referee - The referee ignores external commands and exposes services for quick restart/commands - Existing functionality is not lost
1.0
Port referee to ROS - ## Description Remove the current `RefereeModule` and port the functionality to ROS ## Requirements - The referee is able to listen to external commands when configured with external referee - The referee ignores external commands and exposes services for quick restart/commands - Existing functionality is not lost
priority
port referee to ros description remove the current refereemodule and port the functionality to ros requirements the referee is able to listen to external commands when configured with external referee the referee ignores external commands and exposes services for quick restart commands existing functionality is not lost
1
659,118
21,916,416,617
IssuesEvent
2022-05-21 22:27:49
ProjectG-Plugins/CrossplatForms
https://api.github.com/repos/ProjectG-Plugins/CrossplatForms
closed
Luckperms integration on proxies
enhancement priority: high bungeecord velocity
### What feature do you want to see added? BungeeCord and Velocity don't have any sort of good native permission system to register permissions and defaults to. Hook into LuckPerms if present to do that stuff. ### Are there any alternatives? suffer
1.0
Luckperms integration on proxies - ### What feature do you want to see added? BungeeCord and Velocity don't have any sort of good native permission system to register permissions and defaults to. Hook into LuckPerms if present to do that stuff. ### Are there any alternatives? suffer
priority
luckperms integration on proxies what feature do you want to see added bungeecord and velocity don t have any sort of good native permission system to register permissions and defaults to hook into luckperms if present to do that stuff are there any alternatives suffer
1
679,062
23,220,168,418
IssuesEvent
2022-08-02 17:24:27
edgedb/edgedb
https://api.github.com/repos/edgedb/edgedb
closed
Adding links to types that have a global view puts schema into wedged state
high priority migrations
<!-- Please search existing issues to avoid creating duplicates. --> - EdgeDB Version: EdgeDB CLI 2.0.1+93a7fab EdgeDB 2.0+710b779 - OS Version: Linux LTS Can't create migration on provided schema for: - removing .email index on member's object (member.esdl); - removing .email exclusive constraint on member's object; Steps to Reproduce: 1. Create first migration and apply; 2. remove .email index or exclusive constraint from member's object SDL; 3. create migration; <!-- If the issue is about a query error, please also provide your schema --> Schema: [schema](https://github.com/lu-zen/edgedb-migration-creation-bug/)
1.0
Adding links to types that have a global view puts schema into wedged state - <!-- Please search existing issues to avoid creating duplicates. --> - EdgeDB Version: EdgeDB CLI 2.0.1+93a7fab EdgeDB 2.0+710b779 - OS Version: Linux LTS Can't create migration on provided schema for: - removing .email index on member's object (member.esdl); - removing .email exclusive constraint on member's object; Steps to Reproduce: 1. Create first migration and apply; 2. remove .email index or exclusive constraint from member's object SDL; 3. create migration; <!-- If the issue is about a query error, please also provide your schema --> Schema: [schema](https://github.com/lu-zen/edgedb-migration-creation-bug/)
priority
adding links to types that have a global view puts schema into wedged state edgedb version edgedb cli edgedb os version linux lts can t create migration on provided schema for removing email index on member s object member esdl removing email exclusive constraint on member s object steps to reproduce create first migration and apply remove email index or exclusive constraint from member s object sdl create migration schema
1
419,191
12,218,777,339
IssuesEvent
2020-05-01 20:06:57
ampproject/amphtml
https://api.github.com/repos/ampproject/amphtml
opened
Implement a `gulp release` task to build a complete AMP release
P1: High Priority Type: Feature Request WG: infra
Currently Google is the only body that has the complete picture on how AMP releases are created. As a result, it is not easy for open-source contributors to execute the exact same set of commands and processes without foreknowledge (@mdmower is working on improving the documentation around this, see #27100) Google has a shell script that performs various other gulp tasks, file moves, and other manipulations to take a code snapshot of the amphtml repository and make a complete, deliverable AMP release. This effort is three-fold: * Make the process available for others, e.g., to make self-hosting easier * Make this a "one command" process - running `gulp release` anywhere should generate the same exact files that the CDN serves * Reduce the technical debt caused by five years of patch-work on the Google-internal release shell script // cc @ampproject/wg-foundation-onboarding
1.0
Implement a `gulp release` task to build a complete AMP release - Currently Google is the only body that has the complete picture on how AMP releases are created. As a result, it is not easy for open-source contributors to execute the exact same set of commands and processes without foreknowledge (@mdmower is working on improving the documentation around this, see #27100) Google has a shell script that performs various other gulp tasks, file moves, and other manipulations to take a code snapshot of the amphtml repository and make a complete, deliverable AMP release. This effort is three-fold: * Make the process available for others, e.g., to make self-hosting easier * Make this a "one command" process - running `gulp release` anywhere should generate the same exact files that the CDN serves * Reduce the technical debt caused by five years of patch-work on the Google-internal release shell script // cc @ampproject/wg-foundation-onboarding
priority
implement a gulp release task to build a complete amp release currently google is the only body that has the complete picture on how amp releases are created as a result it is not easy for open source contributors to execute the exact same set of commands and processes without foreknowledge mdmower is working on improving the documentation around this see google has a shell script that performs various other gulp tasks file moves and other manipulations to take a code snapshot of the amphtml repository and make a complete deliverable amp release this effort is three fold make the process available for others e g to make self hosting easier make this a one command process running gulp release anywhere should generate the same exact files that the cdn serves reduce the technical debt caused by five years of patch work on the google internal release shell script cc ampproject wg foundation onboarding
1
212,884
7,243,700,944
IssuesEvent
2018-02-14 12:45:36
pmem/issues
https://api.github.com/repos/pmem/issues
opened
tests: port RUNTEST functionality from linux to windows (KEEP_GOING=y & CLEAN_FAILED=y)
Exposure: High OS: Windows Priority: 4 low Type: Feature
It would be good if on Windows there was a way to run tests without stopping on first failed test like on Linux. ``` # Normally the first failed test terminates the test run. If KEEP_GOING # is set, continues executing all tests. If any tests fail, once all tests # have completed reports number of failures, lists failed tests and exits # with error status. # #KEEP_GOING=y # # This option works only if KEEP_GOING=y, then if CLEAN_FAILED is set # all data created by test is removed on test failure. # #CLEAN_FAILED=y ```
1.0
tests: port RUNTEST functionality from linux to windows (KEEP_GOING=y & CLEAN_FAILED=y) - It would be good if on Windows there was a way to run tests without stopping on first failed test like on Linux. ``` # Normally the first failed test terminates the test run. If KEEP_GOING # is set, continues executing all tests. If any tests fail, once all tests # have completed reports number of failures, lists failed tests and exits # with error status. # #KEEP_GOING=y # # This option works only if KEEP_GOING=y, then if CLEAN_FAILED is set # all data created by test is removed on test failure. # #CLEAN_FAILED=y ```
priority
tests port runtest functionality from linux to windows keep going y clean failed y it would be good if on windows there was a way to run tests without stopping on first failed test like on linux normally the first failed test terminates the test run if keep going is set continues executing all tests if any tests fail once all tests have completed reports number of failures lists failed tests and exits with error status keep going y this option works only if keep going y then if clean failed is set all data created by test is removed on test failure clean failed y
1
491,301
14,148,598,527
IssuesEvent
2020-11-10 22:52:47
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
[complex] torch.{sqrt, abs}: does not match numpy
high priority module: complex module: correctness (silent) triaged
Happens only on CPU **torch.cfloat** ```python >>> import torch >>> torch.__version__ '1.8.0a0+73e121d' >>> import numpy as np >>> x = torch.tensor(0.-1.0000e+20j) >>> x.sqrt() tensor(inf-infj) >>> np.sqrt(x.numpy()) (7071067600-7071067600j) ``` **torch.cdouble** ```python >>> import torch >>> x = torch.tensor(-1.0000e+20-4988429.2000j, dtype=torch.complex128) >>> x.sqrt() tensor(0.-1.0000e+10j, dtype=torch.complex128) >>> import numpy as np >>> np.sqrt(x.numpy()) (0.00024942146-10000000000j) >>> x.cuda().sqrt().cpu().numpy() == np.sqrt(x.numpy()) True ``` cc @ezyang @gchanan @zou3519 @bdhirsh @heitorschueroff @anjali411 @dylanbespalko @mruberry @vishwakftw @jianyuh @nikitaved @pearu
1.0
[complex] torch.{sqrt, abs}: does not match numpy - Happens only on CPU **torch.cfloat** ```python >>> import torch >>> torch.__version__ '1.8.0a0+73e121d' >>> import numpy as np >>> x = torch.tensor(0.-1.0000e+20j) >>> x.sqrt() tensor(inf-infj) >>> np.sqrt(x.numpy()) (7071067600-7071067600j) ``` **torch.cdouble** ```python >>> import torch >>> x = torch.tensor(-1.0000e+20-4988429.2000j, dtype=torch.complex128) >>> x.sqrt() tensor(0.-1.0000e+10j, dtype=torch.complex128) >>> import numpy as np >>> np.sqrt(x.numpy()) (0.00024942146-10000000000j) >>> x.cuda().sqrt().cpu().numpy() == np.sqrt(x.numpy()) True ``` cc @ezyang @gchanan @zou3519 @bdhirsh @heitorschueroff @anjali411 @dylanbespalko @mruberry @vishwakftw @jianyuh @nikitaved @pearu
priority
torch sqrt abs does not match numpy happens only on cpu torch cfloat python import torch torch version import numpy as np x torch tensor x sqrt tensor inf infj np sqrt x numpy torch cdouble python import torch x torch tensor dtype torch x sqrt tensor dtype torch import numpy as np np sqrt x numpy x cuda sqrt cpu numpy np sqrt x numpy true cc ezyang gchanan bdhirsh heitorschueroff dylanbespalko mruberry vishwakftw jianyuh nikitaved pearu
1
141,329
5,434,988,910
IssuesEvent
2017-03-05 13:07:50
sync-for-science/tracking
https://api.github.com/repos/sync-for-science/tracking
closed
Create a locked-down local stack
high-priority
Goal: a docker stack that can run on a locked-down local laptop, including: * `api` (not mapped to external network) * `tests` (base url: https://test-suite.pilot-prep.syncfor.science) * `research-app-api` (not mapped to network) * `app` (base url: https://demo-app.pilot-prep.syncfor.science) * `nginx` ... this should include an `nginx` instance that mounts SSL certs/keys for the two relevant domains, and a static config file that proxies the two domains to the `tests` and `research-app-api`. TODO: - [x] @jmandel get SSL certificates for the two subdomains - [x] @jmandel create a docker-compose file + nginx container to make this work :-) - [ ] @erikwiffin document the OAuth client details for both apps, so we can share them with the Pilot vendors
1.0
Create a locked-down local stack - Goal: a docker stack that can run on a locked-down local laptop, including: * `api` (not mapped to external network) * `tests` (base url: https://test-suite.pilot-prep.syncfor.science) * `research-app-api` (not mapped to network) * `app` (base url: https://demo-app.pilot-prep.syncfor.science) * `nginx` ... this should include an `nginx` instance that mounts SSL certs/keys for the two relevant domains, and a static config file that proxies the two domains to the `tests` and `research-app-api`. TODO: - [x] @jmandel get SSL certificates for the two subdomains - [x] @jmandel create a docker-compose file + nginx container to make this work :-) - [ ] @erikwiffin document the OAuth client details for both apps, so we can share them with the Pilot vendors
priority
create a locked down local stack goal a docker stack that can run on a locked down local laptop including api not mapped to external network tests base url research app api not mapped to network app base url nginx this should include an nginx instance that mounts ssl certs keys for the two relevant domains and a static config file that proxies the two domains to the tests and research app api todo jmandel get ssl certificates for the two subdomains jmandel create a docker compose file nginx container to make this work erikwiffin document the oauth client details for both apps so we can share them with the pilot vendors
1
29,770
2,717,333,881
IssuesEvent
2015-04-11 05:40:11
cs2103jan2015-f13-2j/main
https://api.github.com/repos/cs2103jan2015-f13-2j/main
closed
bug in "add" command. normal description are interpreted as keyword.
priority.high type.bug
A floating task such as below will be interpreted wrongly, due to the way of identifying keywords: add priority booking pass from jetstar the keyword "priority" and "from" will be interpreted and identified, but error occurs.
1.0
bug in "add" command. normal description are interpreted as keyword. - A floating task such as below will be interpreted wrongly, due to the way of identifying keywords: add priority booking pass from jetstar the keyword "priority" and "from" will be interpreted and identified, but error occurs.
priority
bug in add command normal description are interpreted as keyword a floating task such as below will be interpreted wrongly due to the way of identifying keywords add priority booking pass from jetstar the keyword priority and from will be interpreted and identified but error occurs
1
61,812
3,154,767,269
IssuesEvent
2015-09-17 02:33:12
cs2103aug2015-t10-3j/main
https://api.github.com/repos/cs2103aug2015-t10-3j/main
closed
A user can can add tags to tasks
priority.high
so that he can search through tags for tasks which heI may not remember fully.
1.0
A user can can add tags to tasks - so that he can search through tags for tasks which heI may not remember fully.
priority
a user can can add tags to tasks so that he can search through tags for tasks which hei may not remember fully
1
290,929
8,915,008,730
IssuesEvent
2019-01-19 01:08:07
QuantEcon/lecture-source-jl
https://api.github.com/repos/QuantEcon/lecture-source-jl
closed
Equality in Tests
bug high-priority
Basically, we need to make sure that whenever we're comparing a value to a float, we're using approximation (maybe with some tolerance, or a truncated value) and not strict equality. That is: ``` @test x == 1.234567863126371 # BAD! @test x ≈ 1.234567863126371 # Good @test x ≈ 1.23456786 # Also good, if necessary. ``` This will also need to be built in a separate PR and evaluated cross-platform (another plus for containerization...) And add it to the style guide, too, for good measure.
1.0
Equality in Tests - Basically, we need to make sure that whenever we're comparing a value to a float, we're using approximation (maybe with some tolerance, or a truncated value) and not strict equality. That is: ``` @test x == 1.234567863126371 # BAD! @test x ≈ 1.234567863126371 # Good @test x ≈ 1.23456786 # Also good, if necessary. ``` This will also need to be built in a separate PR and evaluated cross-platform (another plus for containerization...) And add it to the style guide, too, for good measure.
priority
equality in tests basically we need to make sure that whenever we re comparing a value to a float we re using approximation maybe with some tolerance or a truncated value and not strict equality that is test x bad test x ≈ good test x ≈ also good if necessary this will also need to be built in a separate pr and evaluated cross platform another plus for containerization and add it to the style guide too for good measure
1
547,914
16,049,368,844
IssuesEvent
2021-04-22 17:07:04
sopra-fs21-group-22/client
https://api.github.com/repos/sopra-fs21-group-22/client
closed
Card Deck with 80 cards
high priority task
- [x] There has to be a limited amount of cards in the game, with a certain amount of duplicates for each card, therefore the chance of drawing e.g. a ‘BANG’ card should be higher than e.g. the ‘BEER’ card - [x] Each card should show the basic information of what it does ⏰ Time estimate: 1h 📌 This task is part of the user story #5.
1.0
Card Deck with 80 cards - - [x] There has to be a limited amount of cards in the game, with a certain amount of duplicates for each card, therefore the chance of drawing e.g. a ‘BANG’ card should be higher than e.g. the ‘BEER’ card - [x] Each card should show the basic information of what it does ⏰ Time estimate: 1h 📌 This task is part of the user story #5.
priority
card deck with cards there has to be a limited amount of cards in the game with a certain amount of duplicates for each card therefore the chance of drawing e g a ‘bang’ card should be higher than e g the ‘beer’ card each card should show the basic information of what it does ⏰ time estimate 📌 this task is part of the user story
1
636,999
20,617,032,907
IssuesEvent
2022-03-07 14:11:16
Sciebo-RDS/Sciebo-RDS
https://api.github.com/repos/Sciebo-RDS/Sciebo-RDS
closed
Owncloud Share interrupts describo load
bug high-priority
**Describe the bug** If you select a shared folder, describo cannot interact with it. **To Reproduce** Steps to reproduce the behavior: 1. Share a folder 2. open this folder in describo 3. See error in console **Expected behavior** Describo opens the folder and uses the metadata file. <!-- In general --> **Additional context** This bug is a bumper, because the most research projects will be shared with collegues, so sciebo RDS is not usable at all, if this bug occurs.
1.0
Owncloud Share interrupts describo load - **Describe the bug** If you select a shared folder, describo cannot interact with it. **To Reproduce** Steps to reproduce the behavior: 1. Share a folder 2. open this folder in describo 3. See error in console **Expected behavior** Describo opens the folder and uses the metadata file. <!-- In general --> **Additional context** This bug is a bumper, because the most research projects will be shared with collegues, so sciebo RDS is not usable at all, if this bug occurs.
priority
owncloud share interrupts describo load describe the bug if you select a shared folder describo cannot interact with it to reproduce steps to reproduce the behavior share a folder open this folder in describo see error in console expected behavior describo opens the folder and uses the metadata file additional context this bug is a bumper because the most research projects will be shared with collegues so sciebo rds is not usable at all if this bug occurs
1
601,763
18,430,866,382
IssuesEvent
2021-10-14 07:31:29
zowe/api-layer
https://api.github.com/repos/zowe/api-layer
closed
X.509 client certificate authentication support for API Mediation Layer
enhancement in progress Priority: High squad 20PI3 Objective 20PI4 Security Epic 21PI2 21PI3 21PI4
As a system admin / security admin, I want to allow Zowe users and client applications (such as Zowe clients and custom applications) to authenticate with Zowe API ML using client certificates (x.509) which are industry-proven as more secure than credential authentication. **Deliverable**: Zowe API ML can validate client certificates by using ESM to map the certificate with the user mainframe identity and issue a JWT.
1.0
X.509 client certificate authentication support for API Mediation Layer - As a system admin / security admin, I want to allow Zowe users and client applications (such as Zowe clients and custom applications) to authenticate with Zowe API ML using client certificates (x.509) which are industry-proven as more secure than credential authentication. **Deliverable**: Zowe API ML can validate client certificates by using ESM to map the certificate with the user mainframe identity and issue a JWT.
priority
x client certificate authentication support for api mediation layer as a system admin security admin i want to allow zowe users and client applications such as zowe clients and custom applications to authenticate with zowe api ml using client certificates x which are industry proven as more secure than credential authentication deliverable zowe api ml can validate client certificates by using esm to map the certificate with the user mainframe identity and issue a jwt
1