| Column | Type | Range / distinct values |
|--------|------|-------------------------|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 5 to 112 |
| repo_url | stringlengths | 34 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 855 |
| labels | stringlengths | 4 to 721 |
| body | stringlengths | 1 to 261k |
| index | stringclasses | 13 values |
| text_combine | stringlengths | 96 to 261k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 240k |
| binary_label | int64 | 0 to 1 |
741,106
25,779,706,755
IssuesEvent
2022-12-09 14:56:03
OregonDigital/OD2
https://api.github.com/repos/OregonDigital/OD2
closed
New Collection creation should allow setting of ID value
Metadata Priority - High Features Ready for Development
### Descriptive summary

Creating a new Digital Collection should allow the creator to specify the ID value. We've created all existing collections manually with specified pids, and don't want to start using opaque IDs. These are very visible parts of the URL for linked and advertised collections. Examples: `or-latino-herit` and `orange-owl`

Since this is already supported from the console when creating a collection, this should just require the form update and attribute handling; it shouldn't need much new code to support.

### Expected behavior

The ID value should be required, validated, and restricted to only `a-z`, `0-9`, `-`, and `_`. This should also be allowed when creating 'OAI Set' collection types. The Metadata Team was thinking not for User Collections, though; that box can stay hidden.

### Accessibility Concerns

n/a
1.0
priority
1
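The validation rule proposed in the issue above (required, non-empty, restricted to `a-z`, `0-9`, `-`, and `_`) can be sketched with a regular expression. This is an illustrative sketch, not OD2's actual implementation; the helper name `valid_collection_id` is hypothetical.

```python
import re

# Hypothetical helper illustrating the proposed rule: IDs must be non-empty
# and contain only lowercase letters, digits, hyphens, and underscores.
ID_PATTERN = re.compile(r"\A[a-z0-9_-]+\Z")

def valid_collection_id(candidate: str) -> bool:
    """Return True when the candidate satisfies the proposed ID restrictions."""
    return bool(ID_PATTERN.match(candidate))
```

For example, `valid_collection_id("or-latino-herit")` accepts, while an empty string or an ID containing spaces or uppercase letters is rejected.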
164,608
6,229,756,292
IssuesEvent
2017-07-11 05:34:07
GeekyAnts/NativeBase
https://api.github.com/repos/GeekyAnts/NativeBase
closed
Input value and floating label
4 high priority
Hi, I'm using a floating label on a simple input. When I pass a value to this component, the label overlaps the value. If I focus the input and press something, the label floats correctly. I tried autoFocus, but as soon as I leave the input, the label overlaps the value again. ![image](https://cloud.githubusercontent.com/assets/18454608/23803451/5bbb30f8-0595-11e7-9827-49cc65abcef5.png) ![image](https://cloud.githubusercontent.com/assets/18454608/23803344/f9053e86-0594-11e7-8a75-9076159e398c.png) "native-base": "^2.0.12", "react-native": "0.42.0", "react": "15.4.2"
1.0
priority
1
422,364
12,270,564,322
IssuesEvent
2020-05-07 15:41:32
smartdevicelink/sdl_java_suite
https://api.github.com/repos/smartdevicelink/sdl_java_suite
closed
Vehicle Data classes are not scalable
high priority
There are currently 25 supported vehicle data types, with demand to add even more in the future. However, the current vehicle data RPC classes contain a distinct setter/getter method for each individual vehicle data type, which means each RPC related to vehicle data needs to define 50 separate methods. As we add more vehicle data types, 2 additional methods (a setter & getter) will need to be added to each RPC. A better way would be to define all vehicle data types in an enum and provide a generic hash map that maps the vehicle data enum to its associated object (depending on the specific RPC in question). In this design, adding a new type of vehicle data only requires that it be added to the enum.
1.0
priority
1
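The enum-plus-map design described in the issue above can be sketched as follows (in Python rather than the project's Java, with a made-up subset of vehicle data types; `GenericVehicleDataStore` is a hypothetical name, not an sdl_java_suite class):

```python
from enum import Enum

class VehicleDataType(Enum):
    # Illustrative subset only; the real RPC spec defines 25+ types.
    SPEED = "speed"
    RPM = "rpm"
    FUEL_LEVEL = "fuelLevel"

class GenericVehicleDataStore:
    """One generic setter/getter keyed by the enum, instead of a distinct
    method pair per vehicle data type on every RPC class."""

    def __init__(self):
        self._values = {}

    def set_data(self, data_type: VehicleDataType, value):
        self._values[data_type] = value

    def get_data(self, data_type: VehicleDataType):
        # Returns None when that vehicle data type was never set.
        return self._values.get(data_type)
```

Adding a new vehicle data type then only requires a new enum member; no new methods are needed on any RPC class.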
707,311
24,301,901,726
IssuesEvent
2022-09-29 14:26:24
owncloud/ocis
https://api.github.com/repos/owncloud/ocis
closed
Wrong link generation for space & share link
Type:Bug Priority:p2-high
When inviting people to a space, there is a bug in the share link that is generated for the email: https://github.com/owncloud/ocis/blob/a5521161f8c5ec60186db3d6d0b06cd420b2915e/services/notifications/pkg/service/service.go#L159 The same applies to shares: https://github.com/owncloud/ocis/blob/a5521161f8c5ec60186db3d6d0b06cd420b2915e/services/notifications/pkg/service/service.go#L272 It uses the `.Idp` URL instead of the instance URL.
1.0
priority
1
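The fix implied by the report above is to root notification links at the instance's public URL rather than the identity provider's `.Idp` URL. A minimal sketch of the idea, with a hypothetical path layout (the `/f/<id>` route is a placeholder, not ocis's actual route):

```python
def share_notification_link(instance_url: str, resource_id: str) -> str:
    # The essential point from the issue: build the link from the instance
    # URL, not from the identity provider (IdP) URL. The "/f/<id>" path
    # here is a placeholder, not the real ocis route.
    return f"{instance_url.rstrip('/')}/f/{resource_id}"
```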
200,274
7,005,106,120
IssuesEvent
2017-12-19 00:01:48
leo-project/leofs
https://api.github.com/repos/leo-project/leofs
reopened
Opening AVS from different paths in parallel for faster startup
Improve Priority-HIGH survey v1.4 _leo_object_storage _leo_storage
I wonder if it's possible to make storage nodes open AVS files from different directories in parallel during startup (when having more than one AVS directory)? Example of the problem: I was restarting nodes one by one, and each node took 10-15 minutes to start up:

```
ноя 16 19:40:36 stor01.selectel.cloud.lan systemd[1]: Starting LeoFS storage node...
ноя 16 19:40:36 stor01.selectel.cloud.lan leo_storage[2649]: Config path: /etc/leofs/leo_storage
ноя 16 19:40:36 stor01.selectel.cloud.lan leo_storage[2649]: Exec: /usr/local/leofs/current/leo_storage/erts-8.3.5.3/bin/erlexec -noinput -boot /usr/local/leofs/current/leo_storage/releases/1/leo_storage -mode minimal -config /etc/leofs/leo_storage/app.config -args_file /etc/leofs/leo_storage/vm.args -- console
ноя 16 19:40:36 stor01.selectel.cloud.lan leo_storage[2649]: Root: /usr/local/leofs/current/leo_storage
ноя 16 19:40:37 stor01.selectel.cloud.lan leo_storage[2649]: =INFO REPORT==== 16-Nov-2017::19:40:37 ===
ноя 16 19:40:37 stor01.selectel.cloud.lan leo_storage[2649]: Setup Lager Logger API
ноя 16 19:40:38 stor01.selectel.cloud.lan leo_storage[2649]: id:leo_diagnosis_log_63, path:"/mnt/avs1/bodies/log/", filename:"leo_object_storage_63"
ноя 16 19:40:38 stor01.selectel.cloud.lan leo_storage[2649]: * opening log file is /mnt/avs1/bodies/log/leo_object_storage_63.20171116.19.2
...
ноя 16 19:43:44 stor01.selectel.cloud.lan leo_storage[2649]: id:leo_diagnosis_log_0, path:"/mnt/avs1/bodies/log/", filename:"leo_object_storage_0"
ноя 16 19:43:44 stor01.selectel.cloud.lan leo_storage[2649]: * opening log file is /mnt/avs1/bodies/log/leo_object_storage_0.20171116.19.2
ноя 16 19:43:44 stor01.selectel.cloud.lan leo_storage[2649]: id:leo_diagnosis_log_10063, path:"/mnt/avs2/bodies/log/", filename:"leo_object_storage_10063"
ноя 16 19:43:44 stor01.selectel.cloud.lan leo_storage[2649]: * opening log file is /mnt/avs2/bodies/log/leo_object_storage_10063.20171116.19.2
...
ноя 16 19:51:51 stor01.selectel.cloud.lan leo_storage[2649]: id:leo_diagnosis_log_30000, path:"/mnt/avs4/bodies/log/", filename:"leo_object_storage_30000"
ноя 16 19:51:51 stor01.selectel.cloud.lan leo_storage[2649]: * opening log file is /mnt/avs4/bodies/log/leo_object_storage_30000.20171116.19.2
ноя 16 19:51:57 stor01.selectel.cloud.lan systemd[1]: Started LeoFS storage node.
```

It could've been 4 times faster if different AVS paths were loaded at the same time. This is only important when restarting a node once; if I restart them again right away without any load, the .log files in the metadata/ directory (which are opened very slowly and are the reason for this problem) are compacted and zeroed, and the second restart is very fast (or it could be that it would've been fast even if they weren't compacted, as their contents will be kept in the OS disk cache for some time).

#### About origins of the problem

This is unrelated to the question about parallel open; I just wanted to check why it's so slow. Each log takes from one second to 5-6 seconds to open, which is an IO-bound operation on HDD:

```
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0,00 45,00 1277,00 25,00 5560,00 262,00 8,94 142,44 107,31 108,19 62,20 0,77 100,00
```

The problem here is that write logs are highly fragmented. This was performed on the drive holding AVS right after stopping the node but before starting it (no IO load on this drive other than from this command):

```
[vm@stor02 metadata]$ for a in ?????/*log; do ls -l $a; /usr/bin/time cat $a > /dev/null; done
-rw-r--r--. 1 leofs leofs 21000224 ноя 16 19:58 30000/000008.log
0.00user 0.01system 0:04.21elapsed 0%CPU (0avgtext+0avgdata 716maxresident)k
41016inputs+0outputs (0major+224minor)pagefaults 0swaps
-rw-r--r--. 1 leofs leofs 21179598 ноя 16 19:58 30001/000008.log
0.00user 0.01system 0:03.32elapsed 0%CPU (0avgtext+0avgdata 720maxresident)k
41360inputs+0outputs (0major+224minor)pagefaults 0swaps
-rw-r--r--. 1 leofs leofs 21432876 ноя 16 19:58 30002/000008.log
0.00user 0.00system 0:00.37elapsed 1%CPU (0avgtext+0avgdata 720maxresident)k
41856inputs+0outputs (0major+224minor)pagefaults 0swaps
-rw-r--r--. 1 leofs leofs 20755621 ноя 16 19:58 30003/000008.log
0.00user 0.01system 0:03.26elapsed 0%CPU (0avgtext+0avgdata 720maxresident)k
40536inputs+0outputs (0major+224minor)pagefaults 0swaps
-rw-r--r--. 1 leofs leofs 20944444 ноя 16 19:58 30004/000008.log
0.00user 0.01system 0:04.08elapsed 0%CPU (0avgtext+0avgdata 720maxresident)k
```

This is related to #917; naturally, setting the write buffer size to a smaller value would reduce this problem. However, the real cause is fragmentation: xfs_bmap shows that each .log file has thousands of fragments, 8 or 16 512B blocks each (**EDIT**: it's in 512-byte blocks even though the filesystem uses normal 4K blocks; it must be just the way this tool shows it. In reality each fragment is 1 or 2 4K blocks):

```
[vm@stor03 metadata]$ xfs_bmap 30041/000008.log|head -n 6
30041/000008.log:
0: [0..15]: 2904666744..2904666759
1: [16..23]: 2904666840..2904666847
2: [24..31]: 2904667528..2904667535
3: [32..39]: 2904668488..2904668495
4: [40..47]: 2904669192..2904669199
[vm@stor03 metadata]$ xfs_bmap 30041/000008.log|wc -l
2228
```

(I've got even bigger numbers sometimes, like 4350 fragments per 19 MB file.) This is quite unlike .sst files:

```
[vm@stor03 metadata]$ xfs_bmap 30052/sst_3/000005.sst
30052/sst_3/000005.sst:
0: [0..54687]: 9441083368..9441138055
```

I understand that this happens due to the way .log files are written and synced. But I wonder if it's possible to call fallocate() or posix_fallocate() when creating a .log file (supplying the write buffer size as the maximum size)? This should force the filesystem to preallocate contiguous space for this file on disk and avoid this problem (at least on Ext4 and XFS; naturally it won't help CoW filesystems).

I think it might solve this problem; however, it's hard to predict how it would affect the performance of synced writes to this log in general. (Btw, I think that RocksDB should do this out of the box, though it needs checking whether it would actually work as expected here. I read that there were some plans to support it in the future in addition to LevelDB?)
1.0
priority
1
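The preallocation idea from the issue above (call posix_fallocate() at log creation, supplying the write-buffer size as the maximum size) can be sketched in Python. The function name `create_preallocated_log` is hypothetical, and `os.posix_fallocate` is only available on POSIX platforms:

```python
import os

def create_preallocated_log(path: str, max_size: int) -> None:
    """Create a log file with contiguous space reserved up front, so that
    small synced appends don't fragment it on Ext4/XFS."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    try:
        # Ask the filesystem to allocate max_size bytes starting at offset 0.
        os.posix_fallocate(fd, 0, max_size)
    finally:
        os.close(fd)
```

Whether the reserved extent actually stays contiguous depends on the filesystem; as the issue notes, CoW filesystems will still fragment on rewrite.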
233,274
7,695,900,278
IssuesEvent
2018-05-18 13:48:31
craftercms/craftercms
https://api.github.com/repos/craftercms/craftercms
closed
[studio-ui] Adding a remote repository to a site, then pushing to the remote does not work
bug priority: high
### Expected behavior

Adding a remote repository to a site (a bare git repo), then pushing to the remote should work.

### Actual behavior

Adding a remote repository to a site, then pushing to the remote does not work.

### Steps to reproduce the problem

* Create a site using website_editorial
* Create a bare git repo
* Go to **Site Config** -> **Remote Repositories**
* Click on **New Repository**, fill in the fields using the bare git repo created in the previous step
* Click on the **Push** icon; notice that it does not do anything, and there is nothing in the logs to indicate whether something went wrong.

If the remote repo is bare, the branches of the remote will be null; we should return the default configured branch for Studio, so when the user clicks on the **Push** icon, it should list that configured branch.

### Log/stack trace (use https://gist.github.com)

Here's a short clip showing the remote being added and then trying to push: https://www.useloom.com/share/30fad0f95104492896751f0fe39bcf3a

### Specs

#### Version

Studio Version Number: 3.0.12-SNAPSHOT-86b9d6
Build Number: 86b9d65008887a0fe2dfa085127d68ed66442515
Build Date/Time: 05-14-2018 13:55:05 -0400

#### OS

OS X

#### Browser

Chrome browser
1.0
priority
1
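The fix suggested in the report above (when the bare remote has no branches yet, fall back to Studio's default configured branch) amounts to a small guard. This sketch uses hypothetical names, not Studio's actual API:

```python
def branches_to_offer(remote_branches, default_branch: str = "master"):
    """Return the branch list to show in the Push dialog; fall back to the
    configured default branch when the bare remote reports none (null/empty)."""
    if not remote_branches:
        return [default_branch]
    return list(remote_branches)
```

With this guard, a freshly created bare repository still yields one pushable branch instead of a silent no-op.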
637,730
20,676,309,872
IssuesEvent
2022-03-10 09:37:12
dmwm/WMCore
https://api.github.com/repos/dmwm/WMCore
opened
MSUnmerged: Unordered checks for Rucio Consistency Monitor record age
BUG High Priority MSUnmerged
**Impact of the bug** MSUnmerged **Describe the bug** While looking at the service logs today, I have found a strange case: [1]. The explanation is quite simple. The RSE is not in status `done` at Rucio Consistency Monitor and hence the field at `self.rseConsStats[rseName]['end_time'] ` is `None`: [2]. While this is foreseen to happen and we do handle the situation few lines bellow the problematic [one](https://github.com/dmwm/WMCore/blob/80212e6765ed460ac45c5f8de0fb2d672a51f735/src/python/WMCore/MicroService/MSUnmerged/MSUnmerged.py#L501-L508), it is the early creation of the boolean flag `isConsNewer` at line [483](https://github.com/dmwm/WMCore/blob/80212e6765ed460ac45c5f8de0fb2d672a51f735/src/python/WMCore/MicroService/MSUnmerged/MSUnmerged.py#L483) that triggers the exception. The end result is not fatal, though. The RSE is still skipped as expected, but it is only that it exits the pipeline with a general exception instead of the normal way as it should be [3], and the timestamps of the RSE are not updated at MongoDB. Since this is an RSE with already stale record, once its status is changed to `done` in Rucio Consistency Monitor it will be safely retried in MSUnmerged. So no harm is done so far. But we better fix that behavior. **How to reproduce it** Try to run the MSUnmerged in standalone mode against an RSE which is not in status `done` at Rucio Consistency Monitor. **Expected behavior** Exit the pipeline through the normal mechanism and log the latest timestamps for this RSE in MongoDB. **Additional context and error message** [1] ``` 2022-03-10 10:05:32,698:ERROR:MSUnmerged:_execute(): plineUnmerged: General error from pipeline. RSE: T2_US_Vanderbilt. Error: '>' not supported between instances of 'NoneType' and 'float' Will retry again in the next cycle. 
Traceback (most recent call last): File "/afs/cern.ch/user/t/tivanov/WMCoreDev.d/WMCore/src/python/WMCore/MicroService/MSUnmerged/MSUnmerged.py", line 269, in _execute pline.run(MSUnmergedRSE(rseName)) File "/afs/cern.ch/user/t/tivanov/WMCoreDev.d/WMCore/src/python/Utils/Pipeline.py", line 140, in run return reduce(lambda obj, functor: functor(obj), self.funcLine, obj) File "/afs/cern.ch/user/t/tivanov/WMCoreDev.d/WMCore/src/python/Utils/Pipeline.py", line 140, in <lambda> return reduce(lambda obj, functor: functor(obj), self.funcLine, obj) File "/afs/cern.ch/user/t/tivanov/WMCoreDev.d/WMCore/src/python/Utils/Pipeline.py", line 72, in __call__ return self.run(obj) File "/afs/cern.ch/user/t/tivanov/WMCoreDev.d/WMCore/src/python/Utils/Pipeline.py", line 75, in run return self.func(obj, *self.args, **self.kwargs) File "/afs/cern.ch/user/t/tivanov/WMCoreDev.d/WMCore/src/python/WMCore/MicroService/MSUnmerged/MSUnmerged.py", line 487, in consRecordAge isConsNewer = self.rseConsStats[rseName]['end_time'] > self.rseTimestamps[rseName]['prevStartTime'] TypeError: '>' not supported between instances of 'NoneType' and 'float' ``` [2] ``` 2022-03-10 10:05:32,698:INFO:MSUnmerged:consRecordAge(): rseConsStats[T2_US_Vanderbilt]: {'end_time': None, 'files': None, 'roots': [], 'rse': 'T2_US_Vanderbilt', 'run': '2022_03_07_08_19', 'scanner': {'type': 'xrootd', 'version': '2.0'}, 'scanning': {'ignore_subdirectories': ['logs', 'SAM/testSRM'], 'max_scanners': 8, 'recursive_threshold': 1, 'root': 'unmerged', 'start_time': 1646641162.9798942, 'timeout': 180}, 'server': 'xrootd-vanderbilt.sites.opensciencegrid.org', 'server_root': '/store', 'start_time': 1646641162.9397197, 'status': 'started', 'total_size_gb': None} 2022-03-10 10:05:32,698:INFO:MSUnmerged:consRecordAge(): rseTimestamps[T2_US_Vanderbilt]: {'endTime': 0.0, 'prevEndTime': 0.0, 'prevStartTime': 0.0, 'rseConsStatTime': 0.0, 'startTime': 1646903132.698025} ``` [3] ``` 2022-03-10 10:11:45,262:WARNING:MSUnmerged:_execute(): 
plineUnmerged: Run on RSE: T2_US_Vanderbilt was interrupted due to: MSUnmergedException: MSUnmergedPlineExit: RSE: T2_US_Vanderbilt In non-final state in Rucio Consistency Monitor. Skipping it in the current run. Will retry again in the next cycle. ```
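The fix amounts to checking `end_time` for `None` before the early comparison that builds `isConsNewer`, so a non-`done` RSE can exit through the normal skip path instead of raising `TypeError`. A minimal, self-contained sketch of that guard (the function name and dictionary shapes below mirror the log snippets for illustration; this is not the actual WMCore code):

```python
# Sketch: guard the record-age check so a missing (None) 'end_time'
# is treated as "record not newer" instead of raising
# TypeError: '>' not supported between instances of 'NoneType' and 'float'.

def cons_record_is_newer(cons_stats, timestamps, rse_name):
    """Return True only when the Consistency Monitor record carries a
    valid end_time newer than the previous MSUnmerged start time."""
    end_time = cons_stats.get(rse_name, {}).get('end_time')
    if end_time is None:
        # RSE not yet in status 'done' at the Consistency Monitor:
        # report "not newer" so the caller can skip it gracefully.
        return False
    return end_time > timestamps[rse_name]['prevStartTime']


rse_cons_stats = {'T2_US_Vanderbilt': {'end_time': None, 'status': 'started'}}
rse_timestamps = {'T2_US_Vanderbilt': {'prevStartTime': 0.0}}

print(cons_record_is_newer(rse_cons_stats, rse_timestamps, 'T2_US_Vanderbilt'))  # False
```

In the service itself the `None` case would presumably raise the existing `MSUnmergedPlineExit` (as in [3]) after updating the RSE timestamps, rather than just returning `False`; the point here is only that the `None` check must precede the comparison.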
1.0
MSUnmerged: Unordered checks for Rucio Consistency Monitor record age
priority
1
8,808
2,605,238,656
IssuesEvent
2015-02-25 05:03:33
neocities/neocities
https://api.github.com/repos/neocities/neocities
closed
Improve site profile header
high priority
- There are only two [big number, small label] elements in the header since we don't have tipping yet and it looks awkward. Maybe add number of updates here.
- "0 follower" should be "0 followers"
- Share links bring up the mobile versions of the sites when I'm on desktop. The Twitter link doesn't open in a new window.
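The "0 follower" item is a one-line, count-based pluralization fix. A hypothetical helper (shown in Python purely for illustration; the neocities site itself is a Ruby app, and this is not its code) might look like:

```python
# Hypothetical pluralization helper illustrating the "0 follower" -> "0 followers" fix.

def pluralize(count, singular, plural=None):
    """Return '<count> <word>' using the plural form for any count other than 1."""
    if plural is None:
        plural = singular + 's'  # naive default; pass `plural` for irregular nouns
    word = singular if count == 1 else plural
    return f"{count} {word}"


print(pluralize(0, 'follower'))  # 0 followers
print(pluralize(1, 'follower'))  # 1 follower
print(pluralize(3, 'update'))    # 3 updates
```

The key detail is that English uses the plural for zero ("0 followers"), so the check is `count == 1`, not `count >= 1`.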
1.0
priority
1
140,252
5,399,325,035
IssuesEvent
2017-02-27 19:10:27
vmware/vic
https://api.github.com/repos/vmware/vic
closed
Failed to write to image store / http: panic serving <ip:port> runtime error: invalid memory address or nil pointer dereference when pulling docker image
component/imagec kind/bug kind/customer-found priority/high
Running VIC GA, trying to pull docker image of any significant size root@hostname:~# docker -l debug -H IP:2376 --tls pull mysql Using default tag: latest Pulling from library/mysql 5040bd298390: Already exists a3ed95caeb02: Pull complete 55370df68315: Pull complete fad5195d69cc: Pull complete a1034a5fbbfc: Extracting [==================================================>] 114 B/114 B 17f3570b42ae: Download complete 6bf4b16e5339: Download complete 9700c9731729: Download complete f2fea9c5b632: Download complete 2f8101f5336d: Download complete 0dc8f8a1031a: Download complete a1b9627588c7: Download complete Failed to write to image store: Post http://127.0.0.1:2377/storage/42110c2c-b135-9684-da61-3c048bc6741b?image_id=cdffeafa2fc29fe77b2d2160c9aa6afec302ec8c8aad809e519a0c82082bd2d0&metadatakey=metaData&metadataval=%7B%22id%22%3A%22cdffeafa2fc29fe77b2d2160c9aa6afec302ec8c8aad809e519a0c82082bd2d0%22%2C%22parent%22%3A%22c91cff46a946e10efe9b4ae882a8165881cd65920ecaca79b4a8c8bb576970ca%22%2C%22created%22%3A%222017-01-17T17%3A16%3A36.095838769Z%22%2C%22container_config%22%3A%7B%22Cmd%22%3A%5B%22%2Fbin%2Fsh+-c+mkdir+%2Fdocker-entrypoint-initdb.d%22%5D%7D%7D&parent_id=c91cff46a946e10efe9b4ae882a8165881cd65920ecaca79b4a8c8bb576970ca&sum=sha256%3Aa1034a5fbbfc91073603066f2801c0705ed3b2ac81976f560b4854a5a1262f29: read tcp 127.0.0.1:57586->127.0.0.1:2377: read: connection reset by peer **Details:** Attaching port-layer log. 
[port-layer - redacted.txt](https://github.com/vmware/vic/files/758258/port-layer.-.redacted.txt) ``` time=2017-02-03T18:40:00.387416003Z level=debug msg=op=250.206 (delta:38.982µs): Getting image 462d7794600a87463c4e2f905b4e026c6267da74aa906a75254611b919f35c3b from http://VCHtest/storage/images/42110c2c-b135-9684-da61-3c048bc6741b time=2017-02-03T18:40:00.387486147Z level=debug msg=op=250.206 (delta:108.956µs): Getting image 47b860004a0c5ad9434d93b93f69309057c1e532835256c2426ad02786791e0d from http://VCHtest/storage/images/42110c2c-b135-9684-da61-3c048bc6741b time=2017-02-03T18:40:00.387537976Z level=debug msg=op=250.206 (delta:160.974µs): Image 47b860004a0c5ad9434d93b93f69309057c1e532835256c2426ad02786791e0d not in cache, retreiving from datastore time=2017-02-03T18:40:00.387565933Z level=debug msg=[BEGIN] [github.com/vmware/vic/lib/portlayer/storage/vsphere.(*ImageStore).GetImage:445] http://VCHtest/storage/images/42110c2c-b135-9684-da61-3c048bc6741b time=2017-02-03T18:40:00.421307222Z level=debug msg=[ END ] [github.com/vmware/vic/lib/portlayer/storage/vsphere.(*ImageStore).GetImage:445] [33.737813ms] http://VCHtest/storage/images/42110c2c-b135-9684-da61-3c048bc6741b time="2017-02-03T18:40:00Z" level=info msg="Saving parent map (42110c2c-b135-9684-da61-3c048bc6741b/parentMap)" time="2017-02-03T18:40:00Z" level=info msg="Moving [TEST-VMFS02] VCHtest/VIC/42110c2c-b135-9684-da61-3c048bc6741b/parentMap.tmp to [TEST-VMFS02] VCHtest/VIC/42110c2c-b135-9684-da61-3c048bc6741b/parentMap" 2017/02/03 18:40:00 http: panic serving IP:58380: runtime error: invalid memory address or nil pointer dereference goroutine 66053 [running]: net/http.(*conn).serve.func1(0xc4209eb580) /usr/local/go/src/net/http/server.go:1491 +0x12a panic(0xec2640, 0xc420010050) /usr/local/go/src/runtime/panic.go:458 +0x243 github.com/vmware/vic/vendor/github.com/vmware/govmomi/task.(*taskCallback).fn(0xc4205d73c0, 0x0, 0x0, 0x0, 0xc4205d77e0) 
/drone/src/github.com/vmware/vic/vendor/github.com/vmware/govmomi/task/wait.go:74 +0x29f github.com/vmware/vic/vendor/github.com/vmware/govmomi/task.(*taskCallback).(github.com/vmware/vic/vendor/github.com/vmware/govmomi/task.fn)-fm(0x0, 0x0, 0x0, 0x0) /drone/src/github.com/vmware/vic/vendor/github.com/vmware/govmomi/task/wait.go:121 +0x48 github.com/vmware/vic/vendor/github.com/vmware/govmomi/property.Wait.func1(0xc420a75b48, 0x4, 0xc420a75b60, 0xa, 0x0, 0x0, 0x0, 0x0) /drone/src/github.com/vmware/vic/vendor/github.com/vmware/govmomi/property/wait.go:69 +0x43 github.com/vmware/vic/vendor/github.com/vmware/govmomi/property.waitLoop(0x1ac0520, 0xc4207b6a40, 0xc420707d10, 0xc420c28e68, 0x0, 0x0) /drone/src/github.com/vmware/vic/vendor/github.com/vmware/govmomi/property/wait.go:142 +0x1e5 github.com/vmware/vic/vendor/github.com/vmware/govmomi/property.Wait(0x1ac0520, 0xc4207b6a40, 0xc420c28fd8, 0xc420a751b0, 0x4, 0xc420a751d0, 0xa, 0xc420a75200, 0x1, 0x1, ...) /drone/src/github.com/vmware/vic/vendor/github.com/vmware/govmomi/property/wait.go:70 +0x30a github.com/vmware/vic/vendor/github.com/vmware/govmomi/task.Wait(0x1ac0520, 0xc4207b6a40, 0xc420a751b0, 0x4, 0xc420a751d0, 0xa, 0xc420c28fd8, 0x0, 0x0, 0x0, ...) 
/drone/src/github.com/vmware/vic/vendor/github.com/vmware/govmomi/task/wait.go:121 +0x152 github.com/vmware/vic/vendor/github.com/vmware/govmomi/object.(*Task).WaitForResult(0xc4209a61c0, 0x1ac0520, 0xc4207b6a40, 0x0, 0x0, 0x0, 0x1, 0xc42062f090) /drone/src/github.com/vmware/vic/vendor/github.com/vmware/govmomi/object/task.go:52 +0x135 github.com/vmware/vic/pkg/vsphere/tasks.WaitForResult(0x1ac0520, 0xc4207b6a40, 0xc420c29258, 0xc420c291f8, 0x2, 0x2) /drone/src/github.com/vmware/vic/pkg/vsphere/tasks/waiter.go:67 +0x2ca github.com/vmware/vic/pkg/vsphere/tasks.Wait(0x1ac0520, 0xc4207b6a40, 0xc420c29258, 0x2, 0x2) /drone/src/github.com/vmware/vic/pkg/vsphere/tasks/waiter.go:49 +0x3f github.com/vmware/vic/pkg/vsphere/datastore.(*Helper).Mv(0xc420297d10, 0x1ac0520, 0xc4207b6a40, 0xc4207b6880, 0x32, 0xc4203a1b30, 0x2e, 0x0, 0x0) /drone/src/github.com/vmware/vic/pkg/vsphere/datastore/datastore.go:239 +0x317 github.com/vmware/vic/lib/portlayer/storage/vsphere.(*parentM).Save(0xc4203a1ad0, 0x1abee60, 0xc4203a0c30, 0xc420853980, 0x1, 0x1, 0xc42074f050, 0x7, 0x0, 0x0) /drone/src/github.com/vmware/vic/lib/portlayer/storage/vsphere/parent.go:110 +0x4e3 github.com/vmware/vic/lib/portlayer/storage/vsphere.(*ImageStore).WriteImage(0xc4202d0200, 0x1abee60, 0xc4203a0c30, 0xc420853980, 0x1, 0x1, 0xc42074f050, 0x7, 0xc4203a0cf0, 0xc4209def3c, ...) /drone/src/github.com/vmware/vic/lib/portlayer/storage/vsphere/image.go:249 +0x16f github.com/vmware/vic/lib/portlayer/storage.(*NameLookupCache).WriteImage(0xc4202d0240, 0x1abee60, 0xc4203a0c30, 0xc420853980, 0x1, 0x1, 0xc42074f050, 0x7, 0xc420c297c8, 0xc4209def3c, ...) /drone/src/github.com/vmware/vic/lib/portlayer/storage/image_cache.go:218 +0x1f3 github.com/vmware/vic/lib/apiservers/portlayer/restapi/handlers.(*StorageHandlersImpl).WriteImage(0xc420445450, 0xc420b19860, 0x1abc0e0, 0xc420853840, 0xc4209def3c, 0x40, 0xc42074efc0, 0xc42074efd0, 0xc4209df163, 0x40, ...) 
/drone/src/github.com/vmware/vic/lib/apiservers/portlayer/restapi/handlers/storage_handlers.go:277 +0x3dc github.com/vmware/vic/lib/apiservers/portlayer/restapi/handlers.(*StorageHandlersImpl).WriteImage-fm(0xc420b19860, 0x1abc0e0, 0xc420853840, 0xc4209def3c, 0x40, 0xc42074efc0, 0xc42074efd0, 0xc4209df163, 0x40, 0xc4209def0e, ...) /drone/src/github.com/vmware/vic/lib/apiservers/portlayer/restapi/handlers/storage_handlers.go:107 +0x65 github.com/vmware/vic/lib/apiservers/portlayer/restapi/operations/storage.WriteImageHandlerFunc.Handle(0xc42085b610, 0xc420b19860, 0x1abc0e0, 0xc420853840, 0xc4209def3c, 0x40, 0xc42074efc0, 0xc42074efd0, 0xc4209df163, 0x40, ...) /drone/src/github.com/vmware/vic/lib/apiservers/portlayer/restapi/operations/storage/write_image.go:17 +0x59 github.com/vmware/vic/lib/apiservers/portlayer/restapi/operations/storage.(*WriteImage).ServeHTTP(0xc4208432c0, 0x1abe220, 0xc420f9add0, 0xc420b19860) /drone/src/github.com/vmware/vic/lib/apiservers/portlayer/restapi/operations/storage/write_image.go:51 +0x276 github.com/vmware/vic/vendor/github.com/go-swagger/go-swagger/httpkit/middleware.newOperationExecutor.func1(0x1abe220, 0xc420f9add0, 0xc420b19860) /drone/src/github.com/vmware/vic/vendor/github.com/go-swagger/go-swagger/httpkit/middleware/operation.go:23 +0x69 net/http.HandlerFunc.ServeHTTP(0xc42085bc30, 0x1abe220, 0xc420f9add0, 0xc420b19860) /usr/local/go/src/net/http/server.go:1726 +0x44 github.com/vmware/vic/vendor/github.com/go-swagger/go-swagger/httpkit/middleware.newRouter.func1(0x1abe220, 0xc420f9add0, 0xc420b19860) /drone/src/github.com/vmware/vic/vendor/github.com/go-swagger/go-swagger/httpkit/middleware/router.go:78 +0x555 net/http.HandlerFunc.ServeHTTP(0xc4205cd700, 0x1abe220, 0xc420f9add0, 0xc420b19860) /usr/local/go/src/net/http/server.go:1726 +0x44 github.com/vmware/vic/vendor/github.com/go-swagger/go-swagger/httpkit/middleware.specMiddleware.func1(0x1abe220, 0xc420f9add0, 0xc420b19860) 
/drone/src/github.com/vmware/vic/vendor/github.com/go-swagger/go-swagger/httpkit/middleware/spec.go:36 +0x7b net/http.HandlerFunc.ServeHTTP(0xc42050ebe0, 0x1abe220, 0xc420f9add0, 0xc420b19860) /usr/local/go/src/net/http/server.go:1726 +0x44 net/http.serverHandler.ServeHTTP(0xc42024ee00, 0x1abe220, 0xc420f9add0, 0xc420b19860) /usr/local/go/src/net/http/server.go:2202 +0x7d net/http.(*conn).serve(0xc4209eb580, 0x1abeda0, 0xc420b33440) /usr/local/go/src/net/http/server.go:1579 +0x4b7 created by net/http.(*Server).Serve /usr/local/go/src/net/http/server.go:2293 +0x44d ``` Reproducible always. Tried recreating the VCH VM from scratch. Same problem. I cannot reliably pull a docker image. Always happens during the extract. Plenty of storage available. No network issues otherwise.
1.0
priority
1
326,066
9,942,388,009
IssuesEvent
2019-07-03 13:48:08
python/mypy
https://api.github.com/repos/python/mypy
closed
New Semantic Analyzer: INTERNAL ERROR on aiohttp
crash new-semantic-analyzer priority-0-high topic-attrs
This is only a potential bug report, as I found that upgrading the triggering library solves the issue. When analyzing code with the new semantic analyzer, `mypy` fails processing the `client` module of `aiohttp` (v3.4.4). The difference between the affected version and the latest code is how they annotate the dataclasses using `attr.ib`. In v3.4.4 they have `total = attr.ib(type=float, default=None)`, while on v3.5.4 they use `total = attr.ib(type=Optional[float], default=None)`. Compare https://github.com/aio-libs/aiohttp/blob/v3.4.4/aiohttp/client.py#L49-L53 and https://github.com/aio-libs/aiohttp/blob/v3.5.4/aiohttp/client.py#L132-L136 Below is the output of the run with the latest development version ``` /Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/aiohttp/client.py:49: error: INTERNAL ERROR -- Please try using mypy master on Github: https://mypy.rtfd.io/en/latest/common_issues.html#using-a-development-mypy-build Please report a bug at https://github.com/python/mypy/issues version: 0.720+dev.262fe3dc4d5d14492c6fd4009d91c555c406ac04 Traceback (most recent call last): File "/Users/skreft/.virtualenvs/zapship/bin/mypy", line 10, in <module> sys.exit(console_entry()) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/__main__.py", line 8, in console_entry main(None, sys.stdout, sys.stderr) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/main.py", line 83, in main res = build.build(sources, options, None, flush_errors, fscache, stdout, stderr) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/build.py", line 164, in build result = _build(sources, options, alt_lib_path, flush_errors, fscache, stdout, stderr) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/build.py", line 224, in _build graph = dispatch(sources, manager, stdout) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/build.py", line 2567, in dispatch 
process_graph(graph, manager) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/build.py", line 2880, in process_graph process_stale_scc(graph, scc, manager) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/build.py", line 2978, in process_stale_scc semantic_analysis_for_scc(graph, scc, manager.errors) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal_main.py", line 74, in semantic_analysis_for_scc process_top_levels(graph, scc, patches) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal_main.py", line 193, in process_top_levels patches) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal_main.py", line 315, in semantic_analyze_target active_type=active_type) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 360, in refresh_partial self.refresh_top_level(node) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 371, in refresh_top_level self.accept(d) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 4435, in accept node.accept(self) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/nodes.py", line 923, in accept return visitor.visit_class_def(self) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 978, in visit_class_def self.analyze_class(defn) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 1049, in analyze_class self.analyze_class_body_common(defn) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 1058, in analyze_class_body_common self.apply_class_plugin_hooks(defn) File 
"/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 1103, in apply_class_plugin_hooks hook(ClassDefContext(defn, decorator, self)) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/plugins/attrs.py", line 214, in attr_class_maker_callback if info[attr.name].type is None and not ctx.api.final_iteration: File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/nodes.py", line 2413, in __getitem__ raise KeyError(name) KeyError: 'total' /Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/aiohttp/client.py:49: : note: use --pdb to drop into pdb ```
1.0
New Semantic Analyzer: INTERNAL ERROR on aiohttp - This is only a potential bug report, as I found that upgrading the triggering library solves the issue. When analyzing code with the new semantic analyzer, `mypy` fails processing the `client` module of `aiohttp` (v3.4.4). The difference between the affected version and the latest code is how they annotate the dataclasses using `attr.ib`. In v3.4.4 they have `total = attr.ib(type=float, default=None)`, while on v3.5.4 they use `total = attr.ib(type=Optional[float], default=None)`. Compare https://github.com/aio-libs/aiohttp/blob/v3.4.4/aiohttp/client.py#L49-L53 and https://github.com/aio-libs/aiohttp/blob/v3.5.4/aiohttp/client.py#L132-L136 Below is the output of the run with the latest development version ``` /Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/aiohttp/client.py:49: error: INTERNAL ERROR -- Please try using mypy master on Github: https://mypy.rtfd.io/en/latest/common_issues.html#using-a-development-mypy-build Please report a bug at https://github.com/python/mypy/issues version: 0.720+dev.262fe3dc4d5d14492c6fd4009d91c555c406ac04 Traceback (most recent call last): File "/Users/skreft/.virtualenvs/zapship/bin/mypy", line 10, in <module> sys.exit(console_entry()) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/__main__.py", line 8, in console_entry main(None, sys.stdout, sys.stderr) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/main.py", line 83, in main res = build.build(sources, options, None, flush_errors, fscache, stdout, stderr) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/build.py", line 164, in build result = _build(sources, options, alt_lib_path, flush_errors, fscache, stdout, stderr) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/build.py", line 224, in _build graph = dispatch(sources, manager, stdout) File 
"/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/build.py", line 2567, in dispatch process_graph(graph, manager) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/build.py", line 2880, in process_graph process_stale_scc(graph, scc, manager) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/build.py", line 2978, in process_stale_scc semantic_analysis_for_scc(graph, scc, manager.errors) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal_main.py", line 74, in semantic_analysis_for_scc process_top_levels(graph, scc, patches) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal_main.py", line 193, in process_top_levels patches) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal_main.py", line 315, in semantic_analyze_target active_type=active_type) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 360, in refresh_partial self.refresh_top_level(node) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 371, in refresh_top_level self.accept(d) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 4435, in accept node.accept(self) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/nodes.py", line 923, in accept return visitor.visit_class_def(self) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 978, in visit_class_def self.analyze_class(defn) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 1049, in analyze_class self.analyze_class_body_common(defn) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 1058, in analyze_class_body_common 
self.apply_class_plugin_hooks(defn) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/newsemanal/semanal.py", line 1103, in apply_class_plugin_hooks hook(ClassDefContext(defn, decorator, self)) File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/plugins/attrs.py", line 214, in attr_class_maker_callback if info[attr.name].type is None and not ctx.api.final_iteration: File "/Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/mypy/nodes.py", line 2413, in __getitem__ raise KeyError(name) KeyError: 'total' /Users/skreft/.virtualenvs/zapship/lib/python3.6/site-packages/aiohttp/client.py:49: : note: use --pdb to drop into pdb ```
priority
new semantic analyzer internal error on aiohttp this is only a potential bug report as i found that upgrading the triggering library solves the issue when analyzing code with the new semantic analyzer mypy fails processing the client module of aiohttp the difference between the affected version and the latest code is how they annotate the dataclasses using attr ib in they have total attr ib type float default none while on they use total attr ib type optional default none compare and below is the output of the run with the latest development version users skreft virtualenvs zapship lib site packages aiohttp client py error internal error please try using mypy master on github please report a bug at version dev traceback most recent call last file users skreft virtualenvs zapship bin mypy line in sys exit console entry file users skreft virtualenvs zapship lib site packages mypy main py line in console entry main none sys stdout sys stderr file users skreft virtualenvs zapship lib site packages mypy main py line in main res build build sources options none flush errors fscache stdout stderr file users skreft virtualenvs zapship lib site packages mypy build py line in build result build sources options alt lib path flush errors fscache stdout stderr file users skreft virtualenvs zapship lib site packages mypy build py line in build graph dispatch sources manager stdout file users skreft virtualenvs zapship lib site packages mypy build py line in dispatch process graph graph manager file users skreft virtualenvs zapship lib site packages mypy build py line in process graph process stale scc graph scc manager file users skreft virtualenvs zapship lib site packages mypy build py line in process stale scc semantic analysis for scc graph scc manager errors file users skreft virtualenvs zapship lib site packages mypy newsemanal semanal main py line in semantic analysis for scc process top levels graph scc patches file users skreft virtualenvs zapship lib site packages mypy 
newsemanal semanal main py line in process top levels patches file users skreft virtualenvs zapship lib site packages mypy newsemanal semanal main py line in semantic analyze target active type active type file users skreft virtualenvs zapship lib site packages mypy newsemanal semanal py line in refresh partial self refresh top level node file users skreft virtualenvs zapship lib site packages mypy newsemanal semanal py line in refresh top level self accept d file users skreft virtualenvs zapship lib site packages mypy newsemanal semanal py line in accept node accept self file users skreft virtualenvs zapship lib site packages mypy nodes py line in accept return visitor visit class def self file users skreft virtualenvs zapship lib site packages mypy newsemanal semanal py line in visit class def self analyze class defn file users skreft virtualenvs zapship lib site packages mypy newsemanal semanal py line in analyze class self analyze class body common defn file users skreft virtualenvs zapship lib site packages mypy newsemanal semanal py line in analyze class body common self apply class plugin hooks defn file users skreft virtualenvs zapship lib site packages mypy newsemanal semanal py line in apply class plugin hooks hook classdefcontext defn decorator self file users skreft virtualenvs zapship lib site packages mypy plugins attrs py line in attr class maker callback if info type is none and not ctx api final iteration file users skreft virtualenvs zapship lib site packages mypy nodes py line in getitem raise keyerror name keyerror total users skreft virtualenvs zapship lib site packages aiohttp client py note use pdb to drop into pdb
1
455,582
13,129,307,373
IssuesEvent
2020-08-06 13:42:45
JDesignEra/heartphoria
https://api.github.com/repos/JDesignEra/heartphoria
closed
First-Aid Kit Locations
High Priority backlog
Show the nearest First-aid kit on a map. Acceptance Criteria: + Class attributes include Latitude & Longitude.
1.0
First-Aid Kit Locations - Show the nearest First-aid kit on a map. Acceptance Criteria: + Class attributes include Latitude & Longitude.
priority
first aid kit locations show the nearest first aid kit on a map acceptance criteria class attributes includes latitude longtitude
1
718,528
24,721,170,837
IssuesEvent
2022-10-20 10:47:44
kubesphere/kubesphere
https://api.github.com/repos/kubesphere/kubesphere
closed
The role 'cluster-admin' can not create ws with this cluster
kind/bug priority/high
**Describe the Bug** There is an environment with a host and a member cluster. There is a user `test` whose platform role is `platform-self-provisioner`, and who was invited to the cluster host as `cluster-admin`. **Versions Used** KubeSphere: `v3.3.1-rc.4` **Environment** How many nodes and their hardware configuration: For example: CentOS 7.5 / 3 masters: 8cpu/8g; 3 nodes: 8cpu/16g (and other info is welcome to help us debug) **How To Reproduce** Steps to reproduce the behavior: 1. log in to KubeSphere as user `test` 2. create a workspace **Expected behavior** This user can select the cluster host when creating a workspace. **Actual behavior** This user cannot select any cluster.
1.0
The role 'cluster-admin' can not create ws with this cluster - **Describe the Bug** There is an environment with a host and a member cluster. There is a user `test` whose platform role is `platform-self-provisioner`, and who was invited to the cluster host as `cluster-admin`. **Versions Used** KubeSphere: `v3.3.1-rc.4` **Environment** How many nodes and their hardware configuration: For example: CentOS 7.5 / 3 masters: 8cpu/8g; 3 nodes: 8cpu/16g (and other info is welcome to help us debug) **How To Reproduce** Steps to reproduce the behavior: 1. log in to KubeSphere as user `test` 2. create a workspace **Expected behavior** This user can select the cluster host when creating a workspace. **Actual behavior** This user cannot select any cluster.
priority
the role cluster admin can not create ws with this cluster describe the bug there is a environment have host and member cluster there is a user test it s platform is platform self provisioner and was invited to cluster host as cluster admin versions used kubesphere rc environment how many nodes and their hardware configuration for example centos masters nodes and other info are welcomed to help us debugging how to reproduce steps to reproduce the behavior login ks use test create workspace expected behavior this user can select cluster host when create worksapce actual behavior this user can not select any cluster
1
133,382
5,202,071,614
IssuesEvent
2017-01-24 08:13:15
akvo/akvo-web
https://api.github.com/repos/akvo/akvo-web
closed
Slow loading first few seconds
1- Bug 3 - High priority
There are two issues when trying to load the homepage. 1. GET bx_loader.gif takes 4-5 seconds to look up, which results in a 404 ![screen shot 2017-01-10 at 15 28 25](https://cloud.githubusercontent.com/assets/2675262/21809798/78a09692-d749-11e6-8396-68e6d1522992.png) 2. GET admin_ajax.php takes well over 4 seconds to look up too ![screen shot 2017-01-10 at 15 29 18](https://cloud.githubusercontent.com/assets/2675262/21809829/a7c188d2-d749-11e6-99cc-939ce461ab12.png)
1.0
Slow loading first few seconds - There are two issues when trying to load the homepage. 1. GET bx_loader.gif takes 4-5 seconds to look up, which results in a 404 ![screen shot 2017-01-10 at 15 28 25](https://cloud.githubusercontent.com/assets/2675262/21809798/78a09692-d749-11e6-8396-68e6d1522992.png) 2. GET admin_ajax.php takes well over 4 seconds to look up too ![screen shot 2017-01-10 at 15 29 18](https://cloud.githubusercontent.com/assets/2675262/21809829/a7c188d2-d749-11e6-99cc-939ce461ab12.png)
priority
slow loading first few seconds there are two issues when trying to load the homepage get bx loader gif takes seconds to lookup which results in a get admin ajax php takes well over seconds to lookup too
1
377,522
11,172,406,528
IssuesEvent
2019-12-29 05:47:50
openmsupply/mobile
https://api.github.com/repos/openmsupply/mobile
opened
Add simple directions to a script item
Effort: small Feature Module: dispensary Priority: high
## Is your feature request related to a problem? Please describe. Cannot currently add any directions to items in a script. ## Describe the solution you'd like Ability to add directions to items in a script! ## Implementation N/A ## Describe alternatives you've considered N/A ## Additional context N/A
1.0
Add simple directions to a script item - ## Is your feature request related to a problem? Please describe. Cannot currently add any directions to items in a script. ## Describe the solution you'd like Ability to add directions to items in a script! ## Implementation N/A ## Describe alternatives you've considered N/A ## Additional context N/A
priority
add simple directions to a script item is your feature request related to a problem please describe cannot currently add any directions to items in a script describe the solution you d like ability to add direections to items in a script implementation n a describe alternatives you ve considered n a additional context n a
1
156,380
5,968,398,670
IssuesEvent
2017-05-30 18:01:55
datproject/dat-node
https://api.github.com/repos/datproject/dat-node
opened
pass url as key (shortnames, tld, etc.)
Priority: High Type: Enhancement
We should be able to pass a url as the key, using https://github.com/joehand/dat-link-resolve. See also: https://github.com/datproject/dat-desktop/issues/241
1.0
pass url as key (shortnames, tld, etc.) - We should be able to pass a url as the key, using https://github.com/joehand/dat-link-resolve. See also: https://github.com/datproject/dat-desktop/issues/241
priority
pass url as key shortnames tld etc we should be able to pass a url as the key using see also
1
462,428
13,247,180,810
IssuesEvent
2020-08-19 16:49:44
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
sanitycheck: incorrect calculation of total_skipped when --subset is set
bug priority: high
Commit e3ff4cfcd655b5a164770c3c293db4206126aba1 causes the following (Negative Number of tests): ``` ./scripts/sanitycheck --build-only --all -T samples/hello_world/ --subset 2/120 Renaming output directory to /home/galak/git/zephyr/sanity-out.12 INFO - Running only a subset: 2/120 INFO - JOBS: 64 INFO - Selecting all possible platforms per test case INFO - Building initial testcase list... INFO - 3 test configurations selected, 0 configurations discarded due to filters. INFO - Adding tasks to the queue... INFO - Total complete: 3/ -4 -75% skipped: 7, failed: 0 INFO - 3 of -4 tests passed (-75.00%), 0 failed, 7 skipped with 0 warnings in 5.17 seconds INFO - In total -4 test cases were executed on 269 out of total 272 platforms (98.90%) INFO - 0 tests executed on platforms, -4 tests were only built. ``` This is due to how total_skipped is calculated -- it needs to factor in subset when set.
1.0
sanitycheck: incorrect calculation of total_skipped when --subset is set - Commit e3ff4cfcd655b5a164770c3c293db4206126aba1 causes the following (Negative Number of tests): ``` ./scripts/sanitycheck --build-only --all -T samples/hello_world/ --subset 2/120 Renaming output directory to /home/galak/git/zephyr/sanity-out.12 INFO - Running only a subset: 2/120 INFO - JOBS: 64 INFO - Selecting all possible platforms per test case INFO - Building initial testcase list... INFO - 3 test configurations selected, 0 configurations discarded due to filters. INFO - Adding tasks to the queue... INFO - Total complete: 3/ -4 -75% skipped: 7, failed: 0 INFO - 3 of -4 tests passed (-75.00%), 0 failed, 7 skipped with 0 warnings in 5.17 seconds INFO - In total -4 test cases were executed on 269 out of total 272 platforms (98.90%) INFO - 0 tests executed on platforms, -4 tests were only built. ``` This is due to how total_skipped is calculated -- it needs to factor in subset when set.
priority
sanitycheck incorrect correct calculation of total skipped when subset is set commit causes the following negative number of tests scripts sanitycheck build only all t samples hello world subset renaming output directory to home galak git zephyr sanity out info running only a subset info jobs info selecting all possible platforms per test case info building initial testcase list info test configurations selected configurations discarded due to filters info adding tasks to the queue info total complete skipped failed info of tests passed failed skipped with warnings in seconds info in total test cases were executed on out of total platforms info tests executed on platforms tests were only built this is due to how total skipped is calculated it needs to factor in subset when set
1
561,576
16,619,451,971
IssuesEvent
2021-06-02 21:35:14
awslabs/aws-saas-boost
https://api.github.com/repos/awslabs/aws-saas-boost
closed
Updated application deploy fails in environments with deleted tenants
bug priority-high tenant-management workload-deployment
In SaaS Boost environments where you've deleted an onboarded tenant and then subsequently uploaded a new version of the application workload to ECR, deployment of that new version does not happen. ### Reproduction Steps Install SaaS Boost Upload an application image to ECR Onboard **2** tenants Delete 1 of the tenants Update the application, rebuild the Docker image, and push to ECR ### What did you expect to happen? Updated application image should be deployed to the remaining non-deleted tenants ### What actually happened? The CodePipeline for the non-deleted tenants never gets triggered ### Environment - **AWS Region :** - **AWS SaaS Boost Version :** - **Workload OS (Linux or Windows) :** ### Other The getProvisionedTenants call in the Tenant Service does not exclude deleted tenants. When we delete tenants, we remove all of their infrastructure (including the CodePipeline), but we retain a record in the tenant database with onboarding status == deleted. --- This is :bug: Bug Report
1.0
Updated application deploy fails in environments with deleted tenants - In SaaS Boost environments where you've deleted an onboarded tenant and then subsequently uploaded a new version of the application workload to ECR, deployment of that new version does not happen. ### Reproduction Steps Install SaaS Boost Upload an application image to ECR Onboard **2** tenants Delete 1 of the tenants Update the application, rebuild the Docker image, and push to ECR ### What did you expect to happen? Updated application image should be deployed to the remaining non-deleted tenants ### What actually happened? The CodePipeline for the non-deleted tenants never gets triggered ### Environment - **AWS Region :** - **AWS SaaS Boost Version :** - **Workload OS (Linux or Windows) :** ### Other The getProvisionedTenants call in the Tenant Service does not exclude deleted tenants. When we delete tenants, we remove all of their infrastructure (including the CodePipeline), but we retain a record in the tenant database with onboarding status == deleted. --- This is :bug: Bug Report
priority
updated application deploy fails in environments with deleted tenants in saas boost environments where you ve deleted an onboarded tenant and then subsequently upload a new version of the application workload to ecr deployment of that new version does not happen reproduction steps install saas boost upload an application image to ecr onboard tenants delete of the tenants update the application rebuild the docker image and push to ecr what did you expect to happen updated application image should be deployed to the remaining non deleted tenants what actually happened the codepipeline for the non deleted tenants never gets triggered environment aws region aws saas boost version workload os linux or windows other the getprovisionedtenants call in the tenant service does not exclude deleted tenants when we delete tenants we remove all of their infrastructure including the codepipline but we retain a record in the tenant database with onboarding status deleted this is bug bug report
1
711,301
24,457,378,448
IssuesEvent
2022-10-07 08:04:16
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
opened
[DocDB][Packed Columns] Corruption (yb/master/sys_catalog_writer.cc:219): Unable to initialize catalog manager: Failed to initialize sys tables async: Failed log replay. Reason: System catalog snapshot is corrupted or built using different build type: Found wrong metadata type: 0 vs 10
area/docdb priority/high status/awaiting-triage
### Description During an upgrade of my [puppy-food-arm-1 universe](https://portal.dev.yugabyte.com/universes/25bb1fc6-d74c-41ec-afa0-b0b331bdecc8/) from 2.15.4.0-b54 to 2.15.4.0-b72 the master server fails to come up after upgrade: ``` [yugabyte@ip logs]$ cat yb-master.FATAL.details.2022-10-07T07_13_16.pid3668968.txt F20221007 07:13:16 ../../src/yb/master/master_main.cc:136] Corruption (yb/master/sys_catalog_writer.cc:219): Unable to initialize catalog manager: Failed to initialize sys tables async: Failed log replay. Reason: System catalog snapshot is corrupted or built using different build type: Found wrong metadata type: 0 vs 10 @ 0xffff7ca59d7c (unknown) @ 0xffff7ca54874 (unknown) @ 0xffff7ca551ac (unknown) @ 0xffff7ca57938 (unknown) @ 0x21429c main @ 0xffff7b6e0de4 __libc_start_main @ 0x213784 (unknown) ``` It keeps failing like this every minute when trying to start master again. The upgrade failure is also about this master server: ``` Failed to execute task {"sleepAfterMasterRestartMillis":180000,"sleepAfterTServerRestartMillis":180000,"nodeExporterUser":"prometheus","universeUUID":"25bb1fc6-d74c-41ec-afa0-b0b331bdecc8","enableYbc":false,"installYbc":false,"ybcInstalled":false,"encryptionAtRestConfig":{"encryptionAtRestEnabled":false,"opType":"UNDEFINED","type":"DATA_KEY"},"communicationPorts":{"masterHttpPort":7000,"masterRpcPort":7100,"tserverHttpPort":9000,"tserverRpcPort":9100,"ybControllerHttpPort":14000,"ybControllerrRpcPort":18018,"redisS..., hit error: WaitForServer(25bb1fc6-d74c-41ec-afa0-b0b331bdecc8, yb-15-puppy-food-arm-1-n1, type=MASTER) did not respond in the set time.. ``` This is with packed columns enabled on YSQL and YCQL, tserver and master. I will leave the universe in the current state for further analysis, tell me when I can destroy and recreate it.
1.0
[DocDB][Packed Columns] Corruption (yb/master/sys_catalog_writer.cc:219): Unable to initialize catalog manager: Failed to initialize sys tables async: Failed log replay. Reason: System catalog snapshot is corrupted or built using different build type: Found wrong metadata type: 0 vs 10 - ### Description During an upgrade of my [puppy-food-arm-1 universe](https://portal.dev.yugabyte.com/universes/25bb1fc6-d74c-41ec-afa0-b0b331bdecc8/) from 2.15.4.0-b54 to 2.15.4.0-b72 the master server fails to come up after upgrade: ``` [yugabyte@ip logs]$ cat yb-master.FATAL.details.2022-10-07T07_13_16.pid3668968.txt F20221007 07:13:16 ../../src/yb/master/master_main.cc:136] Corruption (yb/master/sys_catalog_writer.cc:219): Unable to initialize catalog manager: Failed to initialize sys tables async: Failed log replay. Reason: System catalog snapshot is corrupted or built using different build type: Found wrong metadata type: 0 vs 10 @ 0xffff7ca59d7c (unknown) @ 0xffff7ca54874 (unknown) @ 0xffff7ca551ac (unknown) @ 0xffff7ca57938 (unknown) @ 0x21429c main @ 0xffff7b6e0de4 __libc_start_main @ 0x213784 (unknown) ``` It keeps failing like this every minute when trying to start master again. The upgrade failure is also about this master server: ``` Failed to execute task {"sleepAfterMasterRestartMillis":180000,"sleepAfterTServerRestartMillis":180000,"nodeExporterUser":"prometheus","universeUUID":"25bb1fc6-d74c-41ec-afa0-b0b331bdecc8","enableYbc":false,"installYbc":false,"ybcInstalled":false,"encryptionAtRestConfig":{"encryptionAtRestEnabled":false,"opType":"UNDEFINED","type":"DATA_KEY"},"communicationPorts":{"masterHttpPort":7000,"masterRpcPort":7100,"tserverHttpPort":9000,"tserverRpcPort":9100,"ybControllerHttpPort":14000,"ybControllerrRpcPort":18018,"redisS..., hit error: WaitForServer(25bb1fc6-d74c-41ec-afa0-b0b331bdecc8, yb-15-puppy-food-arm-1-n1, type=MASTER) did not respond in the set time.. ``` This is with packed columns enabled on YSQL and YCQL, tserver and master. 
I will leave the universe in the current state for further analysis, tell me when I can destroy and recreate it.
priority
corruption yb master sys catalog writer cc unable to initialize catalog manager failed to initialize sys tables async failed log replay reason system catalog snapshot is corrupted or built using different build type found wrong metadata type vs description during an upgrade of my from to the master server fails to come up after upgrade cat yb master fatal details txt src yb master master main cc corruption yb master sys catalog writer cc unable to initialize catalog manager failed to initialize sys tables async failed log replay reason system catalog snapshot is corrupted or built using different build type found wrong metadata type vs unknown unknown unknown unknown main libc start main unknown it keeps failing like this every minute when trying to start master again the upgrade failure is also about this master server failed to execute task sleepaftermasterrestartmillis sleepaftertserverrestartmillis nodeexporteruser prometheus universeuuid enableybc false installybc false ybcinstalled false encryptionatrestconfig encryptionatrestenabled false optype undefined type data key communicationports masterhttpport masterrpcport tserverhttpport tserverrpcport ybcontrollerhttpport ybcontrollerrrpcport rediss hit error waitforserver yb puppy food arm type master did not respond in the set time this is with packed columns enabled on ysql and ycql tserver and master i will leave the universe in the current state for further analysis tell me when i can destroy and recreate it
1
500,658
14,503,717,918
IssuesEvent
2020-12-11 23:17:24
mintproject/mint-ui-lit
https://api.github.com/repos/mintproject/mint-ui-lit
opened
New tab for selecting index/indicator when setting up a thread
feature_request high priority
When setting up a new model run, we need to make sure not only that the models show up and are updated, but also that the indices used are consistent with what is shown in the model catalog. @hvarg is working towards a new table that includes the index/indicator, and also the models and output variables of those models that can be used for selecting those indices. I am not sure this will be ready for the next evaluation, but it's something we could try.
1.0
New tab for selecting index/indicator when setting up a thread - When setting up a new model run, we need to make sure not only that the models show up and are updated, but also that the indices used are consistent with what is shown in the model catalog. @hvarg is working towards a new table that includes the index/indicator, and also the models and output variables of those models that can be used for selecting those indices. I am not sure this will be ready for the next evaluation, but it's something we could try.
priority
new tab for selecting index indicator when setting up a thread when setting a new model run we need to make sure not only that the models show and are updated but that the indices used are consistent to what is shown in the model catalog hvarg is working towards a new table that includes the index indicator and also models and output variables of those models that can be used for selecting those indices i am not sure this will be ready for next evaluation but it s something we could try
1
626,486
19,824,756,418
IssuesEvent
2022-01-20 04:18:40
inblockio/mediawiki-extensions-Aqua
https://api.github.com/repos/inblockio/mediawiki-extensions-Aqua
opened
Have all CI tests run and pass
correctness high priority Spicy-Question
To ensure we are delivering a working 'stable' enough prototype we need to ensure that our tests which are present in the code are executed and passing.
1.0
Have all CI tests run and pass - To ensure we are delivering a working 'stable' enough prototype we need to ensure that our tests which are present in the code are executed and passing.
priority
have all ci tests run and pass to ensure we are delivering a working stable enough prototype we need to ensure that our tests which are present in the code are executed and passing
1
645,917
21,032,771,424
IssuesEvent
2022-03-31 03:27:54
AY2122S2-CS2103-F11-2/tp
https://api.github.com/repos/AY2122S2-CS2103-F11-2/tp
opened
Adhoc Tasks
type.Bug priority.High Functionality
- [ ] Only alphabets or special characters for name e.g. `... s/o ...` - [ ] To look at candidate card bottom padding
1.0
Adhoc Tasks - - [ ] Only alphabets or special characters for name e.g. `... s/o ...` - [ ] To look at candidate card bottom padding
priority
adhoc tasks only alphabets or special characters for name e g s o to look at candidate card bottom padding
1
624,026
19,684,780,242
IssuesEvent
2022-01-11 20:45:03
LowellObservatory/Docker_LDT
https://api.github.com/repos/LowellObservatory/Docker_LDT
closed
Pay attention to python and subpackage versions
bug high priority
Just got bit by something that worked locally while testing (on bokeh 1.0.0) but failed in a novel way when I put it into testing (on bokeh 1.1.0). It wasn't obvious what was going on until I updated my local environment and ran into the same problem, which turned out to be some change in the ColumnDataSource structure/internal setup that gets used in all the bokeh plotting. I really, really, really should put in minimum versions for the important stuff to avoid this kind of thing.
1.0
Pay attention to python and subpackage versions - Just got bit by something that worked locally while testing (on bokeh 1.0.0) but failed in a novel way when I put it into testing (on bokeh 1.1.0). It wasn't obvious what was going on until I updated my local environment and ran into the same problem, which turned out to be some change in the ColumnDataSource structure/internal setup that gets used in all the bokeh plotting. I really, really, really should put in minimum versions for the important stuff to avoid this kind of thing.
priority
pay attention to python and subpackage versions just got bit by something that worked locally while testing on bokeh but failed in a novel way when i put it into testing on bokeh it wasn t obvious what was going on until i updated my local environment and ran into the same problem which turned out to be some change in the columndatasource structure internal setup that gets used in all the bokeh plotting i really really really should put in minimum versions for the important stuff to avoid this kind of thing
1
330,308
10,038,323,504
IssuesEvent
2019-07-18 14:56:04
2doubledoubleU/ASMR
https://api.github.com/repos/2doubledoubleU/ASMR
closed
Add ability to add feeds
high priority
All feeds are added via manual back-end SQL. Add a method in the GUI.
1.0
Add ability to add feeds - All feeds are added via manual back-end SQL. Add a method in the GUI.
priority
add ability to add feeds all feeds are added via manual back end sql add a method in the gui
1
407,131
11,906,839,934
IssuesEvent
2020-03-30 21:04:43
cucapra/futil
https://api.github.com/repos/cucapra/futil
opened
Run a single pass from the CLI
High priority
This will enable snapshot testing of individual passes. Prioritize this to get testing.
1.0
Run a single pass from the CLI - This will enable snapshot testing of individual passes. Prioritize this to get testing.
priority
run a single pass from the cli this will enable snapshot testing of individual passes prioritize this to get testing
1
782,120
27,487,190,471
IssuesEvent
2023-03-04 07:11:33
AY2223S2-CS2103T-T11-3/tp
https://api.github.com/repos/AY2223S2-CS2103T-T11-3/tp
closed
Update the DG: user stories, glossary, NFRs, use cases
type.Task priority.High
Add the following to the DG, based on your project notes from the previous weeks. - Target user profile, value proposition, and user stories: Update the target user profile and value proposition to match the project direction you have selected. Give a list of the user stories (and update/delete existing ones, if applicable), including priorities. This can include user stories considered but will not be included in the final product. - Use cases: Give use cases (textual form) for a few representative user stories that need multiple steps to complete. e.g. Adding a tag to a person (assume the user needs to find the person first) - Non-functional requirements: Note: Many of the given project constraints can be considered NFRs. You can add more. e.g. performance requirements, usability requirements, scalability requirements, etc. - Glossary: Define terms that are worth recording.
1.0
Update the DG: user stories, glossary, NFRs, use cases - Add the following to the DG, based on your project notes from the previous weeks. - Target user profile, value proposition, and user stories: Update the target user profile and value proposition to match the project direction you have selected. Give a list of the user stories (and update/delete existing ones, if applicable), including priorities. This can include user stories considered but will not be included in the final product. - Use cases: Give use cases (textual form) for a few representative user stories that need multiple steps to complete. e.g. Adding a tag to a person (assume the user needs to find the person first) - Non-functional requirements: Note: Many of the given project constraints can be considered NFRs. You can add more. e.g. performance requirements, usability requirements, scalability requirements, etc. - Glossary: Define terms that are worth recording.
priority
update the dg user stories glossary nfrs use cases add the following to the dg based on your project notes from the previous weeks target user profile value proposition and user stories update the target user profile and value proposition to match the project direction you have selected give a list of the user stories and update delete existing ones if applicable including priorities this can include user stories considered but will not be included in the final product use cases give use cases textual form for a few representative user stories that need multiple steps to complete e g adding a tag to a person assume the user needs to find the person first non functional requirements note many of the given project constraints can be considered nfrs you can add more e g performance requirements usability requirements scalability requirements etc glossary define terms that are worth recording
1
615,821
19,277,037,778
IssuesEvent
2021-12-10 13:06:53
merico-dev/lake
https://api.github.com/repos/merico-dev/lake
closed
Add Domain Layer Push api (Create only)
proposal priority: high
## Use Case 1. Push api makes it more flexible to write plugins 2. It provides a better user experience for async data outside our system. IE: Real time data ## Why do we want this? 1. There are companies that use their own software that we can't make a plugin for. If they want that data, they have to make it and its only for them. 2. Not every company has golang devs. They need to interface with a REST api for our domain layer. 3. The api we expose does not handle all cases. 4. Push apis allow 3rd party services to push data to us when the time is right. ## What should it do? 1. Exposes CRUD access to all models in the domain layer. 2. Let sql constraints validate the input. The api returns actual sql debugging information 3. This should be a service separate from lake. ## How secure is it? 1. Prevent some queries from breaking data? 2. Prevent security threats? ## Questions 1. Why do we maintain a plugin architecture AND a REST api? ANSWER: We need both 2. Do we want to maintain both? ANSWER: Yes ## Notes - This is simply a REST api and not a message queue for real time events - This should be written in golang
1.0
Add Domain Layer Push api (Create only) - ## Use Case 1. Push api makes it more flexible to write plugins 2. It provides a better user experience for async data outside our system. IE: Real time data ## Why do we want this? 1. There are companies that use their own software that we can't make a plugin for. If they want that data, they have to make it and its only for them. 2. Not every company has golang devs. They need to interface with a REST api for our domain layer. 3. The api we expose does not handle all cases. 4. Push apis allow 3rd party services to push data to us when the time is right. ## What should it do? 1. Exposes CRUD access to all models in the domain layer. 2. Let sql constraints validate the input. The api returns actual sql debugging information 3. This should be a service separate from lake. ## How secure is it? 1. Prevent some queries from breaking data? 2. Prevent security threats? ## Questions 1. Why do we maintain a plugin architecture AND a REST api? ANSWER: We need both 2. Do we want to maintain both? ANSWER: Yes ## Notes - This is simply a REST api and not a message queue for real time events - This should be written in golang
priority
add domain layer push api create only use case push api makes it more flexible to write plugins it provides a better user experience for async data outside our system ie real time data why do we want this there are companies that use their own software that we can t make a plugin for if they want that data they have to make it and its only for them not every company has golang devs they need to interface with a rest api for our domain layer the api we expose does not handle all cases push apis allow party services to push data to us when the time is right what should it do exposes crud access to all models in the domain layer let sql constraints validate the input the api returns actual sql debugging information this should be a service separate from lake how secure is it prevent some queries from breaking data prevent security threats questions why do we maintain a plugin architecture and a rest api answer we need both do we want to maintain both answer yes notes this is simply a rest api and not a message queue for real time events this should be written in golang
1
495,523
14,283,466,382
IssuesEvent
2020-11-23 11:03:12
CLOSER-Cohorts/archivist
https://api.github.com/repos/CLOSER-Cohorts/archivist
closed
Code lists not in doc view on staging server
High priority
There are differences between the staging and the live versions of Archivist e.g. datasets not loading #216, #312 All questions which have code lists attached seem to have lost this connection, although from the code list view they look like they are linked. I first noticed this on 15/16th October.
1.0
Code lists not in doc view on staging server - There are differences between the staging and the live versions of Archivist e.g. datasets not loading #216, #312 All questions which have code lists attached seem to have lost this connection, although from the code list view they look like they are linked. I first noticed this on 15/16th October.
priority
code lists not in doc view on staging server there are differences between the staging and the live versions of archivist e g datasets not loading all questions which have code lists attached seem to have lost this connection although from the code list view they look like they are linked i first noticed this on october
1
766,766
26,897,944,694
IssuesEvent
2023-02-06 13:48:06
ut-issl/c2a-tlm-cmd-code-generator
https://api.github.com/repos/ut-issl/c2a-tlm-cmd-code-generator
opened
Support telemetry compression for u16 and larger types
bug priority::high
## Overview Support telemetry compression for u16 and larger types ## Details - Adapt to https://github.com/ut-issl/wings/pull/14 - Detect telemetry compression by the type size differing from bitlen, rather than by merged cells (i.e. the same as WINGS) - In that case, add handling such as whether to allow a bitpos of 8 or more ## Close condition Once supported
1.0
Support telemetry compression for u16 and larger types - ## Overview Support telemetry compression for u16 and larger types ## Details - Adapt to https://github.com/ut-issl/wings/pull/14 - Detect telemetry compression by the type size differing from bitlen, rather than by merged cells (i.e. the same as WINGS) - In that case, add handling such as whether to allow a bitpos of 8 or more ## Close condition Once supported
priority
overview support telemetry compression for larger types details adapt to detect telemetry compression by the type size differing from bitlen rather than by merged cells the same as wings in that case add handling such as whether to allow a bitpos of or more close condition once supported
1
561,529
16,618,835,420
IssuesEvent
2021-06-02 20:38:49
turbot/steampipe-plugin-aws
https://api.github.com/repos/turbot/steampipe-plugin-aws
closed
Getting `404 error` when elasticache cluster is in `creating` state.
bug priority:high
**Describe the bug** Getting `Error: CacheClusterNotFound: test-redis-cluster-0003-002 is either not present or not available. status code: 404, request id: 21eacb5c-56f0-4bce-b4ea-f9f7e360df1b` when elasticache cluster is in `creating` state. and trying execute below query ``` select * from aws_elasticache_cluster ``` **Steampipe version (`steampipe -v`)** : v0.5.1 **Plugin version (`steampipe plugin list`)** AWS: v0.18.0 **To reproduce** Run the above query when cluster is in `Creating` state. **Expected behavior** Shouldn't get any error. **Additional context**
1.0
Getting `404 error` when elasticache cluster is in `creating` state. - **Describe the bug** Getting `Error: CacheClusterNotFound: test-redis-cluster-0003-002 is either not present or not available. status code: 404, request id: 21eacb5c-56f0-4bce-b4ea-f9f7e360df1b` when elasticache cluster is in `creating` state. and trying execute below query ``` select * from aws_elasticache_cluster ``` **Steampipe version (`steampipe -v`)** : v0.5.1 **Plugin version (`steampipe plugin list`)** AWS: v0.18.0 **To reproduce** Run the above query when cluster is in `Creating` state. **Expected behavior** Shouldn't get any error. **Additional context**
priority
getting error when elasticache cluster is in creating state describe the bug getting error cacheclusternotfound test redis cluster is either not present or not available status code request id when elasticache cluster is in creating state and trying execute below query select from aws elasticache cluster steampipe version steampipe v plugin version steampipe plugin list aws to reproduce run the above query when cluster is in creating state expected behavior shouldn t get any error additional context
1
50,795
3,006,622,455
IssuesEvent
2015-07-27 11:44:03
Itseez/opencv
https://api.github.com/repos/Itseez/opencv
opened
Build Failure on Mac OS X (GLX support)
affected: master auto-transferred bug category: highgui-gui priority: normal
Transferred from http://code.opencv.org/issues/4495 ``` || Hendi Saleh on 2015-07-22 03:56 || Priority: Normal || Affected: branch 'master' (3.0-dev) || Category: highgui-gui || Tracker: Bug || Difficulty: || PR: || Platform: x64 / Windows ``` Build Failure on Mac OS X (GLX support) ----------- ``` On Mac OS X 10.6.8 configured with OpenGL, build stops at modules/highgui/src/window_QT.cpp:3364 with a not-declared error of a GLX function. <pre> <builddir>/opencv/modules/highgui/src/window_QT.cpp: In member function ‘virtual void GlFuncTab_QT::generateBitmapFont(const std::string&, int, int, bool, bool, int, int, int) const’: <builddir>/opencv/modules/highgui/src/window_QT.cpp:3364: error: ‘glXUseXFont’ was not declared in this scope <builddir>/opencv/modules/highgui/src/window_QT.cpp:3355: warning: unused variable ‘cvFuncName’ make[2]: *** [modules/highgui/CMakeFiles/opencv_highgui.dir/src/window_QT.cpp.o] Error 1 make[1]: *** [modules/highgui/CMakeFiles/opencv_highgui.dir/all] Error 2 make: *** [all] Error 2 </pre> This is, to the best of my knowledge, the only usage of a GLX function when building with Mac OS X. It isn't appropriate to use GLX on Mac OS X because Apple uses AGL's aglUseFont for the same purpose. This error may be related to improper @#ifdef@s: At the top of window_QT.cpp, @#ifdef Q_WS_X11@ is used to guard the inclusion of GL/glx.h. However, at line 3364, a *#ifndef Q_WS_WIN* is used to guard the use of glXUseXFont. This allows a non-Windows machine possessing X11 but lacking GLX (i.e. Mac OS X) to reach the faulty call. ``` History -------
1.0
Build Failure on Mac OS X (GLX support) - Transferred from http://code.opencv.org/issues/4495 ``` || Hendi Saleh on 2015-07-22 03:56 || Priority: Normal || Affected: branch 'master' (3.0-dev) || Category: highgui-gui || Tracker: Bug || Difficulty: || PR: || Platform: x64 / Windows ``` Build Failure on Mac OS X (GLX support) ----------- ``` On Mac OS X 10.6.8 configured with OpenGL, build stops at modules/highgui/src/window_QT.cpp:3364 with a not-declared error of a GLX function. <pre> <builddir>/opencv/modules/highgui/src/window_QT.cpp: In member function ‘virtual void GlFuncTab_QT::generateBitmapFont(const std::string&, int, int, bool, bool, int, int, int) const’: <builddir>/opencv/modules/highgui/src/window_QT.cpp:3364: error: ‘glXUseXFont’ was not declared in this scope <builddir>/opencv/modules/highgui/src/window_QT.cpp:3355: warning: unused variable ‘cvFuncName’ make[2]: *** [modules/highgui/CMakeFiles/opencv_highgui.dir/src/window_QT.cpp.o] Error 1 make[1]: *** [modules/highgui/CMakeFiles/opencv_highgui.dir/all] Error 2 make: *** [all] Error 2 </pre> This is, to the best of my knowledge, the only usage of a GLX function when building with Mac OS X. It isn't appropriate to use GLX on Mac OS X because Apple uses AGL's aglUseFont for the same purpose. This error may be related to improper @#ifdef@s: At the top of window_QT.cpp, @#ifdef Q_WS_X11@ is used to guard the inclusion of GL/glx.h. However, at line 3364, a *#ifndef Q_WS_WIN* is used to guard the use of glXUseXFont. This allows a non-Windows machine possessing X11 but lacking GLX (i.e. Mac OS X) to reach the faulty call. ``` History -------
priority
build failure on mac os x glx support transferred from hendi saleh on priority normal affected branch master dev category highgui gui tracker bug difficulty pr platform windows build failure on mac os x glx support on mac os x configured with opengl build stops at modules highgui src window qt cpp with a not declared error of a glx function opencv modules highgui src window qt cpp in member function ‘virtual void glfunctab qt generatebitmapfont const std string int int bool bool int int int const’ opencv modules highgui src window qt cpp error ‘glxusexfont’ was not declared in this scope opencv modules highgui src window qt cpp warning unused variable ‘cvfuncname’ make error make error make error this is to the best of my knowledge the only usage of a glx function when building with mac os x it isn t appropriate to use glx on mac os x because apple uses agl s aglusefont for the same purpose this error may be related to improper ifdef s at the top of window qt cpp ifdef q ws is used to guard the inclusion of gl glx h however at line a ifndef q ws win is used to guard the use of glxusexfont this allows a non windows machine possessing but lacking glx i e mac os x to reach the faulty call history
1
315,200
9,607,673,257
IssuesEvent
2019-05-11 21:08:01
prysmaticlabs/prysm
https://api.github.com/repos/prysmaticlabs/prysm
closed
beacon - panic: runtime error: index out of range
Bug Priority: High
beacon-chain crash on docker. ``` panic: runtime error: index out of range goroutine 1329 [running]: github.com/prysmaticlabs/prysm/beacon-chain/blockchain.(*ChainService).saveValidatorIdx(0xc0007ac300, 0xc0001a6e00, 0xc002d7a998, 0x1) beacon-chain/blockchain/block_processing.go:323 +0x220 github.com/prysmaticlabs/prysm/beacon-chain/blockchain.(*ChainService).runStateTransition(0xc0007ac300, 0x16186e0, 0xc0000820c0, 0x8f4eaf734d5f4844, 0x5a406f5c9ebac0a5, 0x7cebd57a54902030, 0x82b8c587855f4ea4, 0x0, 0xc0001a6e00, 0xc0001a6e00, ...) beacon-chain/blockchain/block_processing.go:299 +0x341 github.com/prysmaticlabs/prysm/beacon-chain/blockchain.(*ChainService).ApplyBlockStateTransition(0xc0007ac300, 0x16186e0, 0xc0000820c0, 0xc0018a6320, 0xc0001a6e00, 0x0, 0x0, 0x82b8c587855f4ea4) beacon-chain/blockchain/block_processing.go:169 +0x1dc github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync.(*InitialSync).exitInitialSync(0xc0005caaa0, 0x16186e0, 0xc0000820c0, 0xc0018a6320, 0xc000f1ae10, 0x0, 0x0) beacon-chain/sync/initial-sync/service.go:183 +0x51b github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync.(*InitialSync).processBlock(0xc0005caaa0, 0x16187a0, 0xc00292c1e0, 0xc0018a6320, 0xc000f1ae10, 0x0, 0x0) beacon-chain/sync/initial-sync/sync_blocks.go:28 +0x111 github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync.(*InitialSync).processBatchedBlocks(0xc0005caaa0, 0x16187a0, 0xc001802660, 0xc001b24780, 0x22, 0x1610d60, 0xc001a6fb00, 0xc000f1ae10, 0x0, 0x0) beacon-chain/sync/initial-sync/sync_blocks.go:63 +0x361 github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync.(*InitialSync).syncToPeer(0xc0005caaa0, 0x1618760, 0xc00153efc0, 0xc000f1ae10, 0xc001428c60, 0x22, 0x0, 0x0) beacon-chain/sync/initial-sync/service.go:291 +0x6c7 github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync.(*InitialSync).run(0xc0005caaa0, 0xc00049fe90) beacon-chain/sync/initial-sync/service.go:249 +0x3f4 created by github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync.(*InitialSync).Start beacon-chain/sync/initial-sync/service.go:138 +0x49 ```
1.0
beacon - panic: runtime error: index out of range - beacon-chain crash on docker. ``` panic: runtime error: index out of range goroutine 1329 [running]: github.com/prysmaticlabs/prysm/beacon-chain/blockchain.(*ChainService).saveValidatorIdx(0xc0007ac300, 0xc0001a6e00, 0xc002d7a998, 0x1) beacon-chain/blockchain/block_processing.go:323 +0x220 github.com/prysmaticlabs/prysm/beacon-chain/blockchain.(*ChainService).runStateTransition(0xc0007ac300, 0x16186e0, 0xc0000820c0, 0x8f4eaf734d5f4844, 0x5a406f5c9ebac0a5, 0x7cebd57a54902030, 0x82b8c587855f4ea4, 0x0, 0xc0001a6e00, 0xc0001a6e00, ...) beacon-chain/blockchain/block_processing.go:299 +0x341 github.com/prysmaticlabs/prysm/beacon-chain/blockchain.(*ChainService).ApplyBlockStateTransition(0xc0007ac300, 0x16186e0, 0xc0000820c0, 0xc0018a6320, 0xc0001a6e00, 0x0, 0x0, 0x82b8c587855f4ea4) beacon-chain/blockchain/block_processing.go:169 +0x1dc github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync.(*InitialSync).exitInitialSync(0xc0005caaa0, 0x16186e0, 0xc0000820c0, 0xc0018a6320, 0xc000f1ae10, 0x0, 0x0) beacon-chain/sync/initial-sync/service.go:183 +0x51b github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync.(*InitialSync).processBlock(0xc0005caaa0, 0x16187a0, 0xc00292c1e0, 0xc0018a6320, 0xc000f1ae10, 0x0, 0x0) beacon-chain/sync/initial-sync/sync_blocks.go:28 +0x111 github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync.(*InitialSync).processBatchedBlocks(0xc0005caaa0, 0x16187a0, 0xc001802660, 0xc001b24780, 0x22, 0x1610d60, 0xc001a6fb00, 0xc000f1ae10, 0x0, 0x0) beacon-chain/sync/initial-sync/sync_blocks.go:63 +0x361 github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync.(*InitialSync).syncToPeer(0xc0005caaa0, 0x1618760, 0xc00153efc0, 0xc000f1ae10, 0xc001428c60, 0x22, 0x0, 0x0) beacon-chain/sync/initial-sync/service.go:291 +0x6c7 github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync.(*InitialSync).run(0xc0005caaa0, 0xc00049fe90) beacon-chain/sync/initial-sync/service.go:249 +0x3f4 created by github.com/prysmaticlabs/prysm/beacon-chain/sync/initial-sync.(*InitialSync).Start beacon-chain/sync/initial-sync/service.go:138 +0x49 ```
priority
beacon panic runtime error index out of range beacon chain crash on docker panic runtime error index out of range goroutine github com prysmaticlabs prysm beacon chain blockchain chainservice savevalidatoridx beacon chain blockchain block processing go github com prysmaticlabs prysm beacon chain blockchain chainservice runstatetransition beacon chain blockchain block processing go github com prysmaticlabs prysm beacon chain blockchain chainservice applyblockstatetransition beacon chain blockchain block processing go github com prysmaticlabs prysm beacon chain sync initial sync initialsync exitinitialsync beacon chain sync initial sync service go github com prysmaticlabs prysm beacon chain sync initial sync initialsync processblock beacon chain sync initial sync sync blocks go github com prysmaticlabs prysm beacon chain sync initial sync initialsync processbatchedblocks beacon chain sync initial sync sync blocks go github com prysmaticlabs prysm beacon chain sync initial sync initialsync synctopeer beacon chain sync initial sync service go github com prysmaticlabs prysm beacon chain sync initial sync initialsync run beacon chain sync initial sync service go created by github com prysmaticlabs prysm beacon chain sync initial sync initialsync start beacon chain sync initial sync service go
1
214,368
7,269,304,565
IssuesEvent
2018-02-20 13:16:25
metasfresh/metasfresh-webui-api
https://api.github.com/repos/metasfresh/metasfresh-webui-api
closed
Automatic group creation in sales order lines
branch:master branch:release priority:high
### Is this a bug or feature request? Feature Request ### What is the current behavior? Currently, we have the action to create sales orderline groups in webui. When creating, the user must decide which groups shall be created and how they shall be called. #### Which are the steps to reproduce? Open, try and see. ### What is the expected or desired behavior? The user shall be able to select all orderlines (select all button?) and then start a new action "automatic grouping". The groups are then created automatically per product >> product category >> sales group_id. The Product used in the grouping lines shall always be "Sum".
1.0
Automatic group creation in sales order lines - ### Is this a bug or feature request? Feature Request ### What is the current behavior? Currently, we have the action to create sales orderline groups in webui. When creating, the user must decide which groups shall be created and how they shall be called. #### Which are the steps to reproduce? Open, try and see. ### What is the expected or desired behavior? The user shall be able to select all orderlines (select all button?) and then start a new action "automatic grouping". The groups are then created automatically per product >> product category >> sales group_id. The Product used in the grouping lines shall always be "Sum".
priority
automatic group creation in sales order lines is this a bug or feature request feature request what is the current behavior currently we have the action to create sales orderline groups in webui when creating the user must decide which groups shall be created and how they shall be called which are the steps to reproduce open try and see what is the expected or desired behavior the user shall be able to select all orderlines select all button and then start a new action automatic grouping the groups are then created automatically per product product category sales group id the product used in the grouping lines shall always be sum
1
786,239
27,640,065,290
IssuesEvent
2023-03-10 17:17:08
Ellivers/WorldTool
https://api.github.com/repos/Ellivers/WorldTool
closed
Clone with Template mode places certain rotated areas in the wrong place
bug priority: high
This one's most likely going to be painful to fix. Try rebuilding what the code is supposed to do, while looking at the current code as a reference.
1.0
Clone with Template mode places certain rotated areas in the wrong place - This one's most likely going to be painful to fix. Try rebuilding what the code is supposed to do, while looking at the current code as a reference.
priority
clone with template mode places certain rotated areas in the wrong place this one s most likely going to be painful to fix try rebuilding what the code is supposed to do while looking at the current code as a reference
1
470,450
13,537,861,730
IssuesEvent
2020-09-16 11:11:12
wso2/product-is
https://api.github.com/repos/wso2/product-is
opened
Getting Null point Exception when adding claim dialects using the configuration file
Priority/High Severity/Major bug
**Describe the issue:** ```ERROR {org.wso2.carbon.user.core.claim.DefaultClaimManager} - Error while initializing claim manager [2020-09-16 16:17:06,546] [] ERROR {org.wso2.carbon.user.core.internal.Activator} - Cannot start User Manager Core bundle org.wso2.carbon.user.core.UserStoreException: Cannot initialize the realm. at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:286) at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:102) at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:115) at org.wso2.carbon.user.core.internal.Activator.startDeploy(Activator.java:72) at org.wso2.carbon.user.core.internal.BundleCheckActivator.start(BundleCheckActivator.java:61) at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:842) at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:1) at java.base/java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.internal.framework.BundleContextImpl.startActivator(BundleContextImpl.java:834) at org.eclipse.osgi.internal.framework.BundleContextImpl.start(BundleContextImpl.java:791) at org.eclipse.osgi.internal.framework.EquinoxBundle.startWorker0(EquinoxBundle.java:1013) at org.eclipse.osgi.internal.framework.EquinoxBundle$EquinoxModule.startWorker(EquinoxBundle.java:365) at org.eclipse.osgi.container.Module.doStart(Module.java:598) at org.eclipse.osgi.container.Module.start(Module.java:462) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel$1.run(ModuleContainer.java:1820) at org.eclipse.osgi.internal.framework.EquinoxContainerAdaptor$2$1.execute(EquinoxContainerAdaptor.java:150) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1813) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1770) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1735) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1661) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1) at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234) at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:345) Caused by: java.lang.NullPointerException at org.wso2.carbon.user.core.claim.inmemory.InMemoryClaimManager.<init>(InMemoryClaimManager.java:67) at org.wso2.carbon.user.core.common.DefaultRealm.init(DefaultRealm.java:130) at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:276) ... 22 more ``` **How to reproduce:** 1. Open the claim-config.xml file found in the <IS_HOME>/repository/conf/ folder. To add a new claim dialect, add the following configuration to the file along with the new claims you want to add under the dialect. example, the new claim dialect is named SampleAppClaims . ```<Dialect dialectURI="http://wso2.org/SampleAppClaims"> <Claim> <ClaimURI>http://wso2.org/SampleAppClaims/givenname</ClaimURI> <DisplayName>First Name</DisplayName> <MappedLocalClaim>http://wso2.org/claims/givenname</MappedLocalClaim> </Claim> <Claim> <ClaimURI>http://wso2.org/SampleAppClaims/nickName</ClaimURI> <DisplayName>Nick Name</DisplayName> <MappedLocalClaim>http://wso2.org/claims/nickname</MappedLocalClaim> </Claim> </Dialect> ``` 3. Once you have edited the claim-config. xml file, start WSO2 Identity Server. NOTE: did this before the first start-up of the IS product. **Expected behavior:** The configurations should be applied and should able to view the new claim dialect via the console > Manage. **Environment information** (_Please complete the following information; remove any unnecessary fields_) **:** - Product Version: 5.11 m36 snapshot - OS: mac - Database: H2 - Userstore:Ldap --- Refer : https://is.docs.wso2.com/en/5.11.0/learn/adding-claim-dialects/
1.0
Getting Null point Exception when adding claim dialects using the configuration file - **Describe the issue:** ```ERROR {org.wso2.carbon.user.core.claim.DefaultClaimManager} - Error while initializing claim manager [2020-09-16 16:17:06,546] [] ERROR {org.wso2.carbon.user.core.internal.Activator} - Cannot start User Manager Core bundle org.wso2.carbon.user.core.UserStoreException: Cannot initialize the realm. at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:286) at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:102) at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:115) at org.wso2.carbon.user.core.internal.Activator.startDeploy(Activator.java:72) at org.wso2.carbon.user.core.internal.BundleCheckActivator.start(BundleCheckActivator.java:61) at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:842) at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:1) at java.base/java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.internal.framework.BundleContextImpl.startActivator(BundleContextImpl.java:834) at org.eclipse.osgi.internal.framework.BundleContextImpl.start(BundleContextImpl.java:791) at org.eclipse.osgi.internal.framework.EquinoxBundle.startWorker0(EquinoxBundle.java:1013) at org.eclipse.osgi.internal.framework.EquinoxBundle$EquinoxModule.startWorker(EquinoxBundle.java:365) at org.eclipse.osgi.container.Module.doStart(Module.java:598) at org.eclipse.osgi.container.Module.start(Module.java:462) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel$1.run(ModuleContainer.java:1820) at org.eclipse.osgi.internal.framework.EquinoxContainerAdaptor$2$1.execute(EquinoxContainerAdaptor.java:150) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1813) at 
org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1770) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1735) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1661) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1) at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234) at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:345) Caused by: java.lang.NullPointerException at org.wso2.carbon.user.core.claim.inmemory.InMemoryClaimManager.<init>(InMemoryClaimManager.java:67) at org.wso2.carbon.user.core.common.DefaultRealm.init(DefaultRealm.java:130) at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:276) ... 22 more ``` **How to reproduce:** 1. Open the claim-config.xml file found in the <IS_HOME>/repository/conf/ folder. To add a new claim dialect, add the following configuration to the file along with the new claims you want to add under the dialect. example, the new claim dialect is named SampleAppClaims . ```<Dialect dialectURI="http://wso2.org/SampleAppClaims"> <Claim> <ClaimURI>http://wso2.org/SampleAppClaims/givenname</ClaimURI> <DisplayName>First Name</DisplayName> <MappedLocalClaim>http://wso2.org/claims/givenname</MappedLocalClaim> </Claim> <Claim> <ClaimURI>http://wso2.org/SampleAppClaims/nickName</ClaimURI> <DisplayName>Nick Name</DisplayName> <MappedLocalClaim>http://wso2.org/claims/nickname</MappedLocalClaim> </Claim> </Dialect> ``` 3. Once you have edited the claim-config. xml file, start WSO2 Identity Server. NOTE: did this before the first start-up of the IS product. **Expected behavior:** The configurations should be applied and should able to view the new claim dialect via the console > Manage. 
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:** - Product Version: 5.11 m36 snapshot - OS: mac - Database: H2 - Userstore:Ldap --- Refer : https://is.docs.wso2.com/en/5.11.0/learn/adding-claim-dialects/
priority
getting null point exception when adding claim dialects using the configuration file describe the issue error org carbon user core claim defaultclaimmanager error while initializing claim manager error org carbon user core internal activator cannot start user manager core bundle org carbon user core userstoreexception cannot initialize the realm at org carbon user core common defaultrealmservice initializerealm defaultrealmservice java at org carbon user core common defaultrealmservice defaultrealmservice java at org carbon user core common defaultrealmservice defaultrealmservice java at org carbon user core internal activator startdeploy activator java at org carbon user core internal bundlecheckactivator start bundlecheckactivator java at org eclipse osgi internal framework bundlecontextimpl run bundlecontextimpl java at org eclipse osgi internal framework bundlecontextimpl run bundlecontextimpl java at java base java security accesscontroller doprivileged native method at org eclipse osgi internal framework bundlecontextimpl startactivator bundlecontextimpl java at org eclipse osgi internal framework bundlecontextimpl start bundlecontextimpl java at org eclipse osgi internal framework equinoxbundle equinoxbundle java at org eclipse osgi internal framework equinoxbundle equinoxmodule startworker equinoxbundle java at org eclipse osgi container module dostart module java at org eclipse osgi container module start module java at org eclipse osgi container modulecontainer containerstartlevel run modulecontainer java at org eclipse osgi internal framework equinoxcontaineradaptor execute equinoxcontaineradaptor java at org eclipse osgi container modulecontainer containerstartlevel incstartlevel modulecontainer java at org eclipse osgi container modulecontainer containerstartlevel incstartlevel modulecontainer java at org eclipse osgi container modulecontainer containerstartlevel docontainerstartlevel modulecontainer java at org eclipse osgi container modulecontainer 
containerstartlevel dispatchevent modulecontainer java at org eclipse osgi container modulecontainer containerstartlevel dispatchevent modulecontainer java at org eclipse osgi framework eventmgr eventmanager dispatchevent eventmanager java at org eclipse osgi framework eventmgr eventmanager eventthread run eventmanager java caused by java lang nullpointerexception at org carbon user core claim inmemory inmemoryclaimmanager inmemoryclaimmanager java at org carbon user core common defaultrealm init defaultrealm java at org carbon user core common defaultrealmservice initializerealm defaultrealmservice java more how to reproduce open the claim config xml file found in the repository conf folder to add a new claim dialect add the following configuration to the file along with the new claims you want to add under the dialect example the new claim dialect is named sampleappclaims dialect dialecturi first name nick name once you have edited the claim config xml file start identity server note did this before the first start up of the is product expected behavior the configurations should be applied and should able to view the new claim dialect via the console manage environment information please complete the following information remove any unnecessary fields product version snapshot os mac database userstore ldap refer
1
214,073
7,263,183,151
IssuesEvent
2018-02-19 09:55:03
wso2/product-apim
https://api.github.com/repos/wso2/product-apim
opened
[IS-AS-KM] APIM version should be corrected on "Configure IS as KM" section
2.2.0 Priority/High Type/Bug Type/Docs
**Description:** Section [1] mentions the APIM version is as 2.1.0, ideally it should be 2xx or 2.1.0-update(until 2.2.0 is released in future) otherwise there is no point of having this section under 2xx. [1] https://docs.wso2.com/display/AM2xx/Configuring+WSO2+Identity+Server+as+a+Key+Manager **Affected Product Version:** 2.1.0-update(2xx)
1.0
[IS-AS-KM] APIM version should be corrected on "Configure IS as KM" section - **Description:** Section [1] mentions the APIM version is as 2.1.0, ideally it should be 2xx or 2.1.0-update(until 2.2.0 is released in future) otherwise there is no point of having this section under 2xx. [1] https://docs.wso2.com/display/AM2xx/Configuring+WSO2+Identity+Server+as+a+Key+Manager **Affected Product Version:** 2.1.0-update(2xx)
priority
apim version should be corrected on configure is as km section description section mentions the apim version is as ideally it should be or update until is released in future otherwise there is no point of having this section under affected product version update
1
84,672
3,670,727,541
IssuesEvent
2016-02-22 00:41:51
cs2103jan2016-f14-4j/main
https://api.github.com/repos/cs2103jan2016-f14-4j/main
closed
Storage support for reading task objects from file
priority.high type.task
Logic calls `Task[] Storage.getTaskObjects()` #9
1.0
Storage support for reading task objects from file - Logic calls `Task[] Storage.getTaskObjects()` #9
priority
storage support for reading task objects from file logic calls task storage gettaskobjects
1
489,667
14,109,519,169
IssuesEvent
2020-11-06 19:44:12
zeebe-io/zeebe
https://api.github.com/repos/zeebe-io/zeebe
closed
Deployment Reprocessing Inconsistency with DeploymentCache
Impact: Availability Impact: Data Priority: High Scope: broker Severity: Critical Status: Ready Type: Bug
After looking into `DeploymentCreateProcessor` and `WorkflowPersistenceCache`, we can say that the processor is definitely a problem for the reprocessing detection. Depending on the in-memory state of the cache, the `CREATE` command is accepted or rejected. So, it depends on which records the processor has processed before (independently from the actual records on the log stream). On reprocessing, if the processor reads more or less `CREATE` commands as the processor that wrote the follow-up event (`CREATED` / rejection) then it produces a different follow-up event and it is detected as reprocessing issue. _Originally posted by @saig0 in https://github.com/zeebe-io/zeebe/issues/5688#issuecomment-721021464_
1.0
Deployment Reprocessing Inconsistency with DeploymentCache - After looking into `DeploymentCreateProcessor` and `WorkflowPersistenceCache`, we can say that the processor is definitely a problem for the reprocessing detection. Depending on the in-memory state of the cache, the `CREATE` command is accepted or rejected. So, it depends on which records the processor has processed before (independently from the actual records on the log stream). On reprocessing, if the processor reads more or less `CREATE` commands as the processor that wrote the follow-up event (`CREATED` / rejection) then it produces a different follow-up event and it is detected as reprocessing issue. _Originally posted by @saig0 in https://github.com/zeebe-io/zeebe/issues/5688#issuecomment-721021464_
priority
deployment reprocessing inconsistency with deploymentcache after looking into deploymentcreateprocessor and workflowpersistencecache we can say that the processor is definitely a problem for the reprocessing detection depending on the in memory state of the cache the create command is accepted or rejected so it depends on which records the processor has processed before independently from the actual records on the log stream on reprocessing if the processor reads more or less create commands as the processor that wrote the follow up event created rejection then it produces a different follow up event and it is detected as reprocessing issue originally posted by in
1
276,278
8,596,599,482
IssuesEvent
2018-11-15 16:22:04
strapi/strapi
https://api.github.com/repos/strapi/strapi
closed
Graphql data includes all relational objects when no relation is selected
priority: high status: confirmed type: bug 🐛
<!-- ⚠️ If you do not respect this template your issue will be closed. --> <!-- =============================================================================== --> <!-- ⚠️ If you are not using the current Strapi release, you will be asked to update. --> <!-- Please see the wiki for guides on upgrading to the latest release. --> <!-- =============================================================================== --> <!-- ⚠️ Make sure to browse the opened and closed issues before submitting your issue. --> <!-- ⚠️ Before writing your issue make sure you are using:--> <!-- Node 10.x.x --> <!-- npm 6.x.x --> <!-- The latest version of Strapi. --> **Informations** - **Node.js version**: v10.0.0 - **npm version**: 6.0.1 - **Strapi version**: 3.0.0-alpha.14.3 - **Database**: MySQL 5.7.24 - **Operating system**: macOS 10.14 (Mojave) **What is the current behavior?** Having a many-to-many relation, graphql shows the list of all foreign elements of a relation when none is selected in an element. **Steps to reproduce the problem** 1. fresh strapi install + graphql 2. create content types: - song with title (string) - artist with name (string) 3. add relation between song and artist (many to many) 4. add some artists (A1, A2, A3...) 5. add some songs (S1, S2, S3...) 6. add in S1: A1 7. add in S2: A2, A3 8. request in graphql: `query { songs { title artists { name } } }` and get the result: `{ "data": { "songs": [ { "title": "S1", "artists": [ { "name": "A1" } ] }, { "title": "S2", "artists": [] }, { "title": "S3", "artists": [ { "name": "A1" }, { "name": "A2" }, { "name": "A3" }, { "name": "A4" } ] } ] } }` Here you can see, song "S3" shows the complete list of artists, although S3 has no artists. 
Optional: Verify REST call (correct result): Visit http://localhost:1337/songs and get: `[{"id":1,"title":"S1","created_at":"2018-10-24T11:13:15.000Z","updated_at":"2018-10-24T11:13:46.000Z","artists":[{"id":1,"name":"A1","created_at":"2018-10-24T11:13:26.000Z","updated_at":"2018-10-24T11:13:26.000Z"}]},{"id":2,"title":"S2","created_at":"2018-10-24T11:13:18.000Z","updated_at":"2018-10-24T11:23:50.000Z","artists":[{"id":3,"name":"A3","created_at":"2018-10-24T11:13:30.000Z","updated_at":"2018-10-24T11:13:30.000Z"},{"id":2,"name":"A2","created_at":"2018-10-24T11:13:28.000Z","updated_at":"2018-10-24T11:13:28.000Z"}]},{"id":3,"title":"S3","created_at":"2018-10-24T11:13:20.000Z","updated_at":"2018-10-24T11:43:22.000Z","artists":[]}]` Here you can see: the artists array is empty for S3. **What is the expected behavior?** Empty list/array when no objects are related. **Suggested solutions** Fix code for graphql ;)
1.0
Graphql data includes all relational objects when no relation is selected - <!-- ⚠️ If you do not respect this template your issue will be closed. --> <!-- =============================================================================== --> <!-- ⚠️ If you are not using the current Strapi release, you will be asked to update. --> <!-- Please see the wiki for guides on upgrading to the latest release. --> <!-- =============================================================================== --> <!-- ⚠️ Make sure to browse the opened and closed issues before submitting your issue. --> <!-- ⚠️ Before writing your issue make sure you are using:--> <!-- Node 10.x.x --> <!-- npm 6.x.x --> <!-- The latest version of Strapi. --> **Informations** - **Node.js version**: v10.0.0 - **npm version**: 6.0.1 - **Strapi version**: 3.0.0-alpha.14.3 - **Database**: MySQL 5.7.24 - **Operating system**: macOS 10.14 (Mojave) **What is the current behavior?** Having a many-to-many relation, graphql shows the list of all foreign elements of a relation when none is selected in an element. **Steps to reproduce the problem** 1. fresh strapi install + graphql 2. create content types: - song with title (string) - artist with name (string) 3. add relation between song and artist (many to many) 4. add some artists (A1, A2, A3...) 5. add some songs (S1, S2, S3...) 6. add in S1: A1 7. add in S2: A2, A3 8. request in graphql: `query { songs { title artists { name } } }` and get the result: `{ "data": { "songs": [ { "title": "S1", "artists": [ { "name": "A1" } ] }, { "title": "S2", "artists": [] }, { "title": "S3", "artists": [ { "name": "A1" }, { "name": "A2" }, { "name": "A3" }, { "name": "A4" } ] } ] } }` Here you can see, song "S3" shows the complete list of artists, although S3 has no artists. 
Optional: Verify REST call (correct result): Visit http://localhost:1337/songs and get: `[{"id":1,"title":"S1","created_at":"2018-10-24T11:13:15.000Z","updated_at":"2018-10-24T11:13:46.000Z","artists":[{"id":1,"name":"A1","created_at":"2018-10-24T11:13:26.000Z","updated_at":"2018-10-24T11:13:26.000Z"}]},{"id":2,"title":"S2","created_at":"2018-10-24T11:13:18.000Z","updated_at":"2018-10-24T11:23:50.000Z","artists":[{"id":3,"name":"A3","created_at":"2018-10-24T11:13:30.000Z","updated_at":"2018-10-24T11:13:30.000Z"},{"id":2,"name":"A2","created_at":"2018-10-24T11:13:28.000Z","updated_at":"2018-10-24T11:13:28.000Z"}]},{"id":3,"title":"S3","created_at":"2018-10-24T11:13:20.000Z","updated_at":"2018-10-24T11:43:22.000Z","artists":[]}]` Here you can see: the artists array is empty for S3. **What is the expected behavior?** Empty list/array when no objects are related. **Suggested solutions** Fix code for graphql ;)
priority
graphql data includes all relational objects when no relation is selected informations node js version npm version strapi version alpha database mysql operating system macos mojave what is the current behavior having a many to many relation graphql shows the list of all foreign elements of a relation when none is selected in an element steps to reproduce the problem fresh strapi install graphql create content types song with title string artist with name string add relation between song and artist many to many add some artists add some songs add in add in request in graphql query songs title artists name and get the result data songs title artists name title artists title artists name name name name here you can see song shows the complete list of artists although has no artists optional verify rest call correct result visit and get id title created at updated at artists id title created at updated at artists here you can see the artists array is empty for what is the expected behavior empty list array when no objects are related suggested solutions fix code for graphql
1
158,035
6,020,705,736
IssuesEvent
2017-06-07 17:01:52
timtrice/rrricanes
https://api.github.com/repos/timtrice/rrricanes
closed
broom::tidy Error: isTRUE(gpclibPermitStatus()) is not TRUE
Bugs High Priority
### Error Message ``` Error: isTRUE(gpclibPermitStatus()) is not TRUE ``` ### Reproducible Example ```r adv <- gis_advisory(key = "AL182012", advisory = "18") %>% gis_download() fcst <- adv$al182012_018_5day_pgn fcst@data$id <- rownames(fcst@data) fcst.points = broom::tidy(fcst, region = "id") ``` ### Traceback ``` 5: stop(sprintf(ngettext(length(r), "%s is not TRUE", "%s are not all TRUE"), ch), call. = FALSE, domain = NA) 4: stopifnot(isTRUE(gpclibPermitStatus())) 3: maptools::unionSpatialPolygons(cp, attr[, region]) 2: tidy.SpatialPolygonsDataFrame(fcst, region = "id") 1: broom::tidy(fcst, region = "id") ``` ### Session Info ``` R version 3.3.3 (2017-03-06) Platform: x86_64-pc-linux-gnu (64-bit) Running under: Debian GNU/Linux 8 (jessie) locale: [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=C LC_PAPER=en_US.UTF-8 LC_NAME=C [9] LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] maptools_0.9-2 sp_1.2-4 rgeos_0.3-23 rrricanes_0.2.0 ggplot2_2.2.1 dplyr_0.5.0 loaded via a namespace (and not attached): [1] Rcpp_0.12.10 git2r_0.18.0 plyr_1.8.4 tools_3.3.3 digest_0.6.12 [6] memoise_1.0.0 tibble_1.3.0 gtable_0.2.0 nlme_3.1-131 lattice_0.20-35 [11] psych_1.7.3.21 DBI_0.6-1 curl_2.5 rgdal_1.2-6 parallel_3.3.3 [16] rnaturalearthdata_0.1.0 withr_1.0.2 httr_1.2.1 stringr_1.2.0 devtools_1.12.0 [21] hms_0.3 grid_3.3.3 data.table_1.10.4 R6_2.2.0 foreign_0.8-67 [26] tidyr_0.6.1 readr_1.1.0 purrr_0.2.2 reshape2_1.4.2 magrittr_1.5 [31] scales_0.4.1 assertthat_0.2.0 mnormt_1.5-5 colorspace_1.3-2 labeling_0.3 [36] stringi_1.1.5 lazyeval_0.2.0 munsell_0.4.3 broom_0.4.2 ```
1.0
broom::tidy Error: isTRUE(gpclibPermitStatus()) is not TRUE - ### Error Message ``` Error: isTRUE(gpclibPermitStatus()) is not TRUE ``` ### Reproducible Example ```r adv <- gis_advisory(key = "AL182012", advisory = "18") %>% gis_download() fcst <- adv$al182012_018_5day_pgn fcst@data$id <- rownames(fcst@data) fcst.points = broom::tidy(fcst, region = "id") ``` ### Traceback ``` 5: stop(sprintf(ngettext(length(r), "%s is not TRUE", "%s are not all TRUE"), ch), call. = FALSE, domain = NA) 4: stopifnot(isTRUE(gpclibPermitStatus())) 3: maptools::unionSpatialPolygons(cp, attr[, region]) 2: tidy.SpatialPolygonsDataFrame(fcst, region = "id") 1: broom::tidy(fcst, region = "id") ``` ### Session Info ``` R version 3.3.3 (2017-03-06) Platform: x86_64-pc-linux-gnu (64-bit) Running under: Debian GNU/Linux 8 (jessie) locale: [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=C LC_PAPER=en_US.UTF-8 LC_NAME=C [9] LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] maptools_0.9-2 sp_1.2-4 rgeos_0.3-23 rrricanes_0.2.0 ggplot2_2.2.1 dplyr_0.5.0 loaded via a namespace (and not attached): [1] Rcpp_0.12.10 git2r_0.18.0 plyr_1.8.4 tools_3.3.3 digest_0.6.12 [6] memoise_1.0.0 tibble_1.3.0 gtable_0.2.0 nlme_3.1-131 lattice_0.20-35 [11] psych_1.7.3.21 DBI_0.6-1 curl_2.5 rgdal_1.2-6 parallel_3.3.3 [16] rnaturalearthdata_0.1.0 withr_1.0.2 httr_1.2.1 stringr_1.2.0 devtools_1.12.0 [21] hms_0.3 grid_3.3.3 data.table_1.10.4 R6_2.2.0 foreign_0.8-67 [26] tidyr_0.6.1 readr_1.1.0 purrr_0.2.2 reshape2_1.4.2 magrittr_1.5 [31] scales_0.4.1 assertthat_0.2.0 mnormt_1.5-5 colorspace_1.3-2 labeling_0.3 [36] stringi_1.1.5 lazyeval_0.2.0 munsell_0.4.3 broom_0.4.2 ```
priority
broom tidy error istrue gpclibpermitstatus is not true error message error istrue gpclibpermitstatus is not true reproducible example r adv gis download fcst adv pgn fcst data id rownames fcst data fcst points broom tidy fcst region id traceback stop sprintf ngettext length r s is not true s are not all true ch call false domain na stopifnot istrue gpclibpermitstatus maptools unionspatialpolygons cp attr tidy spatialpolygonsdataframe fcst region id broom tidy fcst region id session info r version platform pc linux gnu bit running under debian gnu linux jessie locale lc ctype en us utf lc numeric c lc time en us utf lc collate en us utf lc monetary en us utf lc messages c lc paper en us utf lc name c lc address c lc telephone c lc measurement en us utf lc identification c attached base packages stats graphics grdevices utils datasets methods base other attached packages maptools sp rgeos rrricanes dplyr loaded via a namespace and not attached rcpp plyr tools digest memoise tibble gtable nlme lattice psych dbi curl rgdal parallel rnaturalearthdata withr httr stringr devtools hms grid data table foreign tidyr readr purrr magrittr scales assertthat mnormt colorspace labeling stringi lazyeval munsell broom
1
136,477
5,283,515,567
IssuesEvent
2017-02-07 21:35:52
PowerlineApp/powerline-mobile
https://api.github.com/repos/PowerlineApp/powerline-mobile
closed
Open Staging Issues
bug P1 - High Priority Staging/Prod URGENT
RESOLVED - Unable to add a comment.... Spinning loader icon shows, then goes away after 15s, and comment is never added. RESOLVED - Unable to upvote/downvote a post from the Newsfeed.... Busy icon (three dots) shows and post is never upvoted. Trying to upvote/downvote a post from the Item Detail screen fails as well. RESOLVED - Unable to register a new group... User fills out form, the app hangs on the spinning loader, and then an error is thrown. RESOLVED - Unable to register with e-mail address... User fills out complete form and it hangs on final loading sequence... I'm assuming these issues are interconnected and there's a different between dev configuration / dev db and staging environment.
1.0
Open Staging Issues - RESOLVED - Unable to add a comment.... Spinning loader icon shows, then goes away after 15s, and comment is never added. RESOLVED - Unable to upvote/downvote a post from the Newsfeed.... Busy icon (three dots) shows and post is never upvoted. Trying to upvote/downvote a post from the Item Detail screen fails as well. RESOLVED - Unable to register a new group... User fills out form, the app hangs on the spinning loader, and then an error is thrown. RESOLVED - Unable to register with e-mail address... User fills out complete form and it hangs on final loading sequence... I'm assuming these issues are interconnected and there's a different between dev configuration / dev db and staging environment.
priority
open staging issues resolved unable to add a comment spinning loader icon shows then goes away after and comment is never added resolved unable to upvote downvote a post from the newsfeed busy icon three dots shows and post is never upvoted trying to upvote downvote a post from the item detail screen fails as well resolved unable to register a new group user fills out form the app hangs on the spinning loader and then an error is thrown resolved unable to register with e mail address user fills out complete form and it hangs on final loading sequence i m assuming these issues are interconnected and there s a different between dev configuration dev db and staging environment
1
214,393
7,272,693,647
IssuesEvent
2018-02-21 00:25:14
minio/mc
https://api.github.com/repos/minio/mc
closed
heal: Update support for mc admin heal command.
priority: high
``` mc admin heal --force --remove --recursive --advanced --fake ALIAS[/Bucket/Prefix] ``` When only ALIAS is passed, heal the server and all the buckets. When ALIAS/bucket is passed, heal the server and the specific bucket. When ALIAS/bucket/prefix is passed, heal the server, the specific bucket and all objects under this prefix level When --recursive is passed for ALIAS , heal all buckets, all prefix, all objects. When --advanced is a slow operation, which will read all object content and verify bitrot. When --remove is passed with --force, will delete irreparable objects.
1.0
heal: Update support for mc admin heal command. - ``` mc admin heal --force --remove --recursive --advanced --fake ALIAS[/Bucket/Prefix] ``` When only ALIAS is passed, heal the server and all the buckets. When ALIAS/bucket is passed, heal the server and the specific bucket. When ALIAS/bucket/prefix is passed, heal the server, the specific bucket and all objects under this prefix level When --recursive is passed for ALIAS , heal all buckets, all prefix, all objects. When --advanced is a slow operation, which will read all object content and verify bitrot. When --remove is passed with --force, will delete irreparable objects.
priority
heal update support for mc admin heal command mc admin heal force remove recursive advanced fake alias when only alias is passed heal the server and all the buckets when alias bucket is passed heal the server and the specific bucket when alias bucket prefix is passed heal the server the specific bucket and all objects under this prefix level when recursive is passed for alias heal all buckets all prefix all objects when advanced is a slow operation which will read all object content and verify bitrot when remove is passed with force will delete irreparable objects
1
606,858
18,769,502,224
IssuesEvent
2021-11-06 15:19:14
gucio321/d2d2s
https://api.github.com/repos/gucio321/d2d2s
closed
d2sitems: cannot parse some itm's daat
bug High Priority
getting strange issue while parsing data of the following items: - Grand Charm of life - Chipped Ruby Currently no info about frequency of the behaviour
1.0
d2sitems: cannot parse some itm's daat - getting strange issue while parsing data of the following items: - Grand Charm of life - Chipped Ruby Currently no info about frequency of the behaviour
priority
cannot parse some itm s daat getting strange issue while parsing data of the following items grand charm of life chipped ruby currently no info about frequency of the behaviour
1
333,075
10,115,041,142
IssuesEvent
2019-07-30 20:42:53
BendroCorp/bendrocorp-app
https://api.github.com/repos/BendroCorp/bendrocorp-app
opened
Settings Menu can't be opened
bug priority:high
The settings menu throws a console error and will not open. ![image](https://user-images.githubusercontent.com/1551020/62163752-ad371f00-b2e0-11e9-9fdd-b276cec2c6cb.png)
1.0
Settings Menu can't be opened - The settings menu throws a console error and will not open. ![image](https://user-images.githubusercontent.com/1551020/62163752-ad371f00-b2e0-11e9-9fdd-b276cec2c6cb.png)
priority
settings menu can t be opened the settings menu throws a console error and will not open
1
286,090
8,783,888,998
IssuesEvent
2018-12-20 08:02:31
projectacrn/acrn-hypervisor
https://api.github.com/repos/projectacrn/acrn-hypervisor
closed
Bootloader: Root Key-of-trust
area: hypervisor priority: high status: implemented type: feature
The hypervisor virtual boot loader shall carry a read'-only root key of trust for signed guest OS verification
1.0
Bootloader: Root Key-of-trust - The hypervisor virtual boot loader shall carry a read'-only root key of trust for signed guest OS verification
priority
bootloader root key of trust the hypervisor virtual boot loader shall carry a read only root key of trust for signed guest os verification
1
806,713
29,870,282,961
IssuesEvent
2023-06-20 08:02:27
Field-Passer/newFieldPasser-BE
https://api.github.com/repos/Field-Passer/newFieldPasser-BE
closed
feat: 회원서비스 - 로그아웃 기능 구현
For: API Priority: High Status: In Progress Type: Feature
## Description(설명) JWT와 Redis를 활용한 로그아웃 기능 구현 ## Tasks(New feature) - [x] 로그아웃 기능 구현 ## References
1.0
feat: 회원서비스 - 로그아웃 기능 구현 - ## Description(설명) JWT와 Redis를 활용한 로그아웃 기능 구현 ## Tasks(New feature) - [x] 로그아웃 기능 구현 ## References
priority
feat 회원서비스 로그아웃 기능 구현 description 설명 jwt와 redis를 활용한 로그아웃 기능 구현 tasks new feature 로그아웃 기능 구현 references
1
94,370
3,924,980,317
IssuesEvent
2016-04-22 17:09:58
mantidproject/mantid
https://api.github.com/repos/mantidproject/mantid
closed
Crystal field: the complex matrix of parameters should be zero initialised.
Component: Fitting Misc: Bugfix Priority: High
The `ComplexMatrix` is used to pass parameters to crystal field functions is allocated by the GSL and isn't initialized by default. Initialise it (and `ComplexVector` as well) in the constructor to be consistent with its double counterpart.
1.0
Crystal field: the complex matrix of parameters should be zero initialised. - The `ComplexMatrix` is used to pass parameters to crystal field functions is allocated by the GSL and isn't initialized by default. Initialise it (and `ComplexVector` as well) in the constructor to be consistent with its double counterpart.
priority
crystal field the complex matrix of parameters should be zero initialised the complexmatrix is used to pass parameters to crystal field functions is allocated by the gsl and isn t initialized by default initialise it and complexvector as well in the constructor to be consistent with its double counterpart
1
201,906
7,042,278,989
IssuesEvent
2017-12-30 10:02:54
NickBellamy/Testing-Grounds
https://api.github.com/repos/NickBellamy/Testing-Grounds
opened
Implement a Start menu
Priority: High Status: Available Type: Enhancement
Features should include: - Start - Quit "Options" can be added at a later date once the scope has been settled on. The Start menu should also have background music.
1.0
Implement a Start menu - Features should include: - Start - Quit "Options" can be added at a later date once the scope has been settled on. The Start menu should also have background music.
priority
implement a start menu features should include start quit options can be added at a later date once the scope has been settled on the start menu should also have background music
1
237,659
7,762,835,344
IssuesEvent
2018-06-01 14:42:39
AntoineSelectra/OPP3
https://api.github.com/repos/AntoineSelectra/OPP3
closed
Syncing with Zoho
High priority
- [x] Syncing does not work for records/11637 (almost fully completed) - do you why @AndrewTi ? - [x] with records/11638 (only name and fiscal code filled), sync seems to work but when trying to open the Contact in Zoho, it does not work --> it opens a Lead https://crm.zoho.com/crm/tab/Leads/1499247000293468109/ --> it says the Record is not available (probably because it tries to open a Lead with a Contact ID number)
1.0
Syncing with Zoho - - [x] Syncing does not work for records/11637 (almost fully completed) - do you why @AndrewTi ? - [x] with records/11638 (only name and fiscal code filled), sync seems to work but when trying to open the Contact in Zoho, it does not work --> it opens a Lead https://crm.zoho.com/crm/tab/Leads/1499247000293468109/ --> it says the Record is not available (probably because it tries to open a Lead with a Contact ID number)
priority
syncing with zoho syncing does not work for records almost fully completed do you why andrewti with records only name and fiscal code filled sync seems to work but when trying to open the contact in zoho it does not work it opens a lead it says the record is not available probably because it tries to open a lead with a contact id number
1
652,229
21,526,144,174
IssuesEvent
2022-04-28 18:39:23
isawnyu/pleiades-gazetteer
https://api.github.com/repos/isawnyu/pleiades-gazetteer
closed
show complex geometries on place and location maps instead of convex hulls: 5pts
enhancement priority: high maps
Currently, the maps shown on place and location pages display convex hulls around polyline and polygon geometries in locations, rather than the complete geometries themselves. These convex hulls are provided by code running in the plone when the javascript asks for the location geometries; they are not constructed by the map javascript itself. As a viewer of a map on a pleiades place or location page, I would like to see the complex geometry, not the convex hull. Note: I believe this behavior was introduced in order to avoid overwhelming browser clients with complex draw tasks for complex geometries (e.g., river systems imported from OSM). I doubt that this is any longer a concern with modern browsers and the javascript map frameworks we are using.
1.0
show complex geometries on place and location maps instead of convex hulls: 5pts - Currently, the maps shown on place and location pages display convex hulls around polyline and polygon geometries in locations, rather than the complete geometries themselves. These convex hulls are provided by code running in the plone when the javascript asks for the location geometries; they are not constructed by the map javascript itself. As a viewer of a map on a pleiades place or location page, I would like to see the complex geometry, not the convex hull. Note: I believe this behavior was introduced in order to avoid overwhelming browser clients with complex draw tasks for complex geometries (e.g., river systems imported from OSM). I doubt that this is any longer a concern with modern browsers and the javascript map frameworks we are using.
priority
show complex geometries on place and location maps instead of convex hulls currently the maps shown on place and location pages display convex hulls around polyline and polygon geometries in locations rather than the complete geometries themselves these convex hulls are provided by code running in the plone when the javascript asks for the location geometries they are not constructed by the map javascript itself as a viewer of a map on a pleiades place or location page i would like to see the complex geometry not the convex hull note i believe this behavior was introduced in order to avoid overwhelming browser clients with complex draw tasks for complex geometries e g river systems imported from osm i doubt that this is any longer a concern with modern browsers and the javascript map frameworks we are using
1
430,821
12,466,543,478
IssuesEvent
2020-05-28 15:38:02
oVirt/ovirt-web-ui
https://api.github.com/repos/oVirt/ovirt-web-ui
closed
Can't set disks as bootable
Flag: Needs QE Priority: High Severity: Medium Type: Bug
Disks cannot be set/unset as bootable. Also the first added disk should be set as bootable
1.0
Can't set disks as bootable - Disks cannot be set/unset as bootable. Also the first added disk should be set as bootable
priority
can t set disks as bootable disks cannot be set unset as bootable also the first added disk should be set as bootable
1
436,036
12,544,397,826
IssuesEvent
2020-06-05 17:08:11
poanetwork/blockscout
https://api.github.com/repos/poanetwork/blockscout
closed
Write a task that will delete non-consensus data by the schedule
indexing priority: high
After implementing of https://github.com/poanetwork/blockscout/pull/2886 Blockscout will never delete the non-consensus data from `token_transfers` and `logs` tables on-fly because of performance considerations. We need a task that will work with a schedule, let's say once a week, that will check the new non-consensus data and will delete all of them from the DB. The schedule should be configurable via ENV vars.
1.0
Write a task that will delete non-consensus data by the schedule - After implementing of https://github.com/poanetwork/blockscout/pull/2886 Blockscout will never delete the non-consensus data from `token_transfers` and `logs` tables on-fly because of performance considerations. We need a task that will work with a schedule, let's say once a week, that will check the new non-consensus data and will delete all of them from the DB. The schedule should be configurable via ENV vars.
priority
write a task that will delete non consensus data by the schedule after implementing of blockscout will never delete the non consensus data from token transfers and logs tables on fly because of performance considerations we need a task that will work with a schedule let s say once a week that will check the new non consensus data and will delete all of them from the db the schedule should be configurable via env vars
1
168,796
6,386,522,083
IssuesEvent
2017-08-03 11:21:32
ballerinalang/composer
https://api.github.com/repos/ballerinalang/composer
opened
Jump to source is only working for statements
Bug Priority:High Severity:Blocker
Release 0.91 Jump to source is only working for statements like assignments, variable declarations, break, continue etc. It is not working for loops, fork-join, try-catch etc.
1.0
Jump to source is only working for statements - Release 0.91 Jump to source is only working for statements like assignments, variable declarations, break, continue etc. It is not working for loops, fork-join, try-catch etc.
priority
jump to source is only working for statements release jump to source is only working for statements like assignments variable declarations break continue etc it is not working for loops fork join try catch etc
1
712,648
24,502,300,127
IssuesEvent
2022-10-10 13:43:03
AY2223S1-CS2103-F14-3/tp
https://api.github.com/repos/AY2223S1-CS2103-F14-3/tp
closed
As a user, I want to delete entries I have added in previously
type.Story priority.High
...so that I can change my mind about those entries.
1.0
As a user, I want to delete entries I have added in previously - ...so that I can change my mind about those entries.
priority
as a user i want to delete entries i have added in previously so that i can change my mind about those entries
1
307,618
9,419,413,878
IssuesEvent
2019-04-10 21:52:48
jdereus/labman
https://api.github.com/repos/jdereus/labman
reopened
Clarify save status of sample plating in interface
enhancement front-end priority:high scope:small
User asked "I have to save it [sample plate] where?", not understanding it is saved on the fly Raised in shotgun walkthrough 07/31/2018
1.0
Clarify save status of sample plating in interface - User asked "I have to save it [sample plate] where?", not understanding it is saved on the fly Raised in shotgun walkthrough 07/31/2018
priority
clarify save status of sample plating in interface user asked i have to save it where not understanding it is saved on the fly raised in shotgun walkthrough
1
827,252
31,762,154,238
IssuesEvent
2023-09-12 06:18:32
inspektor-gadget/inspektor-gadget
https://api.github.com/repos/inspektor-gadget/inspektor-gadget
closed
run: Support gadgets with `Attacher` interface
priority/high epic/containerized-gadgets
The initial support for containerized gadgets introduced in https://github.com/inspektor-gadget/inspektor-gadget/pull/1743 doesn't support gadgets that implement the [attacher interface](https://github.com/inspektor-gadget/inspektor-gadget/blob/0eeb7f19d53882e15dedf9247f6a331d4d1b0e14/pkg/operators/kubemanager/kubemanager.go#L47-L50) like trace dns and trace sni. Described in https://github.com/inspektor-gadget/inspektor-gadget/blob/main/docs/design/002-containerized-gadgets.md#socket_filter-programs ### Implementation options The current implementation of those gadgets is based on the networktracer (pkg/gadgets/internal/networktracer/tracer.go) that implements such a interface and attaches the eBPF programs to the different network namespaces of the containers when they're created. I believe we'd need to reused (or reimplement) the networktracer in the run command. Note that such interface is also used by the iterators programs in https://github.com/inspektor-gadget/inspektor-gadget/pull/1866
1.0
run: Support gadgets with `Attacher` interface - The initial support for containerized gadgets introduced in https://github.com/inspektor-gadget/inspektor-gadget/pull/1743 doesn't support gadgets that implement the [attacher interface](https://github.com/inspektor-gadget/inspektor-gadget/blob/0eeb7f19d53882e15dedf9247f6a331d4d1b0e14/pkg/operators/kubemanager/kubemanager.go#L47-L50) like trace dns and trace sni. Described in https://github.com/inspektor-gadget/inspektor-gadget/blob/main/docs/design/002-containerized-gadgets.md#socket_filter-programs ### Implementation options The current implementation of those gadgets is based on the networktracer (pkg/gadgets/internal/networktracer/tracer.go) that implements such a interface and attaches the eBPF programs to the different network namespaces of the containers when they're created. I believe we'd need to reused (or reimplement) the networktracer in the run command. Note that such interface is also used by the iterators programs in https://github.com/inspektor-gadget/inspektor-gadget/pull/1866
priority
run support gadgets with attacher interface the initial support for containerized gadgets introduced in doesn t support gadgets that implement the like trace dns and trace sni described in implementation options the current implementation of those gadgets is based on the networktracer pkg gadgets internal networktracer tracer go that implements such a interface and attaches the ebpf programs to the different network namespaces of the containers when they re created i believe we d need to reused or reimplement the networktracer in the run command note that such interface is also used by the iterators programs in
1
620,267
19,557,673,542
IssuesEvent
2022-01-03 12:02:18
bounswe/2021SpringGroup7
https://api.github.com/repos/bounswe/2021SpringGroup7
opened
CF-48 Edit Post
Status: In Progress Priority: High Frontend
User can edit his posts. Edit button is already created. I will implement the functionality.
1.0
CF-48 Edit Post - User can edit his posts. Edit button is already created. I will implement the functionality.
priority
cf edit post user can edit his posts edit button is already created i will implement the functionality
1
235,994
7,744,502,861
IssuesEvent
2018-05-29 15:33:15
vanilla-framework/vanilla-framework
https://api.github.com/repos/vanilla-framework/vanilla-framework
closed
The published NPM package should just be the contents of scss/
Priority: High
At the moment it includes 16MB of stuff that is irrelevant to the end user. ![screenshot_5](https://user-images.githubusercontent.com/25733845/40535835-2d4cb930-6002-11e8-9243-f4eda2b5b44e.png)
1.0
The published NPM package should just be the contents of scss/ - At the moment it includes 16MB of stuff that is irrelevant to the end user. ![screenshot_5](https://user-images.githubusercontent.com/25733845/40535835-2d4cb930-6002-11e8-9243-f4eda2b5b44e.png)
priority
the published npm package should just be the contents of scss at the moment it includes of stuff that is irrelevant to the end user
1
118,096
4,731,928,730
IssuesEvent
2016-10-19 05:12:14
FeraGroup/FTCVortexScoreCounter
https://api.github.com/repos/FeraGroup/FTCVortexScoreCounter
closed
latest pull from master branch does not work properly
bug High Priority
I did a "git pull master" today (10 /10/16) to test the latest code that has been checked in to the master branch. When I ran the project using Netbeans, I could see the settings window and assign gamepads to the vortex goals. However, the main Vortex Counter window was not visible and I saw errors in the run terminal of Netbeans: run: Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException at javax.swing.plaf.nimbus.NimbusStyle.validate(NimbusStyle.java:298) at javax.swing.plaf.nimbus.NimbusStyle.getValues(NimbusStyle.java:806) at javax.swing.plaf.nimbus.NimbusStyle.getInsets(NimbusStyle.java:485) at javax.swing.plaf.synth.SynthStyle.installDefaults(SynthStyle.java:913) at javax.swing.plaf.synth.SynthLookAndFeel.updateStyle(SynthLookAndFeel.java:265) at javax.swing.plaf.synth.SynthPanelUI.updateStyle(SynthPanelUI.java:117) at javax.swing.plaf.synth.SynthPanelUI.installDefaults(SynthPanelUI.java:100) at javax.swing.plaf.basic.BasicPanelUI.installUI(BasicPanelUI.java:56) at javax.swing.plaf.synth.SynthPanelUI.installUI(SynthPanelUI.java:62) at javax.swing.JComponent.setUI(JComponent.java:666) at javax.swing.JPanel.setUI(JPanel.java:153) at javax.swing.JPanel.updateUI(JPanel.java:126) at javax.swing.JPanel.<init>(JPanel.java:86) at javax.swing.JPanel.<init>(JPanel.java:109) at javax.swing.JPanel.<init>(JPanel.java:117) at javax.swing.JRootPane.createGlassPane(JRootPane.java:546) at javax.swing.JRootPane.<init>(JRootPane.java:366) at javax.swing.JFrame.createRootPane(JFrame.java:286) at javax.swing.JFrame.frameInit(JFrame.java:267) at javax.swing.JFrame.<init>(JFrame.java:190) at ftc.goal.counter.GoalCounterUI.<init>(GoalCounterUI.java:36) at ftc.goal.counter.GoalCounterUI$7.run(GoalCounterUI.java:982) at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:311) at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:756) at java.awt.EventQueue.access$500(EventQueue.java:97) at java.awt.EventQueue$3.run(EventQueue.java:709) at 
java.awt.EventQueue$3.run(EventQueue.java:703) at java.security.AccessController.doPrivileged(Native Method) at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:76) at java.awt.EventQueue.dispatchEvent(EventQueue.java:726) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:201) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93) at java.awt.EventDispatchThread.run(EventDispatchThread.java:82) Exception in thread "main" java.lang.NullPointerException at ftc.goal.counter.GoalCounterUI.spinnersync(GoalCounterUI.java:41) at ftc.goal.counter.JoystickTest.startShowingControllerData(JoystickTest.java:127) at ftc.goal.counter.JoystickTest.<init>(JoystickTest.java:89) at ftc.goal.counter.GoalCounterUI.main(GoalCounterUI.java:990) @afera15
1.0
latest pull from master branch does not work properly - I did a "git pull master" today (10 /10/16) to test the latest code that has been checked in to the master branch. When I ran the project using Netbeans, I could see the settings window and assign gamepads to the vortex goals. However, the main Vortex Counter window was not visible and I saw errors in the run terminal of Netbeans: run: Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException at javax.swing.plaf.nimbus.NimbusStyle.validate(NimbusStyle.java:298) at javax.swing.plaf.nimbus.NimbusStyle.getValues(NimbusStyle.java:806) at javax.swing.plaf.nimbus.NimbusStyle.getInsets(NimbusStyle.java:485) at javax.swing.plaf.synth.SynthStyle.installDefaults(SynthStyle.java:913) at javax.swing.plaf.synth.SynthLookAndFeel.updateStyle(SynthLookAndFeel.java:265) at javax.swing.plaf.synth.SynthPanelUI.updateStyle(SynthPanelUI.java:117) at javax.swing.plaf.synth.SynthPanelUI.installDefaults(SynthPanelUI.java:100) at javax.swing.plaf.basic.BasicPanelUI.installUI(BasicPanelUI.java:56) at javax.swing.plaf.synth.SynthPanelUI.installUI(SynthPanelUI.java:62) at javax.swing.JComponent.setUI(JComponent.java:666) at javax.swing.JPanel.setUI(JPanel.java:153) at javax.swing.JPanel.updateUI(JPanel.java:126) at javax.swing.JPanel.<init>(JPanel.java:86) at javax.swing.JPanel.<init>(JPanel.java:109) at javax.swing.JPanel.<init>(JPanel.java:117) at javax.swing.JRootPane.createGlassPane(JRootPane.java:546) at javax.swing.JRootPane.<init>(JRootPane.java:366) at javax.swing.JFrame.createRootPane(JFrame.java:286) at javax.swing.JFrame.frameInit(JFrame.java:267) at javax.swing.JFrame.<init>(JFrame.java:190) at ftc.goal.counter.GoalCounterUI.<init>(GoalCounterUI.java:36) at ftc.goal.counter.GoalCounterUI$7.run(GoalCounterUI.java:982) at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:311) at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:756) at java.awt.EventQueue.access$500(EventQueue.java:97) at 
java.awt.EventQueue$3.run(EventQueue.java:709) at java.awt.EventQueue$3.run(EventQueue.java:703) at java.security.AccessController.doPrivileged(Native Method) at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:76) at java.awt.EventQueue.dispatchEvent(EventQueue.java:726) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:201) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93) at java.awt.EventDispatchThread.run(EventDispatchThread.java:82) Exception in thread "main" java.lang.NullPointerException at ftc.goal.counter.GoalCounterUI.spinnersync(GoalCounterUI.java:41) at ftc.goal.counter.JoystickTest.startShowingControllerData(JoystickTest.java:127) at ftc.goal.counter.JoystickTest.<init>(JoystickTest.java:89) at ftc.goal.counter.GoalCounterUI.main(GoalCounterUI.java:990) @afera15
priority
latest pull from master branch does not work properly i did a git pull master today to test the latest code that has been checked in to the master branch when i ran the project using netbeans i could see the settings window and assign gamepads to the vortex goals however the main vortex counter window was not visible and i saw errors in the run terminal of netbeans run exception in thread awt eventqueue java lang nullpointerexception at javax swing plaf nimbus nimbusstyle validate nimbusstyle java at javax swing plaf nimbus nimbusstyle getvalues nimbusstyle java at javax swing plaf nimbus nimbusstyle getinsets nimbusstyle java at javax swing plaf synth synthstyle installdefaults synthstyle java at javax swing plaf synth synthlookandfeel updatestyle synthlookandfeel java at javax swing plaf synth synthpanelui updatestyle synthpanelui java at javax swing plaf synth synthpanelui installdefaults synthpanelui java at javax swing plaf basic basicpanelui installui basicpanelui java at javax swing plaf synth synthpanelui installui synthpanelui java at javax swing jcomponent setui jcomponent java at javax swing jpanel setui jpanel java at javax swing jpanel updateui jpanel java at javax swing jpanel jpanel java at javax swing jpanel jpanel java at javax swing jpanel jpanel java at javax swing jrootpane createglasspane jrootpane java at javax swing jrootpane jrootpane java at javax swing jframe createrootpane jframe java at javax swing jframe frameinit jframe java at javax swing jframe jframe java at ftc goal counter goalcounterui goalcounterui java at ftc goal counter goalcounterui run goalcounterui java at java awt event invocationevent dispatch invocationevent java at java awt eventqueue dispatcheventimpl eventqueue java at java awt eventqueue access eventqueue java at java awt eventqueue run eventqueue java at java awt eventqueue run eventqueue java at java security accesscontroller doprivileged native method at java security protectiondomain javasecurityaccessimpl 
dointersectionprivilege protectiondomain java at java awt eventqueue dispatchevent eventqueue java at java awt eventdispatchthread pumponeeventforfilters eventdispatchthread java at java awt eventdispatchthread pumpeventsforfilter eventdispatchthread java at java awt eventdispatchthread pumpeventsforhierarchy eventdispatchthread java at java awt eventdispatchthread pumpevents eventdispatchthread java at java awt eventdispatchthread pumpevents eventdispatchthread java at java awt eventdispatchthread run eventdispatchthread java exception in thread main java lang nullpointerexception at ftc goal counter goalcounterui spinnersync goalcounterui java at ftc goal counter joysticktest startshowingcontrollerdata joysticktest java at ftc goal counter joysticktest joysticktest java at ftc goal counter goalcounterui main goalcounterui java
1
318,461
9,692,791,686
IssuesEvent
2019-05-24 14:35:09
python/mypy
https://api.github.com/repos/python/mypy
closed
New semantic analyzer: max iteration count reached when base class is bad
bug new-semantic-analyzer priority-0-high
This code causes max iteration count error when using the new semantic analyzer: ```py class C: class E: pass def f(self) -> None: class D(self.E): # Should generate error here pass ```
1.0
New semantic analyzer: max iteration count reached when base class is bad - This code causes max iteration count error when using the new semantic analyzer: ```py class C: class E: pass def f(self) -> None: class D(self.E): # Should generate error here pass ```
priority
new semantic analyzer max iteration count reached when base class is bad this code causes max iteration count error when using the new semantic analyzer py class c class e pass def f self none class d self e should generate error here pass
1
601,446
18,409,330,818
IssuesEvent
2021-10-13 02:19:43
TNG-dev/tachi-server
https://api.github.com/repos/TNG-dev/tachi-server
closed
ReprocessOrphan is never called
High Priority
Looks like the code that unorphans a score is never called. We should probably hook that into the score import lifecycle.
1.0
ReprocessOrphan is never called - Looks like the code that unorphans a score is never called. We should probably hook that into the score import lifecycle.
priority
reprocessorphan is never called looks like the code that unorphans a score is never called we should probably hook that into the score import lifecycle
1
729,445
25,127,049,572
IssuesEvent
2022-11-09 12:33:44
AY2223S1-CS2103T-W11-3/tp
https://api.github.com/repos/AY2223S1-CS2103T-W11-3/tp
closed
As an artist eager to improve my craft, I can review past feedback given by customers
type.Story priority.High
... so that I can better the quality of my subsequent works.
1.0
As an artist eager to improve my craft, I can review past feedback given by customers - ... so that I can better the quality of my subsequent works.
priority
as an artist eager to improve my craft i can review past feedback given by customers so that i can better the quality of my subsequent works
1
603,122
18,528,248,157
IssuesEvent
2021-10-21 00:19:17
AY2122S1-CS2103T-F13-3/tp
https://api.github.com/repos/AY2122S1-CS2103T-F13-3/tp
closed
Add tests for Undo and Redo feature
priority.High severity.Medium
- [ ] Tests for Undo - [ ] Tests for Redo - [ ] Tests for Undoable Commands
1.0
Add tests for Undo and Redo feature - - [ ] Tests for Undo - [ ] Tests for Redo - [ ] Tests for Undoable Commands
priority
add tests for undo and redo feature tests for undo tests for redo tests for undoable commands
1
323,766
9,878,807,086
IssuesEvent
2019-06-24 08:33:10
Code-Poets/sheetstorm
https://api.github.com/repos/Code-Poets/sheetstorm
closed
AttributeError: 'NoneType' object has no attribute 'isdigit in `ReportListCreateProjectJoinView`
bug priorityy high
Bug occurs when using month navigation bar Should be done: ------------ - - - -
1.0
AttributeError: 'NoneType' object has no attribute 'isdigit in `ReportListCreateProjectJoinView` - Bug occurs when using month navigation bar Should be done: ------------ - - - -
priority
attributeerror nonetype object has no attribute isdigit in reportlistcreateprojectjoinview bug occurs when using month navigation bar should be done
1
766,513
26,886,233,377
IssuesEvent
2023-02-06 03:36:48
Wonderland-Mobile/Issue-Tracker
https://api.github.com/repos/Wonderland-Mobile/Issue-Tracker
closed
Pressing Cancel on Change Language results in a blank screen
bug High Priority
_[From the "Stinky and Loof in Wonderland" issues document]_ Clicking cancel on the change language menu just closes the menu and brings you to a blank screen until you press escape.
1.0
Pressing Cancel on Change Language results in a blank screen - _[From the "Stinky and Loof in Wonderland" issues document]_ Clicking cancel on the change language menu just closes the menu and brings you to a blank screen until you press escape.
priority
pressing cancel on change language results in a blank screen clicking cancel on the change language menu just closes the menu and brings you to a blank screen until you press escape
1
312,476
9,547,814,565
IssuesEvent
2019-05-02 01:25:16
phetsims/sun
https://api.github.com/repos/phetsims/sun
closed
redesign number spinner interaction
dev:a11y priority:1-top priority:2-high
From https://github.com/phetsims/gravity-force-lab-basics/issues/109 Previous work here was done in https://github.com/phetsims/gravity-force-lab-basics/issues/62. I'm unsure if this will help Basically the issue is that on change value text isn't ever communicated to voice over when using a mac. Tested on safari (thanks @KatieWoe) and chrome (thanks @chrisklus). There was lots of investigation done in https://github.com/phetsims/gravity-force-lab-basics/issues/109 that landed us here as the culprit. When @chrisklus and I went to the scenery-phet demo for `NumberPicker` and sun demo for `NumberSpinner`, we found even in the most basic examples that voiceover didn't provide the aria-valuetext. I was very out of the loop when this was originally implemented, and as a result I'm unsure if this has been around for a long time, or is because of work done in https://github.com/phetsims/scenery/issues/951. Tagging @jessegreenberg so he is aware.
2.0
redesign number spinner interaction - From https://github.com/phetsims/gravity-force-lab-basics/issues/109 Previous work here was done in https://github.com/phetsims/gravity-force-lab-basics/issues/62. I'm unsure if this will help Basically the issue is that on change value text isn't ever communicated to voice over when using a mac. Tested on safari (thanks @KatieWoe) and chrome (thanks @chrisklus). There was lots of investigation done in https://github.com/phetsims/gravity-force-lab-basics/issues/109 that landed us here as the culprit. When @chrisklus and I went to the scenery-phet demo for `NumberPicker` and sun demo for `NumberSpinner`, we found even in the most basic examples that voiceover didn't provide the aria-valuetext. I was very out of the loop when this was originally implemented, and as a result I'm unsure if this has been around for a long time, or is because of work done in https://github.com/phetsims/scenery/issues/951. Tagging @jessegreenberg so he is aware.
priority
redesign number spinner interaction from previous work here was done in i m unsure if this will help basically the issue is that on change value text isn t ever communicated to voice over when using a mac tested on safari thanks katiewoe and chrome thanks chrisklus there was lots of investigation done in that landed us here as the culprit when chrisklus and i went to the scenery phet demo for numberpicker and sun demo for numberspinner we found even in the most basic examples that voiceover didn t provide the aria valuetext i was very out of the loop when this was originally implemented and as a result i m unsure if this has been around for a long time or is because of work done in tagging jessegreenberg so he is aware
1
808,840
30,113,244,791
IssuesEvent
2023-06-30 09:25:08
ImbueNetwork/imbue
https://api.github.com/repos/ImbueNetwork/imbue
opened
Rutnime api for getting the project account id
enhancement Priority | High
Given a project_id calculate and returun the project account id using project_account method.
1.0
Rutnime api for getting the project account id - Given a project_id calculate and returun the project account id using project_account method.
priority
rutnime api for getting the project account id given a project id calculate and returun the project account id using project account method
1
33,398
2,764,710,431
IssuesEvent
2015-04-29 16:47:53
biocore/qiita
https://api.github.com/repos/biocore/qiita
closed
Problem with moi adding a study
bug metanalysis priority: high
The following shows up in the tornado console when I select a study from the study list: ```python ERROR:tornado.application:Uncaught exception in /moi-ws/ Traceback (most recent call last): File "/Users/yoshikivazquezbaeza/.virtualenvs/qiita/lib/python2.7/site-packages/tornado/websocket.py", line 303, in wrapper return callback(*args, **kwargs) File "/Users/yoshikivazquezbaeza/.virtualenvs/qiita/lib/python2.7/site-packages/tornado/web.py", line 2297, in wrapper return method(self, *args, **kwargs) File "/Users/yoshikivazquezbaeza/.virtualenvs/qiita/lib/python2.7/site-packages/moi/websocket.py", line 66, in on_message self.group.action(verb, args) File "/Users/yoshikivazquezbaeza/.virtualenvs/qiita/lib/python2.7/site-packages/moi/group.py", line 189, in action raise TypeError("args is unknown type: %s" % type(args)) TypeError: args is unknown type: <type 'dict'> ```
1.0
Problem with moi adding a study - The following shows up in the tornado console when I select a study from the study list: ```python ERROR:tornado.application:Uncaught exception in /moi-ws/ Traceback (most recent call last): File "/Users/yoshikivazquezbaeza/.virtualenvs/qiita/lib/python2.7/site-packages/tornado/websocket.py", line 303, in wrapper return callback(*args, **kwargs) File "/Users/yoshikivazquezbaeza/.virtualenvs/qiita/lib/python2.7/site-packages/tornado/web.py", line 2297, in wrapper return method(self, *args, **kwargs) File "/Users/yoshikivazquezbaeza/.virtualenvs/qiita/lib/python2.7/site-packages/moi/websocket.py", line 66, in on_message self.group.action(verb, args) File "/Users/yoshikivazquezbaeza/.virtualenvs/qiita/lib/python2.7/site-packages/moi/group.py", line 189, in action raise TypeError("args is unknown type: %s" % type(args)) TypeError: args is unknown type: <type 'dict'> ```
priority
problem with moi adding a study the following shows up in the tornado console when i select a study from the study list python error tornado application uncaught exception in moi ws traceback most recent call last file users yoshikivazquezbaeza virtualenvs qiita lib site packages tornado websocket py line in wrapper return callback args kwargs file users yoshikivazquezbaeza virtualenvs qiita lib site packages tornado web py line in wrapper return method self args kwargs file users yoshikivazquezbaeza virtualenvs qiita lib site packages moi websocket py line in on message self group action verb args file users yoshikivazquezbaeza virtualenvs qiita lib site packages moi group py line in action raise typeerror args is unknown type s type args typeerror args is unknown type
1
203,457
7,064,576,024
IssuesEvent
2018-01-06 08:40:08
razi-rais/blockchain
https://api.github.com/repos/razi-rais/blockchain
closed
Timestamp Coloring on Dashboard
front-end high priority
Add timestamp coloring for the timestamp coming from the wikipedia API Within 24 hours: red Within 48 hours: blue 48 hours +: green
1.0
Timestamp Coloring on Dashboard - Add timestamp coloring for the timestamp coming from the wikipedia API Within 24 hours: red Within 48 hours: blue 48 hours +: green
priority
timestamp coloring on dashboard add timestamp coloring for the timestamp coming from the wikipedia api within hours red within hours blue hours green
1
697,764
23,952,272,484
IssuesEvent
2022-09-12 12:31:40
fractal-analytics-platform/fractal-tasks-core
https://api.github.com/repos/fractal-analytics-platform/fractal-tasks-core
closed
Have an example of Fractal running on a public dataset
High Priority
Let's have at least one example in the main Fractal repo that runs on a public dataset (see https://github.com/fractal-analytics-platform/fractal/issues/163) and produces relevant outputs that would be fitting for a Fractal demo.
1.0
Have an example of Fractal running on a public dataset - Let's have at least one example in the main Fractal repo that runs on a public dataset (see https://github.com/fractal-analytics-platform/fractal/issues/163) and produces relevant outputs that would be fitting for a Fractal demo.
priority
have an example of fractal running on a public dataset let s have at least one example in the main fractal repo that runs on a public dataset see and produces relevant outputs that would be fitting for a fractal demo
1
443,788
12,799,662,576
IssuesEvent
2020-07-02 15:44:42
BlueMap-Minecraft/BlueMap
https://api.github.com/repos/BlueMap-Minecraft/BlueMap
closed
string index out of bounds error on load of sponge plugin
bug high priority module: core
Hey, I am getting this error on the load of the sponge plugin: ``` java.lang.StringIndexOutOfBoundsException: String index out of range: 0 at java.lang.String.charAt(String.java:658) ~[?:1.8.0_162] at de.bluecolored.bluemap.core.resourcepack.BlockModelResource$Builder.getTexture(BlockModelResource.java:431) ~[BlockModelResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockModelResource$Builder.getTexture(BlockModelResource.java:434) ~[BlockModelResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockModelResource$Builder.buildNoReset(BlockModelResource.java:306) ~[BlockModelResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockModelResource$Builder.buildNoReset(BlockModelResource.java:287) ~[BlockModelResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockModelResource$Builder.buildNoReset(BlockModelResource.java:287) ~[BlockModelResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockModelResource$Builder.build(BlockModelResource.java:258) ~[BlockModelResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockStateResource$Builder.loadModel(BlockStateResource.java:255) ~[BlockStateResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockStateResource$Builder.loadModels(BlockStateResource.java:235) ~[BlockStateResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockStateResource$Builder.buildForge(BlockStateResource.java:401) ~[BlockStateResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockStateResource$Builder.build(BlockStateResource.java:171) ~[BlockStateResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.ResourcePack.load(ResourcePack.java:160) ~[ResourcePack.class:?] at de.bluecolored.bluemap.core.resourcepack.ResourcePack.load(ResourcePack.java:123) ~[ResourcePack.class:?] at de.bluecolored.bluemap.common.plugin.Plugin.load(Plugin.java:165) ~[Plugin.class:?] 
at de.bluecolored.bluemap.sponge.SpongePlugin.lambda$onServerStart$0(SpongePlugin.java:100) ~[SpongePlugin.class:?] at org.spongepowered.api.scheduler.Task$Builder.lambda$execute$0(Task.java:139) ~[Task$Builder.class:1.12.2-2838-7.2.1] at org.spongepowered.common.scheduler.SchedulerBase.lambda$startTask$0(SchedulerBase.java:197) ~[SchedulerBase.class:1.12.2-2838-7.2.1] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_162] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_162] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_162] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_162] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162] ``` Any idea what is causing it?
1.0
string index out of bounds error on load of sponge plugin - Hey, I am getting this error on the load of the sponge plugin: ``` java.lang.StringIndexOutOfBoundsException: String index out of range: 0 at java.lang.String.charAt(String.java:658) ~[?:1.8.0_162] at de.bluecolored.bluemap.core.resourcepack.BlockModelResource$Builder.getTexture(BlockModelResource.java:431) ~[BlockModelResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockModelResource$Builder.getTexture(BlockModelResource.java:434) ~[BlockModelResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockModelResource$Builder.buildNoReset(BlockModelResource.java:306) ~[BlockModelResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockModelResource$Builder.buildNoReset(BlockModelResource.java:287) ~[BlockModelResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockModelResource$Builder.buildNoReset(BlockModelResource.java:287) ~[BlockModelResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockModelResource$Builder.build(BlockModelResource.java:258) ~[BlockModelResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockStateResource$Builder.loadModel(BlockStateResource.java:255) ~[BlockStateResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockStateResource$Builder.loadModels(BlockStateResource.java:235) ~[BlockStateResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockStateResource$Builder.buildForge(BlockStateResource.java:401) ~[BlockStateResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.BlockStateResource$Builder.build(BlockStateResource.java:171) ~[BlockStateResource$Builder.class:?] at de.bluecolored.bluemap.core.resourcepack.ResourcePack.load(ResourcePack.java:160) ~[ResourcePack.class:?] at de.bluecolored.bluemap.core.resourcepack.ResourcePack.load(ResourcePack.java:123) ~[ResourcePack.class:?] 
at de.bluecolored.bluemap.common.plugin.Plugin.load(Plugin.java:165) ~[Plugin.class:?] at de.bluecolored.bluemap.sponge.SpongePlugin.lambda$onServerStart$0(SpongePlugin.java:100) ~[SpongePlugin.class:?] at org.spongepowered.api.scheduler.Task$Builder.lambda$execute$0(Task.java:139) ~[Task$Builder.class:1.12.2-2838-7.2.1] at org.spongepowered.common.scheduler.SchedulerBase.lambda$startTask$0(SchedulerBase.java:197) ~[SchedulerBase.class:1.12.2-2838-7.2.1] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_162] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_162] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_162] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_162] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162] ``` Any idea what is causing it?
priority
string index out of bounds error on load of sponge plugin hey i am getting this error on the load of the sponge plugin java lang stringindexoutofboundsexception string index out of range at java lang string charat string java at de bluecolored bluemap core resourcepack blockmodelresource builder gettexture blockmodelresource java at de bluecolored bluemap core resourcepack blockmodelresource builder gettexture blockmodelresource java at de bluecolored bluemap core resourcepack blockmodelresource builder buildnoreset blockmodelresource java at de bluecolored bluemap core resourcepack blockmodelresource builder buildnoreset blockmodelresource java at de bluecolored bluemap core resourcepack blockmodelresource builder buildnoreset blockmodelresource java at de bluecolored bluemap core resourcepack blockmodelresource builder build blockmodelresource java at de bluecolored bluemap core resourcepack blockstateresource builder loadmodel blockstateresource java at de bluecolored bluemap core resourcepack blockstateresource builder loadmodels blockstateresource java at de bluecolored bluemap core resourcepack blockstateresource builder buildforge blockstateresource java at de bluecolored bluemap core resourcepack blockstateresource builder build blockstateresource java at de bluecolored bluemap core resourcepack resourcepack load resourcepack java at de bluecolored bluemap core resourcepack resourcepack load resourcepack java at de bluecolored bluemap common plugin plugin load plugin java at de bluecolored bluemap sponge spongeplugin lambda onserverstart spongeplugin java at org spongepowered api scheduler task builder lambda execute task java at org spongepowered common scheduler schedulerbase lambda starttask schedulerbase java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor 
worker run threadpoolexecutor java at java lang thread run thread java any idea what is causing it
1
475,824
13,726,334,762
IssuesEvent
2020-10-03 23:07:32
yicheng-Pan/iwvg-devops-yicheng-pan
https://api.github.com/repos/yicheng-Pan/iwvg-devops-yicheng-pan
closed
Travis-CI
:15m points: 0.5 priority: high type: enhancement
Continuous integration with Travis-CI. Include Badge in README with link to Travis-CI account.
1.0
Travis-CI - Continuous integration with Travis-CI. Include Badge in README with link to Travis-CI account.
priority
travis ci continuous integration with travis ci include badge in readme with link to travis ci account
1
735,668
25,409,488,497
IssuesEvent
2022-11-22 17:42:42
Pictalk-speech-made-easy/pictalk-frontend
https://api.github.com/repos/Pictalk-speech-made-easy/pictalk-frontend
closed
Navbar should indicate which space we are on. (perso/shared/public)
Graphic enhancement adri High priority
1. Instead of multiple buttons of different spaces => One dropdown of different spaces. Dropdown indicates the space we are in. 2. Additionally we could slightly color the navbar depending on the space we are in
1.0
Navbar should indicate which space we are on. (perso/shared/public) - 1. Instead of multiple buttons of different spaces => One dropdown of different spaces. Dropdown indicates the space we are in. 2. Additionally we could slightly color the navbar depending on the space we are in
priority
navbar should indicate which space we are on perso shared public instead of multiple buttons of different spaces one dropdown of different spaces dropdown indicates the space we are in additionally we could slightly color the navbar depending on the space we are in
1
261,043
8,223,131,215
IssuesEvent
2018-09-06 09:38:41
GCE-NEIIST/GCE-NEIIST-webapp
https://api.github.com/repos/GCE-NEIIST/GCE-NEIIST-webapp
opened
Mobile optimizations
Category: User Interface Priority: High Type: Enhancement help wanted
-On Thesis: * Can't see the "Dissertation separator". * Course Tabs override theses' text. -Can't navigate through routes. -Takes a lot of time to load (use ng build --prod).
1.0
Mobile optimizations - -On Thesis: * Can't see the "Dissertation separator". * Course Tabs override theses' text. -Can't navigate through routes. -Takes a lot of time to load (use ng build --prod).
priority
mobile optimizations on thesis can t see the dissertation separator course tabs override theses text can t navigate through routes takes a lot of time to load use ng build prod
1
740,573
25,758,576,662
IssuesEvent
2022-12-08 18:23:40
bounswe/bounswe2022group4
https://api.github.com/repos/bounswe/bounswe2022group4
closed
Backend: Implementation of the Chat API
Category - To Do Priority - High Status: In Progress Difficulty - Hard Language - Python Backend
Users shall be able to send a message to another user via our Chat API. Three endpoints will be implemented: POST /api/chat/send/message : To send a message from one user to a specified user. POST /api/chat/fetch/message : To fetch messages which were sent by a specified user. GET /api/chat/fetch/users : To fetch the list of users that the current user has messaged before.
1.0
Backend: Implementation of the Chat API - Users shall be able to send a message to another user via our Chat API. Three endpoints will be implemented: POST /api/chat/send/message : To send a message from one user to a specified user. POST /api/chat/fetch/message : To fetch messages which were sent by a specified user. GET /api/chat/fetch/users : To fetch the list of users that the current user has messaged before.
priority
backend implementation of the chat api users shall be able to send message to another user via our chat api three endpoints will be implemented post api chat send message to send message from one user to specified user post api chat fetch message to fetch messages which sent by a specified user get api chat fetch users to fetch list of users that current user has messaged before
1
170,190
6,425,970,145
IssuesEvent
2017-08-09 16:25:45
guardianproject/phoneypot
https://api.github.com/repos/guardianproject/phoneypot
closed
Create "Alert Log" user interface to show reports of intrusion detection
enhancement for review high-priority
In-app "report" view. The pin-gated reporting view would display the timestamped list of which sensors alarmed at which times, ideally with image previews for all alarms. with preview, graph of severity/level, appropriate for type of sensor
1.0
Create "Alert Log" user interface to show reports of intrusion detection - In-app "report" view. The pin-gated reporting view would display the timestamped list of which sensors alarmed at which times, ideally with image previews for all alarms. with preview, graph of severity/level, appropriate for type of sensor
priority
create alert log user interface to show reports of intrusion detection in app report view the pin gated reporting view would display the timestamped list of which sensors alarmed at which times ideally with image previews for all alarms with preview graph of severity level appropriate for type of sensor
1
38,220
2,842,261,475
IssuesEvent
2015-05-28 08:14:59
soi-toolkit/soi-toolkit-mule
https://api.github.com/repos/soi-toolkit/soi-toolkit-mule
closed
Add support for Mule ESB EE v3.3.1
AffectsVersion-v0.6.0 BackwardCompatibility-Yes Component-commons-poms Milestone-Release0.6.2 Priority-High Type-Enhancement
Original [issue 304](https://code.google.com/p/soi-toolkit/issues/detail?id=304) created by soi-toolkit on 2012-10-09T16:25:14.000Z: Add support for Mule ESB EE v3.3.1 and its caching capabilities and other features.
1.0
Add support for Mule ESB EE v3.3.1 - Original [issue 304](https://code.google.com/p/soi-toolkit/issues/detail?id=304) created by soi-toolkit on 2012-10-09T16:25:14.000Z: Add support for Mule ESB EE v3.3.1 and its caching capabilities and other features.
priority
add support for mule esb ee original created by soi toolkit on add support for mule esb ee and its caching capabilities and other features
1
449,617
12,972,235,312
IssuesEvent
2020-07-21 12:16:26
minetest/minetest
https://api.github.com/repos/minetest/minetest
closed
Confirmation dialog broken on Android
Android Bug High priority
##### Minetest version ``` 5.3.0 ``` Posted on behalf of this user: https://twitter.com/SoBrasov/status/1282035117662273536 ##### OS / Hardware Operating system: Android 10 CPU: ? ##### Summary > [there was] a major issue on Android 10 - usernames and passwords could not be submitted from any text field. With the latest update this was fixed. Except for new users which need to confirm the password. And that field has the same issue. ##### Steps to reproduce Unknown
1.0
Confirmation dialog broken on Android - ##### Minetest version ``` 5.3.0 ``` Posted on behalf of this user: https://twitter.com/SoBrasov/status/1282035117662273536 ##### OS / Hardware Operating system: Android 10 CPU: ? ##### Summary > [there was] a major issue on Android 10 - usernames and passwords could not be submitted from any text field. With the latest update this was fixed. Except for new users which need to confirm the password. And that field has the same issue. ##### Steps to reproduce Unknown
priority
confirmation dialog broken on android minetest version posted on behalf of this user os hardware operating system android cpu summary a major issue on android usernames and passwords could not be submitted from any text field with the latest update this was fixed except for new users which need to confirm the password and that field has the same issue steps to reproduce unknown
1
86,135
3,702,876,346
IssuesEvent
2016-02-29 18:22:53
crosswire/xiphos
https://api.github.com/repos/crosswire/xiphos
closed
No attribute for title center
auto-migrated enhancement high priority sourceforge
Hi, I tried many ways to move the title part under sections ("s,s1,s2") to the center. It won't work in any way. I used Xiphos as the viewer. Do you have any idea? Please share it. Thanks in advance. Reported by: *anonymous Original Ticket: [gnomesword/bugs/208](https://sourceforge.net/p/gnomesword/bugs/208)
1.0
No attribute for title center - Hi, I tried in many ways to move the title part under sections("s,s1,s2") to center. It won't work out on any way. I used Xiphos as viewer. Do you have any idea, Please share it. Thanks in advance. Reported by: *anonymous Original Ticket: [gnomesword/bugs/208](https://sourceforge.net/p/gnomesword/bugs/208)
priority
no attribute for title center hi i tried in many ways to move the title part under sections s to center it won t work out on any way i used xiphos as viewer do you have any idea please share it thanks in advance reported by anonymous original ticket
1
820,448
30,772,403,522
IssuesEvent
2023-07-31 01:44:13
steedos/steedos-platform
https://api.github.com/repos/steedos/steedos-platform
closed
[Bug]: Abnormal styling when selecting values in a tree-structured related-table field
bug done priority: High
### Description <img width="1920" alt="image" src="https://github.com/steedos/steedos-platform/assets/26241897/da357e62-e73e-4cd2-93b1-f3f2f1be8d28"> ### Steps To Reproduce Create a tree-structured object and select the parent-object field ### Version 2.5.9
1.0
[Bug]: Abnormal styling when selecting values in a tree-structured related-table field - ### Description <img width="1920" alt="image" src="https://github.com/steedos/steedos-platform/assets/26241897/da357e62-e73e-4cd2-93b1-f3f2f1be8d28"> ### Steps To Reproduce Create a tree-structured object and select the parent-object field ### Version 2.5.9
priority
abnormal styling when selecting values in a tree structured related table field description img width alt image src steps to reproduce create a tree structured object and select the parent object field version
1
157,961
6,019,291,767
IssuesEvent
2017-06-07 14:14:37
fedora-infra/bodhi
https://api.github.com/repos/fedora-infra/bodhi
closed
Modify the CLI to work with multiple content types
Client Gibson High priority Refactor RFE
We will need to modify the CLI to handle multiple content types, once #1327 is done.
1.0
Modify the CLI to work with multiple content types - We will need to modify the CLI to handle multiple content types, once #1327 is done.
priority
modify the cli to work with multiple content types we will need to modify the cli to handle multiple content types once is done
1
374,949
11,097,702,377
IssuesEvent
2019-12-16 13:53:21
ess-dmsc/nexus-constructor
https://api.github.com/repos/ess-dmsc/nexus-constructor
closed
Allow specifying units when using a Kafka stream as a value
high priority tobias wish list
As a value for a field or transformation magnitude. My initial thought is to add an edit box for units to the widget when "kafka stream" option is selected, unless someone can already think of a use case for other attributes on datasets created by filewriter modules, in which case we need something more complicated. Validation should not be forgotten, we already have a validator for units. Related to DM-1789, requested by Tobias.
1.0
Allow specifying units when using a Kafka stream as a value - As a value for a field or transformation magnitude. My initial thought is to add an edit box for units to the widget when "kafka stream" option is selected, unless someone can already think of a use case for other attributes on datasets created by filewriter modules, in which case we need something more complicated. Validation should not be forgotten, we already have a validator for units. Related to DM-1789, requested by Tobias.
priority
allow specifying units when using a kafka stream as a value as a value for a field or transformation magnitude my initial thought is to add an edit box for units to the widget when kafka stream option is selected unless someone can already think of a use case for other attributes on datasets created by filewriter modules in which case we need something more complicated validation should not be forgotten we already have a validator for units related to dm requested by tobias
1
556,979
16,496,950,734
IssuesEvent
2021-05-25 11:24:05
IATI/ckanext-iati
https://api.github.com/repos/IATI/ckanext-iati
opened
Automatically generate API Key for new user accounts
High priority Q2 bug
When new user accounts are created the API Key is not getting generated automatically. **Expected behaviour** 1. When a new user registered via the _Register_ button the API Key should be automatically generated for them 2. When a sysadmin adds a new member to a publisher using the below screen the new user should have an automatic API key generated for them ![image](https://user-images.githubusercontent.com/74558657/119489865-0f81cf00-bd54-11eb-9ea9-9320171bd8a6.png)
1.0
Automatically generate API Key for new user accounts - When new user accounts are created the API Key is not getting generated automatically. **Expected behaviour** 1. When a new user registered via the _Register_ button the API Key should be automatically generated for them 2. When a sysadmin adds a new member to a publisher using the below screen the new user should have an automatic API key generated for them ![image](https://user-images.githubusercontent.com/74558657/119489865-0f81cf00-bd54-11eb-9ea9-9320171bd8a6.png)
priority
automatically generate api key for new user accounts when new user accounts are created the api key is not getting generated automatically expected behaviour when a new user registered via the register button the api key should be automatically generated for them when a sysadmin adds a new member to a publisher using the below screen the new user should have an automatic api key generated for them
1
479,639
13,804,083,224
IssuesEvent
2020-10-11 07:07:40
AY2021S1-TIC4001-2/tp
https://api.github.com/repos/AY2021S1-TIC4001-2/tp
closed
As a user I can add an expense category
priority.High type.Story
... so that I can classify an expense under a category.
1.0
As a user I can add an expense category - ... so that I can classify an expense under a category.
priority
as a user i can add an expense category so that i can classify an expense under a category
1
594,240
18,041,482,993
IssuesEvent
2021-09-18 05:32:13
francheska-vicente/cssweng
https://api.github.com/repos/francheska-vicente/cssweng
closed
When chosen time text field is blank, Javascript error occurs
bug priority: high severity: high issue: validation
### Summary: As the Chosen Time text field in Main Booking Screen and Room Management Screen allows inputs, it is possible to delete the text inside. This results in a Javascript error, causing the app to malfunction. ### Steps to Reproduce: 1. Go to Room Management or any Main Booking Screen 2. Delete text in Chosen Time field ### Visual Proof: ![image](https://user-images.githubusercontent.com/75743382/133723012-bfd8d537-5a0a-4de4-8a17-6925d1041238.png) ### Expected Results: - Text cannot be deleted ### Actual Results: - Javascript error | Additional Information | | | ----------- | ----------- | | Platform | V8 engine (Google) | | Operating System | Windows 10 |
1.0
When chosen time text field is blank, Javascript error occurs - ### Summary: As the Chosen Time text field in Main Booking Screen and Room Management Screen allows inputs, it is possible to delete the text inside. This results in a Javascript error, causing the app to malfunction. ### Steps to Reproduce: 1. Go to Room Management or any Main Booking Screen 2. Delete text in Chosen Time field ### Visual Proof: ![image](https://user-images.githubusercontent.com/75743382/133723012-bfd8d537-5a0a-4de4-8a17-6925d1041238.png) ### Expected Results: - Text cannot be deleted ### Actual Results: - Javascript error | Additional Information | | | ----------- | ----------- | | Platform | V8 engine (Google) | | Operating System | Windows 10 |
priority
when chosen time text field is blank javascript error occurs summary as the chosen time text field in main booking screen and room management screen allows inputs it is possible to delete the text inside this results in a javascript error causing the app to malfunction steps to reproduce go to room management or any main booking screen delete text in chosen time field visual proof expected results text cannot be deleted actual results javascript error additional information platform engine google operating system windows
1
807,855
30,021,265,959
IssuesEvent
2023-06-26 23:48:14
curiouslearning/FeedTheMonsterJS
https://api.github.com/repos/curiouslearning/FeedTheMonsterJS
closed
Research possibility to automate entire build and deployment process
High Priority
Ideally, we would like to get to a state where we can build and deploy new versions of FTM whenever changes are made to the Google Sheet that contains all the content and level information with only manual intervention when necessary. Would this be possible? How would we achieve it? What tools or services might we need? What would we need to build ourselves? Would we need to build anything ourselves? What supplementary services might we need to build to monitor the automated process?
1.0
Research possibility to automate entire build and deployment process - Ideally, we would like to get to a state where we can build and deploy new versions of FTM whenever changes are made to the Google Sheet that contains all the content and level information with only manual intervention when necessary. Would this be possible? How would we achieve it? What tools or services might we need? What would we need to build ourselves? Would we need to build anything ourselves? What supplementary services might we need to build to monitor the automated process?
priority
research possibility to automate entire build and deployment process ideally we would like to get to a state where we can build and deploy new versions of ftm whenever changes are made to the google sheet that contains all the content and level information with only manual intervention when necessary would this be possible how would we achieve it what tools or services might we need what would we need to build ourselves would we need to build anything ourselves what supplementary services might we need to build to monitor the automated process
1
252,590
8,037,926,842
IssuesEvent
2018-07-30 14:04:59
openshiftio/openshift.io
https://api.github.com/repos/openshiftio/openshift.io
closed
'popitem(): dictionary is empty'
SEV2-high area/analytics area/analytics/ingestion priority/P4 team/analytics type/bug
https://errortracking.prod-preview.openshift.io/openshift_io/fabric8-analytics-production/issues/5921/ ``` KeyError: 'popitem(): dictionary is empty' File "celery/app/trace.py", line 375, in trace_task R = retval = fun(*args, **kwargs) File "celery/app/trace.py", line 632, in __protected_call__ return self.run(*args, **kwargs) File "selinon/task_envelope.py", line 169, in run raise self.retry(max_retries=0, exc=exc) File "celery/app/task.py", line 668, in retry raise_with_context(exc) File "selinon/task_envelope.py", line 114, in run result = task.run(node_args) File "f8a_worker/base.py", line 106, in run raise exc File "f8a_worker/base.py", line 81, in run result = self.execute(node_args) File "f8a_worker/workers/dependency_snapshot.py", line 102, in execute resolved = self._resolve_dependency(ecosystem, dep) File "f8a_worker/workers/dependency_snapshot.py", line 74, in _resolve_dependency package, version = pkgspec.popitem() ```
1.0
'popitem(): dictionary is empty' - https://errortracking.prod-preview.openshift.io/openshift_io/fabric8-analytics-production/issues/5921/ ``` KeyError: 'popitem(): dictionary is empty' File "celery/app/trace.py", line 375, in trace_task R = retval = fun(*args, **kwargs) File "celery/app/trace.py", line 632, in __protected_call__ return self.run(*args, **kwargs) File "selinon/task_envelope.py", line 169, in run raise self.retry(max_retries=0, exc=exc) File "celery/app/task.py", line 668, in retry raise_with_context(exc) File "selinon/task_envelope.py", line 114, in run result = task.run(node_args) File "f8a_worker/base.py", line 106, in run raise exc File "f8a_worker/base.py", line 81, in run result = self.execute(node_args) File "f8a_worker/workers/dependency_snapshot.py", line 102, in execute resolved = self._resolve_dependency(ecosystem, dep) File "f8a_worker/workers/dependency_snapshot.py", line 74, in _resolve_dependency package, version = pkgspec.popitem() ```
priority
popitem dictionary is empty keyerror popitem dictionary is empty file celery app trace py line in trace task r retval fun args kwargs file celery app trace py line in protected call return self run args kwargs file selinon task envelope py line in run raise self retry max retries exc exc file celery app task py line in retry raise with context exc file selinon task envelope py line in run result task run node args file worker base py line in run raise exc file worker base py line in run result self execute node args file worker workers dependency snapshot py line in execute resolved self resolve dependency ecosystem dep file worker workers dependency snapshot py line in resolve dependency package version pkgspec popitem
1
186,226
6,734,496,133
IssuesEvent
2017-10-18 18:15:17
BaderLab/EnrichmentMapApp
https://api.github.com/repos/BaderLab/EnrichmentMapApp
opened
Different results when create EM using command vs using the interface (more specifically different results when using edb vs excel files)
high_priority
Creating an EM using a command and edb file creates a different network than when I create it using the interface. It is a rounding issue. For my example, In the EM created with command and the edb file there are 6 additional genesets. All six have an FDR of 0.01 (which is the specified threshold). When you look in the gsea results file all 6 genesets have FDR slightly over 0.01 but if you look in the edb file the p-value has been rounded by GSEA. **This is not an EM bug - the two different file types represent slightly different results because of the round** Should we eliminate the edb option? That is how GSEA interfaces directly with EM. Can we just put a disclaimer?
1.0
Different results when create EM using command vs using the interface (more specifically different results when using edb vs excel files) - Creating an EM using a command and edb file creates a different network than when I create it using the interface. It is a rounding issue. For my example, In the EM created with command and the edb file there are 6 additional genesets. All six have an FDR of 0.01 (which is the specified threshold). When you look in the gsea results file all 6 genesets have FDR slightly over 0.01 but if you look in the edb file the p-value has been rounded by GSEA. **This is not an EM bug - the two different file types represent slightly different results because of the round** Should we eliminate the edb option? That is how GSEA interfaces directly with EM. Can we just put a disclaimer?
priority
different results when create em using command vs using the interface more specifically different results when using edb vs excel files creating an em using a command and edb file creates a different network than when i create it using the interface it is a rounding issue for my example in the em created with command and the edb file there are additional genesets all six have an fdr of which is the specified threshold when you look in the gsea results file all genesets have fdr slightly over but if you look in the edb file the p value has been rounded by gsea this is not an em bug the two different file types represent slightly different results because of the round should we eliminate the edb option that is how gsea interfaces directly with em can we just put a disclaimer
1
568,896
16,990,210,821
IssuesEvent
2021-06-30 19:21:59
gymh/windows
https://api.github.com/repos/gymh/windows
opened
Url is invalid
Waiting for answers .... ✅ Confirmed ❗ high Priority 🐞 Bug
Hallo @philippdormann, könntest du auf deiner Domain Url-Weiterleitungen für diese Links erstellen und diese Möglichst kurz und ohne sonderzeichen halten? Links: https://gymh.philippdormann.de/vertretungsplan/?f=&display-lehrer-full&d=mo https://gymh.philippdormann.de/vertretungsplan/?f=&display-lehrer-full&d=di https://gymh.philippdormann.de/vertretungsplan/?f=&display-lehrer-full&d=mi https://gymh.philippdormann.de/vertretungsplan/?f=&display-lehrer-full&d=do https://gymh.philippdormann.de/vertretungsplan/?f=&display-lehrer-full&d=fr Danke! Das Problem ist, dass VS ein & als Zeichen für eine Sequenz erkennt und ich nicht unbedingt das noch reinhauen möchte und es so viel einfacher wäre wenn du die weiterleitunge erstellst.
1.0
Url is invalid - Hallo @philippdormann, könntest du auf deiner Domain Url-Weiterleitungen für diese Links erstellen und diese Möglichst kurz und ohne sonderzeichen halten? Links: https://gymh.philippdormann.de/vertretungsplan/?f=&display-lehrer-full&d=mo https://gymh.philippdormann.de/vertretungsplan/?f=&display-lehrer-full&d=di https://gymh.philippdormann.de/vertretungsplan/?f=&display-lehrer-full&d=mi https://gymh.philippdormann.de/vertretungsplan/?f=&display-lehrer-full&d=do https://gymh.philippdormann.de/vertretungsplan/?f=&display-lehrer-full&d=fr Danke! Das Problem ist, dass VS ein & als Zeichen für eine Sequenz erkennt und ich nicht unbedingt das noch reinhauen möchte und es so viel einfacher wäre wenn du die weiterleitunge erstellst.
priority
url is invalid hallo philippdormann könntest du auf deiner domain url weiterleitungen für diese links erstellen und diese möglichst kurz und ohne sonderzeichen halten links danke das problem ist dass vs ein als zeichen für eine sequenz erkennt und ich nicht unbedingt das noch reinhauen möchte und es so viel einfacher wäre wenn du die weiterleitunge erstellst
1
503,505
14,593,166,533
IssuesEvent
2020-12-19 21:17:29
swharden/ScottPlot
https://api.github.com/repos/swharden/ScottPlot
closed
New Cookbook / FAQ / Quickstart website generator
HIGH PRIORITY
I am working to improve the cookbook system * The ultimate goal is a simpler cookbook * The new cookbook will use generated HTML (not markdown) * The new cookbook will have multiple pages * Every `IRecipe` has a `String Category` to indicate what page it belongs to * A converter isolates recipe code into flat text files to simplify source code distribution with the demo program
1.0
New Cookbook / FAQ / Quickstart website generator - I am working to improve the cookbook system * The ultimate goal is a simpler cookbook * The new cookbook will use generated HTML (not markdown) * The new cookbook will have multiple pages * Every `IRecipe` has a `String Category` to indicate what page it belongs to * A converter isolates recipe code into flat text files to simplify source code distribution with the demo program
priority
new cookbook faq quickstart website generator i am working to improve the cookbook system the ultimate goal is a simpler cookbook the new cookbook will use generated html not markdown the new cookbook will have multiple pages every irecipe has a string category to indicate what page it belongs to a converter isolates recipe code into flat text files to simplify source code distribution with the demo program
1
594,297
18,042,580,190
IssuesEvent
2021-09-18 09:45:09
edwisely-ai/Relationship-Management
https://api.github.com/repos/edwisely-ai/Relationship-Management
opened
RMK Next Gen - Student Question Pallete Not aligned properly or Disturbed
Criticality Medium Priority High
1. Taking test in RMK Next Gen - Student App and found that Question Palletes are Not aligned properly or Disturbed. _Screenshot FYR_ ![image.png](https://images.zenhubusercontent.com/60530434593ffb59252f11d7/418e9ea4-4959-4fb5-bf42-798226ebd923)
1.0
RMK Next Gen - Student Question Pallete Not aligned properly or Disturbed - 1. Taking test in RMK Next Gen - Student App and found that Question Palletes are Not aligned properly or Disturbed. _Screenshot FYR_ ![image.png](https://images.zenhubusercontent.com/60530434593ffb59252f11d7/418e9ea4-4959-4fb5-bf42-798226ebd923)
priority
rmk next gen student question pallete not aligned properly or disturbed taking test in rmk next gen student app and found that question palletes are not aligned properly or disturbed screenshot fyr
1
415,801
12,134,899,131
IssuesEvent
2020-04-23 11:34:55
yairEO/tagify
https://api.github.com/repos/yairEO/tagify
closed
[React] value.join() & value.trim() issues when rendering with props from parent component
Bug: high priority
I'm creating a react example (**mix mode**) and I'm loading an initial whitelist array and value in the right format as documentation suggest `whitelist = [ {id: "497965f7", value: "Loaf pan"}, {id: "3de12171", value: "10-inch plate"} ]` `initialValue = "Use [[10-inch plate]] or [[Loaf pan]]"` but I'm getting some errors when rendering. **Sending initialValue as string, `value.join()` is crashing** <img width="807" alt="Screenshot1" src="https://user-images.githubusercontent.com/24323868/80037834-61598500-84c2-11ea-8590-cf2513eb8535.png"> **Sending initialValue as array, `value.join()` works and `value.trim()` crashes at some point** **Is there something I'm missing to make it work?** <img width="678" alt="Screenshot2" src="https://user-images.githubusercontent.com/24323868/80037836-61f21b80-84c2-11ea-83ac-00c5a3acabd6.png"> <img width="691" alt="Screenshot3" src="https://user-images.githubusercontent.com/24323868/80037837-61f21b80-84c2-11ea-8354-17704899615b.png">
1.0
[React] value.join() & value.trim() issues when rendering with props from parent component - I'm creating a react example (**mix mode**) and I'm loading an initial whitelist array and value in the right format as documentation suggest `whitelist = [ {id: "497965f7", value: "Loaf pan"}, {id: "3de12171", value: "10-inch plate"} ]` `initialValue = "Use [[10-inch plate]] or [[Loaf pan]]"` but I'm getting some errors when rendering. **Sending initialValue as string, `value.join()` is crashing** <img width="807" alt="Screenshot1" src="https://user-images.githubusercontent.com/24323868/80037834-61598500-84c2-11ea-8590-cf2513eb8535.png"> **Sending initialValue as array, `value.join()` works and `value.trim()` crashes at some point** **Is there something I'm missing to make it work?** <img width="678" alt="Screenshot2" src="https://user-images.githubusercontent.com/24323868/80037836-61f21b80-84c2-11ea-83ac-00c5a3acabd6.png"> <img width="691" alt="Screenshot3" src="https://user-images.githubusercontent.com/24323868/80037837-61f21b80-84c2-11ea-8354-17704899615b.png">
priority
value join value trim issues when rendering with props from parent component i m creating a react example mix mode and i m loading an initial whitelist array and value in the right format as documentation suggest whitelist initialvalue use or but i m getting some errors when rendering sending initialvalue as string value join is crashing img width alt src sending initialvalue as array value join works and value trim crashes at some point is there something i m missing to make it work img width alt src img width alt src
1
100,228
4,081,477,670
IssuesEvent
2016-05-31 09:03:55
klusekrules/space-explorers
https://api.github.com/repos/klusekrules/space-explorers
closed
Tryby włączania aplikacji
priority - high type - change
Implementacja trybów aplikacji. Włączenie aplikacji w trybie klient. Włączenie aplikacji w trybie serwer.
1.0
Tryby włączania aplikacji - Implementacja trybów aplikacji. Włączenie aplikacji w trybie klient. Włączenie aplikacji w trybie serwer.
priority
tryby włączania aplikacji implementacja trybów aplikacji włączenie aplikacji w trybie klient włączenie aplikacji w trybie serwer
1
178,861
6,619,471,403
IssuesEvent
2017-09-21 12:23:51
grmToolbox/grmpy
https://api.github.com/repos/grmToolbox/grmpy
closed
Update Regression Tests
pb-estimation priority-high size-M
Please include in our regression test a single evaluation of the criterion function at the starting values in addition to the overall statistic on the simulated dataset.
1.0
Update Regression Tests - Please include in our regression test a single evaluation of the criterion function at the starting values in addition to the overall statistic on the simulated dataset.
priority
update regression tests please include in our regression test a single evaluation of the criterion function at the starting values in addition to the overall statistic on the simulated dataset
1
152,128
5,833,435,457
IssuesEvent
2017-05-09 01:35:17
CAGoodman/CareWheelsCorp
https://api.github.com/repos/CAGoodman/CareWheelsCorp
closed
User name in login should be case dependent
bug High Priority
Currently, login authentication is case independent. That is, "testalice" and "TestAlice" both succeed in logging in. However a successful login with "TestAlice" is followed by a subsequent failure since the uregistered user name is "testalice". We should ensure that case is preserved. One solution is to verify that there is a (case-sensitive) match of a returned user name when accessing the group info login. If not, a login error should be indicated.
1.0
User name in login should be case dependent - Currently, login authentication is case independent. That is, "testalice" and "TestAlice" both succeed in logging in. However a successful login with "TestAlice" is followed by a subsequent failure since the uregistered user name is "testalice". We should ensure that case is preserved. One solution is to verify that there is a (case-sensitive) match of a returned user name when accessing the group info login. If not, a login error should be indicated.
priority
user name in login should be case dependent currently login authentication is case independent that is testalice and testalice both succeed in logging in however a successful login with testalice is followed by a subsequent failure since the uregistered user name is testalice we should ensure that case is preserved one solution is to verify that there is a case sensitive match of a returned user name when accessing the group info login if not a login error should be indicated
1
93,170
3,886,586,606
IssuesEvent
2016-04-14 02:02:09
smartchicago/chicago-early-learning
https://api.github.com/repos/smartchicago/chicago-early-learning
closed
Create way for Compare & Contact data to be exported to CSV through admin tool
High Priority
As part of the Compare & Contact feature (#408), we want to add a way for admins to be able to filter and export data about the users who are using the Compare and Contact feature to contact their locations.
1.0
Create way for Compare & Contact data to be exported to CSV through admin tool - As part of the Compare & Contact feature (#408), we want to add a way for admins to be able to filter and export data about the users who are using the Compare and Contact feature to contact their locations.
priority
create way for compare contact data to be exported to csv through admin tool as part of the compare contact feature we want to add a way for admins to be able to filter and export data about the users who are using the compare and contact feature to contact their locations
1
623,311
19,665,033,716
IssuesEvent
2022-01-10 21:23:36
lorenzwalthert/precommit
https://api.github.com/repos/lorenzwalthert/precommit
closed
{precommit} can't be used when a {renv} is already present in repo
Complexity: High Priority: Critical Status: Unassigned Type: Bug
Similar to #316, I'm having trouble. I recently updated from 0.1.3 to 0.2.0, and I can't get anything working again. I followed the updating instructions which failed, and then tried to remove everything and try with a fresh installation. It doesn't seems to have something to do with the fact I use renv -- I tried deactivating it and still have the same issues. I'm on Windows 10 with R 4.1.1. ``` style-files..............................................................Failed - hook id: style-files - exit code: 1 Warning: Setting LC_COLLATE=en_US.UTF-8 failed Warning: Setting LC_CTYPE=en_US.UTF-8 failed Warning: Setting LC_MONETARY=en_US.UTF-8 failed Warning: Setting LC_TIME=en_US.UTF-8 failed Error in packageVersion("precommit") : there is no package called 'precommit' Execution halted spell-check..............................................................Failed - hook id: spell-check - exit code: 1 Warning: Setting LC_COLLATE=en_US.UTF-8 failed Warning: Setting LC_CTYPE=en_US.UTF-8 failed Warning: Setting LC_MONETARY=en_US.UTF-8 failed Warning: Setting LC_TIME=en_US.UTF-8 failed Error in loadNamespace(x) : there is no package called 'docopt' Calls: loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart Execution halted lintr....................................................................Failed - hook id: lintr - exit code: 1 Warning: Setting LC_COLLATE=en_US.UTF-8 failed Warning: Setting LC_CTYPE=en_US.UTF-8 failed Warning: Setting LC_MONETARY=en_US.UTF-8 failed Warning: Setting LC_TIME=en_US.UTF-8 failed Error in loadNamespace(x) : there is no package called 'docopt' Calls: loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart Execution halted readme-rmd-rendered..................................(no files to check)Skippedparsable-R...............................................................Failed - hook id: parsable-R - exit code: 1 Warning: Setting LC_COLLATE=en_US.UTF-8 failed Warning: Setting LC_CTYPE=en_US.UTF-8 failed
Warning: Setting LC_MONETARY=en_US.UTF-8 failed Warning: Setting LC_TIME=en_US.UTF-8 failed Error in loadNamespace(x) : there is no package called 'knitr' Calls: lapply ... loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart Execution halted no-browser-statement.....................................................PassedCheck for added large files..............................................PassedFix End of Files.........................................................Passed Don't commit common R artifacts......................(no files to check)Skipped [INFO] Restored changes from C:\Users\Sean.Ingerson\.cache\pre-commit\patch1638996953-4064. ```
1.0
{precommit} can't be used when a {renv} is already present in repo - Similar to #316, I'm having trouble. I recently updated from 0.1.3 to 0.2.0, and I can't get anything working again. I followed the updating instructions which failed, and then tried to remove everything and try with a fresh installation. It doesn't seems to have something to do with the fact I use renv -- I tried deactivating it and still have the same issues. I'm on Windows 10 with R 4.1.1. ``` style-files..............................................................Failed - hook id: style-files - exit code: 1 Warning: Setting LC_COLLATE=en_US.UTF-8 failed Warning: Setting LC_CTYPE=en_US.UTF-8 failed Warning: Setting LC_MONETARY=en_US.UTF-8 failed Warning: Setting LC_TIME=en_US.UTF-8 failed Error in packageVersion("precommit") : there is no package called 'precommit' Execution halted spell-check..............................................................Failed - hook id: spell-check - exit code: 1 Warning: Setting LC_COLLATE=en_US.UTF-8 failed Warning: Setting LC_CTYPE=en_US.UTF-8 failed Warning: Setting LC_MONETARY=en_US.UTF-8 failed Warning: Setting LC_TIME=en_US.UTF-8 failed Error in loadNamespace(x) : there is no package called 'docopt' Calls: loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart Execution halted lintr....................................................................Failed - hook id: lintr - exit code: 1 Warning: Setting LC_COLLATE=en_US.UTF-8 failed Warning: Setting LC_CTYPE=en_US.UTF-8 failed Warning: Setting LC_MONETARY=en_US.UTF-8 failed Warning: Setting LC_TIME=en_US.UTF-8 failed Error in loadNamespace(x) : there is no package called 'docopt' Calls: loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart Execution halted readme-rmd-rendered..................................(no files to check)Skippedparsable-R...............................................................Failed - hook id: parsable-R - exit code: 1 Warning: Setting
LC_COLLATE=en_US.UTF-8 failed Warning: Setting LC_CTYPE=en_US.UTF-8 failed Warning: Setting LC_MONETARY=en_US.UTF-8 failed Warning: Setting LC_TIME=en_US.UTF-8 failed Error in loadNamespace(x) : there is no package called 'knitr' Calls: lapply ... loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart Execution halted no-browser-statement.....................................................PassedCheck for added large files..............................................PassedFix End of Files.........................................................Passed Don't commit common R artifacts......................(no files to check)Skipped [INFO] Restored changes from C:\Users\Sean.Ingerson\.cache\pre-commit\patch1638996953-4064. ```
priority
precommit can t be used when a renv is already present in repo similar to i m having trouble i recently updated from to and i can t get anything working again i followed the updating instructions which failed and then tried to remove everything and try with a fresh installation it doesn t seems to have something to do with the fact i use renv i tried deactivating it and still have the same issues i m on windows with r style files failed hook id style files exit code warning setting lc collate en us utf failed warning setting lc ctype en us utf failed warning setting lc monetary en us utf failed warning setting lc time en us utf failed error in packageversion precommit there is no package called precommit execution halted spell check failed hook id spell check exit code warning setting lc collate en us utf failed warning setting lc ctype en us utf failed warning setting lc monetary en us utf failed warning setting lc time en us utf failed error in loadnamespace x there is no package called docopt calls loadnamespace withrestarts withonerestart dowithonerestart execution halted lintr failed hook id lintr exit code warning setting lc collate en us utf failed warning setting lc ctype en us utf failed warning setting lc monetary en us utf failed warning setting lc time en us utf failed error in loadnamespace x there is no package called docopt calls loadnamespace withrestarts withonerestart dowithonerestart execution halted readme rmd rendered no files to check skippedparsable r failed hook id parsable r exit code warning setting lc collate en us utf failed warning setting lc ctype en us utf failed warning setting lc monetary en us utf failed warning setting lc time en us utf failed error in loadnamespace x there is no package called knitr calls lapply loadnamespace withrestarts withonerestart dowithonerestart execution halted no browser statement passedcheck for added large files passedfix end of files passed don t commit common r artifacts no files to check skipped
restored changes from c users sean ingerson cache pre commit
1
199,559
6,991,623,004
IssuesEvent
2017-12-15 01:04:48
Teradata/covalent
https://api.github.com/repos/Teradata/covalent
closed
[Performance] Make sure all our elements work under `OnPush`.
enhancement epic high priority
#### Feature Request Normally `angular`'s change detection is by default set as `always`.. this has some performance issues, so we need to make sure our components detect changes only when needed, and call the `ChangeDetectorRef#markForCheck()` when we need to manually call one. - [x] Common (Pipes) <-- they need to be pure - [x] Chips - [x] Data Table - [x] Expansion Panel - [x] File - [x] Json Formatter - [x] Layout - [x] Loading - [x] Menu - [x] Notifications - [x] Paging - [x] Search - [x] Steps - [x] Dynamic Forms - [x] Highlight - [x] Markdown
1.0
[Performance] Make sure all our elements work under `OnPush`. - #### Feature Request Normally `angular`'s change detection is by default set as `always`.. this has some performance issues, so we need to make sure our components detect changes only when needed, and call the `ChangeDetectorRef#markForCheck()` when we need to manually call one. - [x] Common (Pipes) <-- they need to be pure - [x] Chips - [x] Data Table - [x] Expansion Panel - [x] File - [x] Json Formatter - [x] Layout - [x] Loading - [x] Menu - [x] Notifications - [x] Paging - [x] Search - [x] Steps - [x] Dynamic Forms - [x] Highlight - [x] Markdown
priority
make sure all our elements work under onpush feature request normally angular s change detection is by default set as always this has some performance issues so we need to make sure our components detect changes only when needed and call the changedetectorref markforcheck when we need to manually call one common pipes they need to be pure chips data table expansion panel file json formatter layout loading menu notifications paging search steps dynamic forms highlight markdown
1
704,568
24,201,050,997
IssuesEvent
2022-09-24 15:18:47
fyusuf-a/ft_transcendence
https://api.github.com/repos/fyusuf-a/ft_transcendence
closed
Feat: the users need to have a status (online, offline, in a game, and other?)
enhancement backend HIGH PRIORITY
The users need to have a status. It should be display on the Friends list (Profile page)
1.0
Feat: the users need to have a status (online, offline, in a game, and other?) - The users need to have a status. It should be display on the Friends list (Profile page)
priority
feat the users need to have a status online offline in a game and other the users need to have a status it should be display on the friends list profile page
1
126,385
4,990,131,697
IssuesEvent
2016-12-08 14:14:47
poggit/poggit
https://api.github.com/repos/poggit/poggit
opened
(Unconfirmed) (Security) LoadBuildHistoryAjax security leak
Category: High priority Catgory: Bug
LoadBuildHistoryAjax does not check whether the requesting user scope has permission to view the requested project. This results in security leak for projects in private repos where Elvin can query Poggit over an increment to see hidden projects.
1.0
(Unconfirmed) (Security) LoadBuildHistoryAjax security leak - LoadBuildHistoryAjax does not check whether the requesting user scope has permission to view the requested project. This results in security leak for projects in private repos where Elvin can query Poggit over an increment to see hidden projects.
priority
unconfirmed security loadbuildhistoryajax security leak loadbuildhistoryajax does not check whether the requesting user scope has permission to view the requested project this results in security leak for projects in private repos where elvin can query poggit over an increment to see hidden projects
1
324,285
9,887,116,262
IssuesEvent
2019-06-25 08:29:38
opentargets/genetics
https://api.github.com/repos/opentargets/genetics
closed
merge of aggregated coloc results onto the V2D table that serves the Manhattan table
Kind: New feature Priority: High
[example](https://gist.github.com/edm1/613dbdd30232495a4924dd0afdaa5571) of the aggregation to do in order to incorporate the data
1.0
merge of aggregated coloc results onto the V2D table that serves the Manhattan table - [example](https://gist.github.com/edm1/613dbdd30232495a4924dd0afdaa5571) of the aggregation to do in order to incorporate the data
priority
merge of aggregated coloc results onto the table that serves the manhattan table of the aggregation to do in order to incorporate the data
1