Unnamed: 0 (int64, 1-832k) | id (float64, 2.49B-32.1B) | type (string, 1 class) | created_at (string, len 19) | repo (string, len 7-112) | repo_url (string, len 36-141) | action (string, 3 classes) | title (string, len 3-438) | labels (string, len 4-308) | body (string, len 7-254k) | index (string, 7 classes) | text_combine (string, len 96-254k) | label (string, 2 classes) | text (string, len 96-246k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5,292 | 26,744,858,418 | IssuesEvent | 2023-01-30 15:21:20 | precice/precice | https://api.github.com/repos/precice/precice | closed | Deprecate getMeshVertexIDsFromPositions and getMeshVertices? | usability maintainability good first issue | Is the function `SolverInterface::getMeshVertexIDsFromPositions` needed? I have never used it and it would be a candidate for deletion for v3.0.0 from my perspective.
There was some discussion about this API function in https://github.com/precice/precice/issues/374, but besides this it looks mostly unused. As far as I see it there are also no tests for this function. | True | Deprecate getMeshVertexIDsFromPositions and getMeshVertices? - Is the function `SolverInterface::getMeshVertexIDsFromPositions` needed? I have never used it and it would be a candidate for deletion for v3.0.0 from my perspective.
There was some discussion about this API function in https://github.com/precice/precice/issues/374, but besides this it looks mostly unused. As far as I see it there are also no tests for this function. | main | deprecate getmeshvertexidsfrompositions and getmeshvertices is the function solverinterface getmeshvertexidsfrompositions needed i have never used it and it would be a candidate for deletion for from my perspective there was some discussion about this api function in but besides this it looks mostly unused as far as i see it there are also no tests for this function | 1 |
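The deprecation question above is a general API-lifecycle problem: rather than deleting `getMeshVertexIDsFromPositions` outright in v3.0.0, the function could emit a deprecation warning for one release cycle first. As a minimal illustration only (in Python rather than the C++ core, and none of this is preCICE's actual mechanism; the replacement hint is a placeholder), a decorator could look like:

```python
import warnings
from functools import wraps

def deprecated(removal_version, replacement=None):
    """Wrap an API function so every call emits a DeprecationWarning."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            msg = f"{func.__name__} is deprecated and will be removed in {removal_version}."
            if replacement:
                msg += f" Use {replacement} instead."
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated("v3.0.0", replacement="the vertex IDs returned when registering vertices")
def get_mesh_vertex_ids_from_positions(mesh_id, positions):
    ...  # placeholder body; the real lookup would live in the library core
```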
1,494 | 6,475,868,840 | IssuesEvent | 2017-08-17 21:22:41 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Monit 5.18 has new output format for summary which breaks the module | affects_1.7 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
monit
##### ANSIBLE VERSION
ansible 1.7.2
##### OS / ENVIRONMENT
Linux - Debian
##### SUMMARY
Monit 5.18 has a new output format incompatible with the current module
##### STEPS TO REPRODUCE
Install Monit
Configure any process
```
ansible localhost -m monit -a "name=zookeeper state=restarted"
```
Get error message "zookeeper process not presently configured with monit", which is not accurate
##### EXPECTED RESULTS
Should restart the process
##### ACTUAL RESULTS
Get error message "zookeeper process not presently configured with monit", which is not accurate
| True | Monit 5.18 has new output format for summary which breaks the module - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
monit
##### ANSIBLE VERSION
ansible 1.7.2
##### OS / ENVIRONMENT
Linux - Debian
##### SUMMARY
Monit 5.18 has a new output format incompatible with the current module
##### STEPS TO REPRODUCE
Install Monit
Configure any process
```
ansible localhost -m monit -a "name=zookeeper state=restarted"
```
Get error message "zookeeper process not presently configured with monit", which is not accurate
##### EXPECTED RESULTS
Should restart the process
##### ACTUAL RESULTS
Get error message "zookeeper process not presently configured with monit", which is not accurate
| main | monit has new output format for summary which breaks the module issue type bug report component name monit ansible version ansible os environment linux debian summary monit has a new output format incompatible with the current module steps to reproduce install monit configure any process ansible localhost m monit a name zookeeper state restarted get error message zookeeper process not presently configured with monit which is not accurate expected results should restart the process actual results get error message zookeeper process not presently configured with monit which is not accurate | 1 |
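For context on why a summary-format change breaks the module: the Ansible monit module shells out to `monit summary` and decides whether a service is "presently configured" by matching the service name in that output. A format-tolerant parser is the natural fix. Here is a minimal sketch; both format strings below are illustrative stand-ins, not verbatim monit 5.18 output:

```python
import re
import subprocess

def monit_status(name):
    """Return the status string for a monitored service, or None if absent."""
    out = subprocess.check_output(["monit", "summary"], text=True)
    # Pre-5.18 style line, e.g.:  Process 'zookeeper'    Running
    old_style = re.compile(r"Process '%s'\s+(.+)" % re.escape(name))
    # Hypothetical 5.18+ table row, e.g.:  zookeeper    Running    Process
    new_style = re.compile(r"^\s*%s\s{2,}(\S.*?)\s{2,}Process\s*$" % re.escape(name), re.M)
    for pattern in (old_style, new_style):
        match = pattern.search(out)
        if match:
            return match.group(1).strip()
    return None  # would surface as "not presently configured with monit"
```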
9,196 | 24,198,693,084 | IssuesEvent | 2022-09-24 08:30:09 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | closed | Kernel page fault while loading zfs module | Type: Architecture Type: Defect Status: Stale |
### System information
Distribution Name | Deepin
Distribution Version | 15.5 SP2
Linux Kernel | 3.10.84-23.fc21
Architecture | mips64el
ZFS Version | 2.0.0-0
SPL Version | 2.0.0-0
### Describe the problem you're observing
Kernel page fault while loading zfs module.
This issue does not exist in zfs 0.8.5. Everything works well from zfs 0.8.3 to 0.8.5.
### Describe how to reproduce the problem
```bash
> sudo insmod zfs/zfs/zfs.ko
Segmentation fault (core dumped)
```
### Include any warning/errors/backtraces from the system logs
/var/log/kern.log:
```
[ 798.425781] zavl: module license 'CDDL' taints kernel.
[ 798.425781] Disabling lock debugging due to kernel taint
[ 855.035156] ------------[ cut here ]------------
[ 855.035156] WARNING: CPU: 2 PID: 19590 at lib/scatterlist.c:287 __sg_alloc_table+0x174/0x188
[ 855.035156] Modules linked in: zfs(PO+) zcommon(PO) zunicode(PO) znvpair(PO) zlua(O) icp(PO) zavl(PO) zzstd(O) spl(O) zlib zlib_deflate veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xfrm4_tunnel tunnel4 ipcomp xfrm_ipcomp esp4 ah4 af_key bridge stp llc fuse uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media arc4 rtl8188ee(O) rtl_pci(O) rtlwifi(O) mac80211 joydev serio_raw sg snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cfg80211 snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core rfkill snd_pcm snd_timer shpchp sch_fq_codel binfmt_misc raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear md_mod pata_atiixp r8168(O) radeon(O)
[ 855.035156] CPU: 2 PID: 19590 Comm: insmod Tainted: P O ------------ 3.10.0+ #1
[ 855.035156] Hardware name: Loongson Loongson-3A5-780E-1w-V1.1-demo/Loongson-3A5-780E-1w-V1.1-demo, BIOS Loongson-PMON-V3.3-20170113 01/13/2017
[ 855.035156] Stack : 0000000000000000 0000000000000000 ffffffff817c0000 ffffffff817bccd8
[ 855.035156] ffffffff80279dd8 ffffffff80e7a58b ffffffff80d10488 ffffffff817bc448
[ 855.035156] 0000000000004c86 0000000000000002 0000000000000004 ffffffffc10a0a20
[ 855.035156] 0000000000000000 ffffffff80a9de4c 9800000275803a68 0000000000000001
[ 855.035156] ffffffff80279dd8 ffffffff802767a8 0000000000000000 ffffffff8027a960
[ 855.035156] 9800000275836e00 ffffffff80d10488 ffffffff817bd0c0 0000000000000265
[ 855.035156] 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 855.035156] 0000000000000000 98000002758039c0 0000000000000000 ffffffff80276a2c
[ 855.035156] 0000000000000000 ffffffff80d4d200 ffffffff80599f54 0000000000000000
[ 855.035156] 0000000000000000 ffffffff8021a0e8 ffffffff80599f54 ffffffff80276a2c
[ 855.035156] ...
[ 855.035156] Call Trace:
[ 855.035156] [<ffffffff8021a0e8>] show_stack+0x68/0x80
[ 855.035156] [<ffffffff80276a2c>] __warn+0xf4/0x108
[ 855.035156] [<ffffffff80599f54>] __sg_alloc_table+0x174/0x188
[ 855.035156] [<ffffffff80599f8c>] sg_alloc_table+0x24/0x60
[ 855.035156] [<ffffffffc1015898>] abd_init+0x1f8/0x340 [zfs]
[ 855.035156] [<ffffffffc0effd68>] dmu_init+0x18/0x110 [zfs]
[ 855.035156] [<ffffffffc0f99470>] spa_init+0x190/0x2d8 [zfs]
[ 855.035156] [<ffffffffc0ff32bc>] zfs_kmod_init+0x44/0x1090 [zfs]
[ 855.035156] [<ffffffffc129003c>] _init+0x3c/0xc4 [zfs]
[ 855.035156] [<ffffffff802004b8>] do_one_initcall+0x88/0x1b0
[ 855.035156] [<ffffffff802ea1e0>] load_module+0x1e68/0x2590
[ 855.035156] [<ffffffff802eaaa4>] SyS_finit_module+0x94/0xb0
[ 855.035156] [<ffffffff802236d0>] syscall_common+0x34/0x58
[ 855.035156]
[ 855.035156] ---[ end trace fc863b931c75040c ]---
[ 855.035156] BUG: Bad page state in process insmod pfn:00468
[ 855.035156] page:980000027f709a00 count:0 mapcount:0 mapping: (null) index:0x0
[ 855.035156] page flags: 0xfff000400(reserved)
[ 855.035156] page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
[ 855.035156] bad because of flags:
[ 855.035156] page flags: 0x400(reserved)
[ 855.035156] Modules linked in: zfs(PO+) zcommon(PO) zunicode(PO) znvpair(PO) zlua(O) icp(PO) zavl(PO) zzstd(O) spl(O) zlib zlib_deflate veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xfrm4_tunnel tunnel4 ipcomp xfrm_ipcomp esp4 ah4 af_key bridge stp llc fuse uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media arc4 rtl8188ee(O) rtl_pci(O) rtlwifi(O) mac80211 joydev serio_raw sg snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cfg80211 snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core rfkill snd_pcm snd_timer shpchp sch_fq_codel binfmt_misc raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear md_mod pata_atiixp r8168(O) radeon(O)
[ 855.035156] CPU: 2 PID: 19590 Comm: insmod Tainted: P W O ------------ 3.10.0+ #1
[ 855.035156] Hardware name: Loongson Loongson-3A5-780E-1w-V1.1-demo/Loongson-3A5-780E-1w-V1.1-demo, BIOS Loongson-PMON-V3.3-20170113 01/13/2017
[ 855.035156] Stack : 0000000000000000 0000000000000000 ffffffff817c0000 ffffffff817bccd8
[ 855.035156] ffffffff80279dd8 ffffffff80e7a58b ffffffff80d10488 ffffffff817bc448
[ 855.035156] 0000000000004c86 0000000000000002 ffffffff80d20000 ffffffffffffffff
[ 855.035156] fffffff000ffffff ffffffff80a9de4c 98000002758039e8 ffffffff817bccd8
[ 855.035156] ffffffff80279dd8 ffffffff802767a8 980000027f709a00 ffffffff8027a960
[ 855.035156] 9800000275836e00 ffffffff80d10488 ffffffff817bd0c0 0000000000000304
[ 855.035156] 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 855.035156] 0000000000000000 9800000275803940 0000000000000000 ffffffff8035db18
[ 855.035156] 0000000000000000 ffffffff81810000 ffffffff80e30000 980000027f709a00
[ 855.035156] ffffffff81810000 ffffffff8021a0e8 ffffffff80e30000 ffffffff8035db18
[ 855.035156] ...
[ 855.039062] Call Trace:
[ 855.039062] [<ffffffff8021a0e8>] show_stack+0x68/0x80
[ 855.039062] [<ffffffff8035db18>] bad_page+0xf0/0x140
[ 855.039062] [<ffffffff8035dcb4>] free_pages_prepare+0x14c/0x1f8
[ 855.039062] [<ffffffff8036059c>] free_hot_cold_page+0x3c/0x208
[ 855.039062] [<ffffffff80599828>] __sg_free_table+0x88/0xb0
[ 855.039062] [<ffffffff80599fb8>] sg_alloc_table+0x50/0x60
[ 855.039062] [<ffffffffc1015898>] abd_init+0x1f8/0x340 [zfs]
[ 855.039062] [<ffffffffc0effd68>] dmu_init+0x18/0x110 [zfs]
[ 855.039062] [<ffffffffc0f99470>] spa_init+0x190/0x2d8 [zfs]
[ 855.039062] [<ffffffffc0ff32bc>] zfs_kmod_init+0x44/0x1090 [zfs]
[ 855.039062] [<ffffffffc129003c>] _init+0x3c/0xc4 [zfs]
[ 855.039062] [<ffffffff802004b8>] do_one_initcall+0x88/0x1b0
[ 855.039062] [<ffffffff802ea1e0>] load_module+0x1e68/0x2590
[ 855.039062] [<ffffffff802eaaa4>] SyS_finit_module+0x94/0xb0
[ 855.039062] [<ffffffff802236d0>] syscall_common+0x34/0x58
[ 855.039062]
[ 855.039062] CPU 2 Unable to handle kernel paging request at virtual address 0000000000003fe0, epc == ffffffff80599808, ra == ffffffff80599828
[ 855.058593] Oops[#1]:
[ 855.082031] CPU: 2 PID: 19590 Comm: insmod Tainted: P B W O ------------ 3.10.0+ #1
[ 855.101562] Hardware name: Loongson Loongson-3A5-780E-1w-V1.1-demo/Loongson-3A5-780E-1w-V1.1-demo, BIOS Loongson-PMON-V3.3-20170113 01/13/2017
[ 855.125000] task: 9800000275836e00 ti: 9800000275800000 task.ti: 9800000275800000
[ 855.148437] $ 0 : 0000000000000000 0000000000000001 0000000000000000 0000000000000001
[ 855.171875] $ 4 : 0000000000000000 fffffffffffffe00 0000000000000000 0000000000000001
[ 855.195312] $ 8 : 0000000000000002 ffffffff8066e5b8 00000000000002f3 0000000000000005
[ 855.214843] $12 : 0000000000000000 ffffffff817e0000 0000000000000000 ffffffff817e0000
[ 855.238281] $16 : 0000000000000000 0000000000000200 ffffffff80599960 9800000275803bd0
[ 855.261718] $20 : 0000000000003fe0 00000000000001ff fffffffffffffffc ffffffffc10a0a20
[ 855.285156] $24 : 0000000000010020 ffffffff817bd560
[ 855.304687] $28 : 9800000275800000 9800000275803b60 0000000000000000 ffffffff80599828
[ 855.328125] Hi : 0000000000000f42
[ 855.351562] Lo : 000000003333703c
[ 855.371093] epc : ffffffff80599808 __sg_free_table+0x68/0xb0
[ 855.394531] Tainted: P B W O ------------
[ 855.414062] ra : ffffffff80599828 __sg_free_table+0x88/0xb0
[ 855.437500] Status: d400cce3 KX SX UX KERNEL EXL IE
[ 855.457031] Cause : 10000008
[ 855.480468] BadVA : 0000000000003fe0
[ 855.503906] PrId : 00146309 (ICT Loongson-3)
[ 855.523437] Modules linked in: zfs(PO+) zcommon(PO) zunicode(PO) znvpair(PO) zlua(O) icp(PO) zavl(PO) zzstd(O) spl(O) zlib zlib_deflate veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xfrm4_tunnel tunnel4 ipcomp xfrm_ipcomp esp4 ah4 af_key bridge stp llc fuse uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media arc4 rtl8188ee(O) rtl_pci(O) rtlwifi(O) mac80211 joydev serio_raw sg snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cfg80211 snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core rfkill snd_pcm snd_timer shpchp sch_fq_codel binfmt_misc raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear md_mod pata_atiixp r8168(O) radeon(O)
[ 855.628906] Process insmod (pid: 19590, threadinfo=9800000275800000, task=9800000275836e00, tls=000000fff16f7b20)
[ 855.656250] Stack : 9800000275803bd0 ffffffffc1220000 ffffffff80599f68 ffffffff80aa86d0
[ 855.656250] ffffffffc109eda8 ffffffffc1220000 0000000000000004 ffffffff80599fb8
[ 855.656250] ffffffffffffffea ffffffffc1220000 980000027d853940 ffffffffc1015898
[ 855.656250] 0000000000000000 ffffffffc1083708 ffffffffc11a0000 fffffe00c11e6f80
[ 855.656250] ffffffffc0d80258 ffffffff802a4530 ffffffffc11948e8 0000000000000003
[ 855.656250] ffffffffc1080000 0000000000000001 98000000f0141380 ffffffffc0effd68
[ 855.656250] 0000000000000000 ffffffffc0f99470 0000000000000020 ffffffffc1049708
[ 855.656250] 0002701fff000000 ffffffffc11a0000 ffffffff80a9ddc4 ffffffffc1080000
[ 855.656250] ffffffffc1080000 ffffffffc0ff32bc ffffffff80e7a280 ffffffff80e7a280
[ 855.656250] ffffffff80e7a280 ffffffffc129003c ffffffff80e7a280 ffffffff817b0000
[ 855.656250] ...
[ 855.953125] Call Trace:
[ 855.980468] [<ffffffff80599808>] __sg_free_table+0x68/0xb0
[ 856.007812] [<ffffffff80599fb8>] sg_alloc_table+0x50/0x60
[ 856.031250] [<ffffffffc1015898>] abd_init+0x1f8/0x340 [zfs]
[ 856.058593] [<ffffffffc0effd68>] dmu_init+0x18/0x110 [zfs]
[ 856.085937] [<ffffffffc0f99470>] spa_init+0x190/0x2d8 [zfs]
[ 856.113281] [<ffffffffc0ff32bc>] zfs_kmod_init+0x44/0x1090 [zfs]
[ 856.140625] [<ffffffffc129003c>] _init+0x3c/0xc4 [zfs]
[ 856.167968] [<ffffffff802004b8>] do_one_initcall+0x88/0x1b0
[ 856.195312] [<ffffffff802ea1e0>] load_module+0x1e68/0x2590
[ 856.222656] [<ffffffff802eaaa4>] SyS_finit_module+0x94/0xb0
[ 856.246093] [<ffffffff802236d0>] syscall_common+0x34/0x58
[ 856.273437]
[ 856.300781]
[ 856.300781] Code: 0000102d 10600005 0000802d <d890a003> 00b51023 0220282d 02168024 14c0fff3 ae62000c
[ 856.355468] ---[ end trace fc863b931c75040d ]---
```
| 1.0 | Kernel page fault while loading zfs module -
### System information
Distribution Name | Deepin
Distribution Version | 15.5 SP2
Linux Kernel | 3.10.84-23.fc21
Architecture | mips64el
ZFS Version | 2.0.0-0
SPL Version | 2.0.0-0
### Describe the problem you're observing
Kernel page fault while loading zfs module.
This issue does not exist in zfs 0.8.5. Everything works well from zfs 0.8.3 to 0.8.5.
### Describe how to reproduce the problem
```bash
> sudo insmod zfs/zfs/zfs.ko
Segmentation fault (core dumped)
```
### Include any warning/errors/backtraces from the system logs
/var/log/kern.log:
```
[ 798.425781] zavl: module license 'CDDL' taints kernel.
[ 798.425781] Disabling lock debugging due to kernel taint
[ 855.035156] ------------[ cut here ]------------
[ 855.035156] WARNING: CPU: 2 PID: 19590 at lib/scatterlist.c:287 __sg_alloc_table+0x174/0x188
[ 855.035156] Modules linked in: zfs(PO+) zcommon(PO) zunicode(PO) znvpair(PO) zlua(O) icp(PO) zavl(PO) zzstd(O) spl(O) zlib zlib_deflate veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xfrm4_tunnel tunnel4 ipcomp xfrm_ipcomp esp4 ah4 af_key bridge stp llc fuse uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media arc4 rtl8188ee(O) rtl_pci(O) rtlwifi(O) mac80211 joydev serio_raw sg snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cfg80211 snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core rfkill snd_pcm snd_timer shpchp sch_fq_codel binfmt_misc raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear md_mod pata_atiixp r8168(O) radeon(O)
[ 855.035156] CPU: 2 PID: 19590 Comm: insmod Tainted: P O ------------ 3.10.0+ #1
[ 855.035156] Hardware name: Loongson Loongson-3A5-780E-1w-V1.1-demo/Loongson-3A5-780E-1w-V1.1-demo, BIOS Loongson-PMON-V3.3-20170113 01/13/2017
[ 855.035156] Stack : 0000000000000000 0000000000000000 ffffffff817c0000 ffffffff817bccd8
[ 855.035156] ffffffff80279dd8 ffffffff80e7a58b ffffffff80d10488 ffffffff817bc448
[ 855.035156] 0000000000004c86 0000000000000002 0000000000000004 ffffffffc10a0a20
[ 855.035156] 0000000000000000 ffffffff80a9de4c 9800000275803a68 0000000000000001
[ 855.035156] ffffffff80279dd8 ffffffff802767a8 0000000000000000 ffffffff8027a960
[ 855.035156] 9800000275836e00 ffffffff80d10488 ffffffff817bd0c0 0000000000000265
[ 855.035156] 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 855.035156] 0000000000000000 98000002758039c0 0000000000000000 ffffffff80276a2c
[ 855.035156] 0000000000000000 ffffffff80d4d200 ffffffff80599f54 0000000000000000
[ 855.035156] 0000000000000000 ffffffff8021a0e8 ffffffff80599f54 ffffffff80276a2c
[ 855.035156] ...
[ 855.035156] Call Trace:
[ 855.035156] [<ffffffff8021a0e8>] show_stack+0x68/0x80
[ 855.035156] [<ffffffff80276a2c>] __warn+0xf4/0x108
[ 855.035156] [<ffffffff80599f54>] __sg_alloc_table+0x174/0x188
[ 855.035156] [<ffffffff80599f8c>] sg_alloc_table+0x24/0x60
[ 855.035156] [<ffffffffc1015898>] abd_init+0x1f8/0x340 [zfs]
[ 855.035156] [<ffffffffc0effd68>] dmu_init+0x18/0x110 [zfs]
[ 855.035156] [<ffffffffc0f99470>] spa_init+0x190/0x2d8 [zfs]
[ 855.035156] [<ffffffffc0ff32bc>] zfs_kmod_init+0x44/0x1090 [zfs]
[ 855.035156] [<ffffffffc129003c>] _init+0x3c/0xc4 [zfs]
[ 855.035156] [<ffffffff802004b8>] do_one_initcall+0x88/0x1b0
[ 855.035156] [<ffffffff802ea1e0>] load_module+0x1e68/0x2590
[ 855.035156] [<ffffffff802eaaa4>] SyS_finit_module+0x94/0xb0
[ 855.035156] [<ffffffff802236d0>] syscall_common+0x34/0x58
[ 855.035156]
[ 855.035156] ---[ end trace fc863b931c75040c ]---
[ 855.035156] BUG: Bad page state in process insmod pfn:00468
[ 855.035156] page:980000027f709a00 count:0 mapcount:0 mapping: (null) index:0x0
[ 855.035156] page flags: 0xfff000400(reserved)
[ 855.035156] page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
[ 855.035156] bad because of flags:
[ 855.035156] page flags: 0x400(reserved)
[ 855.035156] Modules linked in: zfs(PO+) zcommon(PO) zunicode(PO) znvpair(PO) zlua(O) icp(PO) zavl(PO) zzstd(O) spl(O) zlib zlib_deflate veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xfrm4_tunnel tunnel4 ipcomp xfrm_ipcomp esp4 ah4 af_key bridge stp llc fuse uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media arc4 rtl8188ee(O) rtl_pci(O) rtlwifi(O) mac80211 joydev serio_raw sg snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cfg80211 snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core rfkill snd_pcm snd_timer shpchp sch_fq_codel binfmt_misc raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear md_mod pata_atiixp r8168(O) radeon(O)
[ 855.035156] CPU: 2 PID: 19590 Comm: insmod Tainted: P W O ------------ 3.10.0+ #1
[ 855.035156] Hardware name: Loongson Loongson-3A5-780E-1w-V1.1-demo/Loongson-3A5-780E-1w-V1.1-demo, BIOS Loongson-PMON-V3.3-20170113 01/13/2017
[ 855.035156] Stack : 0000000000000000 0000000000000000 ffffffff817c0000 ffffffff817bccd8
[ 855.035156] ffffffff80279dd8 ffffffff80e7a58b ffffffff80d10488 ffffffff817bc448
[ 855.035156] 0000000000004c86 0000000000000002 ffffffff80d20000 ffffffffffffffff
[ 855.035156] fffffff000ffffff ffffffff80a9de4c 98000002758039e8 ffffffff817bccd8
[ 855.035156] ffffffff80279dd8 ffffffff802767a8 980000027f709a00 ffffffff8027a960
[ 855.035156] 9800000275836e00 ffffffff80d10488 ffffffff817bd0c0 0000000000000304
[ 855.035156] 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 855.035156] 0000000000000000 9800000275803940 0000000000000000 ffffffff8035db18
[ 855.035156] 0000000000000000 ffffffff81810000 ffffffff80e30000 980000027f709a00
[ 855.035156] ffffffff81810000 ffffffff8021a0e8 ffffffff80e30000 ffffffff8035db18
[ 855.035156] ...
[ 855.039062] Call Trace:
[ 855.039062] [<ffffffff8021a0e8>] show_stack+0x68/0x80
[ 855.039062] [<ffffffff8035db18>] bad_page+0xf0/0x140
[ 855.039062] [<ffffffff8035dcb4>] free_pages_prepare+0x14c/0x1f8
[ 855.039062] [<ffffffff8036059c>] free_hot_cold_page+0x3c/0x208
[ 855.039062] [<ffffffff80599828>] __sg_free_table+0x88/0xb0
[ 855.039062] [<ffffffff80599fb8>] sg_alloc_table+0x50/0x60
[ 855.039062] [<ffffffffc1015898>] abd_init+0x1f8/0x340 [zfs]
[ 855.039062] [<ffffffffc0effd68>] dmu_init+0x18/0x110 [zfs]
[ 855.039062] [<ffffffffc0f99470>] spa_init+0x190/0x2d8 [zfs]
[ 855.039062] [<ffffffffc0ff32bc>] zfs_kmod_init+0x44/0x1090 [zfs]
[ 855.039062] [<ffffffffc129003c>] _init+0x3c/0xc4 [zfs]
[ 855.039062] [<ffffffff802004b8>] do_one_initcall+0x88/0x1b0
[ 855.039062] [<ffffffff802ea1e0>] load_module+0x1e68/0x2590
[ 855.039062] [<ffffffff802eaaa4>] SyS_finit_module+0x94/0xb0
[ 855.039062] [<ffffffff802236d0>] syscall_common+0x34/0x58
[ 855.039062]
[ 855.039062] CPU 2 Unable to handle kernel paging request at virtual address 0000000000003fe0, epc == ffffffff80599808, ra == ffffffff80599828
[ 855.058593] Oops[#1]:
[ 855.082031] CPU: 2 PID: 19590 Comm: insmod Tainted: P B W O ------------ 3.10.0+ #1
[ 855.101562] Hardware name: Loongson Loongson-3A5-780E-1w-V1.1-demo/Loongson-3A5-780E-1w-V1.1-demo, BIOS Loongson-PMON-V3.3-20170113 01/13/2017
[ 855.125000] task: 9800000275836e00 ti: 9800000275800000 task.ti: 9800000275800000
[ 855.148437] $ 0 : 0000000000000000 0000000000000001 0000000000000000 0000000000000001
[ 855.171875] $ 4 : 0000000000000000 fffffffffffffe00 0000000000000000 0000000000000001
[ 855.195312] $ 8 : 0000000000000002 ffffffff8066e5b8 00000000000002f3 0000000000000005
[ 855.214843] $12 : 0000000000000000 ffffffff817e0000 0000000000000000 ffffffff817e0000
[ 855.238281] $16 : 0000000000000000 0000000000000200 ffffffff80599960 9800000275803bd0
[ 855.261718] $20 : 0000000000003fe0 00000000000001ff fffffffffffffffc ffffffffc10a0a20
[ 855.285156] $24 : 0000000000010020 ffffffff817bd560
[ 855.304687] $28 : 9800000275800000 9800000275803b60 0000000000000000 ffffffff80599828
[ 855.328125] Hi : 0000000000000f42
[ 855.351562] Lo : 000000003333703c
[ 855.371093] epc : ffffffff80599808 __sg_free_table+0x68/0xb0
[ 855.394531] Tainted: P B W O ------------
[ 855.414062] ra : ffffffff80599828 __sg_free_table+0x88/0xb0
[ 855.437500] Status: d400cce3 KX SX UX KERNEL EXL IE
[ 855.457031] Cause : 10000008
[ 855.480468] BadVA : 0000000000003fe0
[ 855.503906] PrId : 00146309 (ICT Loongson-3)
[ 855.523437] Modules linked in: zfs(PO+) zcommon(PO) zunicode(PO) znvpair(PO) zlua(O) icp(PO) zavl(PO) zzstd(O) spl(O) zlib zlib_deflate veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xfrm4_tunnel tunnel4 ipcomp xfrm_ipcomp esp4 ah4 af_key bridge stp llc fuse uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media arc4 rtl8188ee(O) rtl_pci(O) rtlwifi(O) mac80211 joydev serio_raw sg snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cfg80211 snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core rfkill snd_pcm snd_timer shpchp sch_fq_codel binfmt_misc raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear md_mod pata_atiixp r8168(O) radeon(O)
[ 855.628906] Process insmod (pid: 19590, threadinfo=9800000275800000, task=9800000275836e00, tls=000000fff16f7b20)
[ 855.656250] Stack : 9800000275803bd0 ffffffffc1220000 ffffffff80599f68 ffffffff80aa86d0
[ 855.656250] ffffffffc109eda8 ffffffffc1220000 0000000000000004 ffffffff80599fb8
[ 855.656250] ffffffffffffffea ffffffffc1220000 980000027d853940 ffffffffc1015898
[ 855.656250] 0000000000000000 ffffffffc1083708 ffffffffc11a0000 fffffe00c11e6f80
[ 855.656250] ffffffffc0d80258 ffffffff802a4530 ffffffffc11948e8 0000000000000003
[ 855.656250] ffffffffc1080000 0000000000000001 98000000f0141380 ffffffffc0effd68
[ 855.656250] 0000000000000000 ffffffffc0f99470 0000000000000020 ffffffffc1049708
[ 855.656250] 0002701fff000000 ffffffffc11a0000 ffffffff80a9ddc4 ffffffffc1080000
[ 855.656250] ffffffffc1080000 ffffffffc0ff32bc ffffffff80e7a280 ffffffff80e7a280
[ 855.656250] ffffffff80e7a280 ffffffffc129003c ffffffff80e7a280 ffffffff817b0000
[ 855.656250] ...
[ 855.953125] Call Trace:
[ 855.980468] [<ffffffff80599808>] __sg_free_table+0x68/0xb0
[ 856.007812] [<ffffffff80599fb8>] sg_alloc_table+0x50/0x60
[ 856.031250] [<ffffffffc1015898>] abd_init+0x1f8/0x340 [zfs]
[ 856.058593] [<ffffffffc0effd68>] dmu_init+0x18/0x110 [zfs]
[ 856.085937] [<ffffffffc0f99470>] spa_init+0x190/0x2d8 [zfs]
[ 856.113281] [<ffffffffc0ff32bc>] zfs_kmod_init+0x44/0x1090 [zfs]
[ 856.140625] [<ffffffffc129003c>] _init+0x3c/0xc4 [zfs]
[ 856.167968] [<ffffffff802004b8>] do_one_initcall+0x88/0x1b0
[ 856.195312] [<ffffffff802ea1e0>] load_module+0x1e68/0x2590
[ 856.222656] [<ffffffff802eaaa4>] SyS_finit_module+0x94/0xb0
[ 856.246093] [<ffffffff802236d0>] syscall_common+0x34/0x58
[ 856.273437]
[ 856.300781]
[ 856.300781] Code: 0000102d 10600005 0000802d <d890a003> 00b51023 0220282d 02168024 14c0fff3 ae62000c
[ 856.355468] ---[ end trace fc863b931c75040d ]---
```
| non_main | kernel page fault while loading zfs module system information distribution name deepin distribution version linux kernel architecture zfs version spl version describe the problem you re observing kernel page fault while loading zfs module this issue does not exist in zfs everything works well from zfs to describe how to reproduce the problem bash sudo insmod zfs zfs zfs ko segmentation fault core dumped include any warning errors backtraces from the system logs var log kern log zavl module license cddl taints kernel disabling lock debugging due to kernel taint warning cpu pid at lib scatterlist c sg alloc table modules linked in zfs po zcommon po zunicode po znvpair po zlua o icp po zavl po zzstd o spl o zlib zlib deflate veth ipt masquerade nf nat masquerade iptable nat nf conntrack nf defrag nf nat nf nat nf conntrack tunnel ipcomp xfrm ipcomp af key bridge stp llc fuse uvcvideo vmalloc memops core videodev media o rtl pci o rtlwifi o joydev serio raw sg snd hda codec realtek snd hda codec generic snd hda codec hdmi snd hda intel snd hda codec snd hwdep snd hda core rfkill snd pcm snd timer shpchp sch fq codel binfmt misc async recov async memcpy async pq async xor async tx xor pq multipath linear md mod pata atiixp o radeon o cpu pid comm insmod tainted p o hardware name loongson loongson demo loongson demo bios loongson pmon stack call trace show stack warn sg alloc table sg alloc table abd init dmu init spa init zfs kmod init init do one initcall load module sys finit module syscall common bug bad page state in process insmod pfn page count mapcount mapping null index page flags reserved page dumped because page flags check at free flag s set bad because of flags page flags reserved modules linked in zfs po zcommon po zunicode po znvpair po zlua o icp po zavl po zzstd o spl o zlib zlib deflate veth ipt masquerade nf nat masquerade iptable nat nf conntrack nf defrag nf nat nf nat nf conntrack tunnel ipcomp xfrm ipcomp af key bridge stp llc fuse uvcvideo vmalloc memops core videodev media o rtl pci o rtlwifi o joydev serio raw sg snd hda codec realtek snd hda codec generic snd hda codec hdmi snd hda intel snd hda codec snd hwdep snd hda core rfkill snd pcm snd timer shpchp sch fq codel binfmt misc async recov async memcpy async pq async xor async tx xor pq multipath linear md mod pata atiixp o radeon o cpu pid comm insmod tainted p w o hardware name loongson loongson demo loongson demo bios loongson pmon stack ffffffffffffffff call trace show stack bad page free pages prepare free hot cold page sg free table sg alloc table abd init dmu init spa init zfs kmod init init do one initcall load module sys finit module syscall common cpu unable to handle kernel paging request at virtual address epc ra oops cpu pid comm insmod tainted p b w o hardware name loongson loongson demo loongson demo bios loongson pmon task ti task ti fffffffffffffffc hi lo epc sg free table tainted p b w o ra sg free table status kx sx ux kernel exl ie cause badva prid ict loongson modules linked in zfs po zcommon po zunicode po znvpair po zlua o icp po zavl po zzstd o spl o zlib zlib deflate veth ipt masquerade nf nat masquerade iptable nat nf conntrack nf defrag nf nat nf nat nf conntrack tunnel ipcomp xfrm ipcomp af key bridge stp llc fuse uvcvideo vmalloc memops core videodev media o rtl pci o rtlwifi o joydev serio raw sg snd hda codec realtek snd hda codec generic snd hda codec hdmi snd hda intel snd hda codec snd hwdep snd hda core rfkill snd pcm snd timer shpchp sch fq codel binfmt misc async 
recov async memcpy async pq async xor async tx xor pq multipath linear md mod pata atiixp o radeon o process insmod pid threadinfo task tls stack ffffffffffffffea call trace sg free table sg alloc table abd init dmu init spa init zfs kmod init init do one initcall load module sys finit module syscall common code | 0 |
3,620 | 14,630,561,247 | IssuesEvent | 2020-12-23 17:58:04 | umn-asr/sessions_data_service | https://api.github.com/repos/umn-asr/sessions_data_service | opened | Update to supported version of Rails | maintainability rails EOL sessions | We're currently running Rails 4.2, which is no longer supported. | True | Update to supported version of Rails - We're currently running Rails 4.2, which is no longer supported. | main | update to supported version of rails we re currently running rails which is no longer supported | 1 |
4,230 | 20,958,692,369 | IssuesEvent | 2022-03-27 13:25:08 | Vivelin/SMZ3Randomizer | https://api.github.com/repos/Vivelin/SMZ3Randomizer | opened | Feature toggles | maintainability | Make it possible to toggle specific features on or off.
This should let us experiment a little more freely and, as a bonus, give users the ability to turn off parts entirely (e.g. the hints and spoilers module). | True | Feature toggles - Make it possible to toggle specific features on or off.
This should let us experiment a little more freely and, as a bonus, give users the ability to turn off parts entirely (e.g. the hints and spoilers module). | main | feature toggles make it possible to toggle specific features on or off this should let us experiment a little more freely and as a bonus give users the ability to turn off parts entirely e g the hints and spoilers module | 1 |
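The randomizer itself is C#, but the shape of the idea is language-independent. A minimal sketch of a toggle registry (illustrative names only, in Python for brevity):

```python
from dataclasses import dataclass, field

@dataclass
class FeatureToggles:
    """Registry of optional features; everything defaults to enabled."""
    flags: dict = field(default_factory=lambda: {"hints": True, "spoilers": True})

    def enabled(self, name):
        return self.flags.get(name, False)

toggles = FeatureToggles()
toggles.flags["spoilers"] = False  # user turned the spoilers module off entirely

if toggles.enabled("hints"):
    print("hint system active")
```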
4,682 | 2,742,210,936 | IssuesEvent | 2015-04-21 15:26:08 | aspnet/HttpAbstractions | https://api.github.com/repos/aspnet/HttpAbstractions | opened | ApiReview: Rename Microsoft.AspNet.Builder namespace | enhancement needs design | @davidfowl Please don't just put it all into the Http namespace. | 1.0 | ApiReview: Rename Microsoft.AspNet.Builder namespace - @davidfowl Please don't just put it all into the Http namespace. | non_main | apireview rename microsoft aspnet builder namespace davidfowl please don t just put it all into the http namespace | 0 |
5,098 | 26,007,964,587 | IssuesEvent | 2022-12-20 21:26:40 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Include parameters from file in template.yaml | type/feature stage/needs-feedback area/layers stage/pm-review maintainer/need-response |
### Describe your idea/feature/enhancement
Bref provides Lambda runtimes for PHP. The problem is dealing with layer versions (we had to build [runtimes.bref.sh](https://runtimes.bref.sh/) to track them).
The idea would be to let users of the layers forget about versions. E.g.
```yaml
Resources:
SimpleFunction:
Type: AWS::Serverless::Function
Properties:
Handler: function.php
Runtime: provided
Layers:
- !Ref PhpLayer
# instead of
#- 'arn:aws:lambda:us-east-1:209497400698:layer:php-73:1'
```
The problem: making that `PhpLayer` parameter available "automatically" to users.
### Proposal
I was trying to use `Fn::Transform` and the `Include` feature of CloudFormation, but I couldn't get it to work no matter what I tried.
I'm guessing it's conflicting with `Transform: AWS::Serverless-2016-10-31`? Is there a way we could include sub-templates in `template.yaml`?
The sub-template would define the `PhpLayer` parameter with the appropriate value, and would be managed through PHP's package manager (Composer, the PHP equivalent of npm). For example:
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
'Fn::Transform':
Name: 'AWS::Include'
Parameters:
# this file defines the `PhpLayer` parameter
Location: "file://vendor/bref/parameters.yaml"
Resources:
...
``` | True | Include parameters from file in template.yaml -
### Describe your idea/feature/enhancement
Bref provides Lambda runtimes for PHP. The problem is dealing with layer versions (we had to build [runtimes.bref.sh](https://runtimes.bref.sh/) to track them).
The idea would be to let users of the layers forget about versions. E.g.
```yaml
Resources:
SimpleFunction:
Type: AWS::Serverless::Function
Properties:
Handler: function.php
Runtime: provided
Layers:
- !Ref PhpLayer
# instead of
#- 'arn:aws:lambda:us-east-1:209497400698:layer:php-73:1'
```
The problem: making that `PhpLayer` parameter available "automatically" to users.
### Proposal
I was trying to use `Fn::Transform` and the `Include` feature of CloudFormation, but I couldn't get it to work no matter what I tried.
I'm guessing it's conflicting with `Transform: AWS::Serverless-2016-10-31`? Is there a way we could include sub-templates in `template.yaml`?
The sub-template would define the `PhpLayer` parameter with the appropriate value, and would be managed through PHP's package manager (Composer, the PHP equivalent of npm). For example:
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
'Fn::Transform':
Name: 'AWS::Include'
Parameters:
# this file defines the `PhpLayer` parameter
Location: "file://vendor/bref/parameters.yaml"
Resources:
...
``` | main | include parameters from file in template yaml describe your idea feature enhancement provides lambda runtimes for php the problem is dealing with layer versions we had to build to track them the idea would be to let users of the layers forget about versions e g yaml resources simplefunction type aws serverless function properties handler function php runtime provided layers ref phplayer instead of arn aws lambda us east layer php the problem making that phplayer parameter available automatically to users proposal i was trying to use fn transform and the include feature of cloudformation but i couldn t get it to work no matter what i tried i m guessing it s conflicting with transform aws serverless is there a way we could include sub templates in template yaml the sub template would define the phplayer parameter with the appropriate value and would be managed through php s package manager composer the php equivalent of npm for example yaml awstemplateformatversion transform aws serverless fn transform name aws include parameters this file defines the phplayer parameter location file vendor bref parameters yaml resources | 1 |
301,872 | 9,232,218,392 | IssuesEvent | 2019-03-13 06:13:43 | richelbilderbeek/pirouette | https://api.github.com/repos/richelbilderbeek/pirouette | closed | pirouette article: appendix must show all figures | medium priority | I noticed that in example 3 there was a mismatch between the article and the code. To prevent this from happening, show all figures in the appendix.
Sure, these must be generated and added to the scripts in `pirouette_article`. | 1.0 | pirouette article: appendix must show all figures - I noticed that in example 3 there was a mismatch between the article and the code. To prevent this from happening, show all figures in the appendix.
Sure, these must be generated and added to the scripts in `pirouette_article`. | non_main | pirouette article appendix must show all figures i noticed that in example there was a mismatch between the article and the code to prevent this from happening show all figures in the appendix sure these must be generated and added to the scripts in pirouette article | 0 |
4,098 | 19,323,273,005 | IssuesEvent | 2021-12-14 08:42:33 | WarenGonzaga/daisy.js | https://api.github.com/repos/WarenGonzaga/daisy.js | opened | maintenance misc updates | chore maintainers only todo tweak | I just love to put it here so I'm aware of the tasks needed to be done.
- [ ] updated readme format
- [ ] contributing guide
- [ ] security policy
- [ ] code of conduct policy | True | maintenance misc updates - I just love to put it here so I'm aware of the tasks needed to be done.
- [ ] updated readme format
- [ ] contributing guide
- [ ] security policy
- [ ] code of conduct policy | main | maintenance misc updates i just love to put it here so i m aware of the tasks needed to be done updated readme format contributing guide security policy code of conduct policy | 1 |
5,153 | 26,254,104,891 | IssuesEvent | 2023-01-05 22:18:20 | aws/serverless-application-model | https://api.github.com/repos/aws/serverless-application-model | closed | Orphan Log Group | type/bug stage/bug-repro area/api-gateway maintainer/need-followup | This is not (I'm pretty sure) related to this bug: https://github.com/aws/serverless-application-model/issues/1216
Log group created by template with:
```yaml
Type: 'AWS::Logs::LogGroup'
DeletionPolicy: Delete
Properties:
  RetentionInDays: 30
  LogGroupName: ...
```
The above is not a copy-paste, not the real template.
The API Gateway (`AWS::Serverless::Api`) references the log group like this:
```
AccessLogSetting: {
'DestinationArn': {
'Fn::Sub': '${WalletApiGatewayAccessLogGroup.Arn}'
},
'Format':"{'method':'$context.httpMethod','path':'$context.path','requestId':'$context.requestId','resourcePath':'$context.resourcePath','status':'$context.status','responseLatency':'$context.responseLatency','responseLength':'$context.responseLength','sourceIp':'$context.identity.sourceIp','xrayTraceId':'$context.xrayTraceId','requestTime':'$context.requestTime','gatewayStage':'$context.stage','protocol':'$context.protocol','gatewayErrorMsg':'$context.error.message','integrationErrorMsg':'$context.integration.error','integrationLatency':'$context.integrationLatency'}"
}
```
If I remove the above reference, deploy, and then delete the stack, the log group is created and then deleted. If not, the stack deletes _without_ error, and the log group is left behind as an orphan. The 'DeletionPolicy: Delete' changes nothing.
If I remove the log group definition and leave the reference to it in the API Gateway, the log group is not created automatically (as it would be for a Lambda function); the deployment fails.
I think this is a bug: if both the API Gateway and the log group are defined and created by CloudFormation, then deleting the stack should delete everything in it.
The stack is deployed from an Ubuntu 18 VM in Azure DevOps using SAM CLI v1.36.0.
AWS Region us-east-2
| True | Orphan Log Group - This is not (I'm pretty sure) related to this bug: https://github.com/aws/serverless-application-model/issues/1216
Log group created by template with:
```yaml
Type: 'AWS::Logs::LogGroup'
DeletionPolicy: Delete
Properties:
  RetentionInDays: 30
  LogGroupName: ...
```
The above is not a copy-paste, not the real template.
The API Gateway (`AWS::Serverless::Api`) references the log group like this:
```
AccessLogSetting: {
'DestinationArn': {
'Fn::Sub': '${WalletApiGatewayAccessLogGroup.Arn}'
},
'Format':"{'method':'$context.httpMethod','path':'$context.path','requestId':'$context.requestId','resourcePath':'$context.resourcePath','status':'$context.status','responseLatency':'$context.responseLatency','responseLength':'$context.responseLength','sourceIp':'$context.identity.sourceIp','xrayTraceId':'$context.xrayTraceId','requestTime':'$context.requestTime','gatewayStage':'$context.stage','protocol':'$context.protocol','gatewayErrorMsg':'$context.error.message','integrationErrorMsg':'$context.integration.error','integrationLatency':'$context.integrationLatency'}"
}
```
If I remove the above reference, deploy, and then delete the stack, the log group is created and then deleted. If not, the stack deletes _without_ error, and the log group is left behind as an orphan. The 'DeletionPolicy: Delete' changes nothing.
If I remove the log group definition and leave the reference to it in the API Gateway, the log group is not created automatically (as it would be for a Lambda function); the deployment fails.
I think this is a bug: if both the API Gateway and the log group are defined and created by CloudFormation, then deleting the stack should delete everything in it.
The stack is deployed from an Ubuntu 18 VM in Azure DevOps using SAM CLI v1.36.0.
AWS Region us-east-2
| main | orphan log group this is not i m pretty sure related to this bug log group created by template with yaml type aws logs loggroup deletionpolicy delete properties retentionindays loggroupname the above is not a copy paste not the real template the apigw aws serverless api ref s the log group like this accesslogsetting destinationarn fn sub walletapigatewayaccessloggroup arn format method context httpmethod path context path requestid context requestid resourcepath context resourcepath status context status responselatency context responselatency responselength context responselength sourceip context identity sourceip xraytraceid context xraytraceid requesttime context requesttime gatewaystage context stage protocol context protocol gatewayerrormsg context error message integrationerrormsg context integration error integrationlatency context integrationlatency if i remove the above reference deploy and then delete the stack the log group is created then deleted if not the stack deletes without error and the log group is left behind as an orphan the deletionpolicy delete changes nothing if i remove the log group def and leave the reference to the log group in the apigw the log group is not eventually created like with a lambda the deployment fails i think this is a bug if both the apigw and the log group are defined and created in by cf then deleting the stack should delete the entire stack the stack is deployed by an ubuntu vm in azure devops using sam cli aws region us east | 1 |
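Until the underlying behavior is fixed, the orphaned group has to be cleaned up out of band. A minimal sketch of such a cleanup with boto3 (the stack and log group names are placeholders; this is a workaround, not the fix the report asks for):

```python
import boto3

REGION = "us-east-2"
cfn = boto3.client("cloudformation", region_name=REGION)
logs = boto3.client("logs", region_name=REGION)

def delete_orphaned_log_group(stack_name, log_group_name):
    """Delete a log group left behind after its owning stack was deleted."""
    summaries = cfn.list_stacks(StackStatusFilter=["DELETE_COMPLETE"])["StackSummaries"]
    if not any(s["StackName"] == stack_name for s in summaries):
        return  # stack was never deleted; nothing to clean up
    existing = logs.describe_log_groups(logGroupNamePrefix=log_group_name)["logGroups"]
    if any(g["logGroupName"] == log_group_name for g in existing):
        logs.delete_log_group(logGroupName=log_group_name)

delete_orphaned_log_group("wallet-api-stack", "/aws/apigateway/wallet-access-logs")
```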
1,892 | 6,577,533,669 | IssuesEvent | 2017-09-12 01:34:46 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_vol: Add support for custom KMS keys to ec2_vol | affects_2.0 aws cloud feature_idea waiting_on_maintainer | ##### Issue Type:
- Feature Idea
##### Plugin Name:
cloud/amazon/ec2_vol.py
##### Ansible Version:
```
ansible 2.0.1.0
```
##### Ansible Configuration:
```
[defaults]
retry_files_enabled=False
host_key_checking=False
pipelining=True
```
##### Environment:
```
CentOS 7.2
python2-boto-2.39.0-1.el7.noarch.rpm (from EPEL testing)
```
##### Summary:
Currently the module `ec2_vol` (right along with most of the other modules concerned with encryption on AWS) doesn't support specifying a custom encryption key. One is only able to specify whether the volume is encrypted or not, and thus the default encryption key (in this case, the default EBS key) is used when encryption is enabled.
So this request is about adding a quick fix to the `ec2_vol` module in the form of a new parameter, but I guess a broader review of KMS support might make sense in the future.
| True | ec2_vol: Add support for custom KMS keys to ec2_vol - ##### Issue Type:
- Feature Idea
##### Plugin Name:
cloud/amazon/ec2_vol.py
##### Ansible Version:
```
ansible 2.0.1.0
```
##### Ansible Configuration:
```
[defaults]
retry_files_enabled=False
host_key_checking=False
pipelining=True
```
##### Environment:
```
CentOS 7.2
python2-boto-2.39.0-1.el7.noarch.rpm (from EPEL testing)
```
##### Summary:
Currently the module `ec2_vol` (right along with most of the other modules concerned with encryption on AWS) doesn't support specifying a custom encryption key. One is only able to specify whether the volume is encrypted or not, and thus the default encryption key (in this case, the default EBS key) is used when encryption is enabled.
So this request is about adding a quick fix to the `ec2_vol` module in the form of a new parameter, but I guess a broader review of KMS support might make sense in the future.
| main | vol add support for custom kms keys to vol issue type feature idea plugin name cloud amazon vol py ansible version ansible ansible configuration retry files enabled false host key checking false pipelining true environment centos boto noarch rpm from epel testing summary currently the module vol right along with most of the other modules concerned with encryption on aws doesn t support specifying a custom encryption key one is only able to specify whether the volume is encrypted or not and thus the default encryption key in this case here for ebs is used when encryption is enabled so this request is about adding sort of a quick fix to the vol module by adding a new parameter but i guess for the future a more global review of the topic kms might make sense | 1 |
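The underlying EC2 API already supports this, so the module would mostly need to pass one extra option through. For illustration, here is the equivalent call in boto3 (the module itself uses boto2, which I believe exposes a matching `kms_key_id` argument in recent releases, though that would need checking; the key ARN below is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder ARN; any customer-managed key usable for EBS would do.
KMS_KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000"

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=10,
    VolumeType="gp2",
    Encrypted=True,        # all the module supports today: the default EBS key
    KmsKeyId=KMS_KEY_ARN,  # the requested feature: an explicit custom key
)
print(volume["VolumeId"])
```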
3,253 | 12,402,316,253 | IssuesEvent | 2020-05-21 11:43:29 | ocaml/opam-repository | https://api.github.com/repos/ocaml/opam-repository | closed | ppx_custom_printf not properly constrained | Stale needs maintainer action | I'm struggling to figure out the source of [these errors in my Travis build](https://travis-ci.org/hammerlab/prohlatype/jobs/318309489):
```
#=== ERROR while installing ppx_custom_printf.v0.9.0 ==========================#
# opam-version 1.2.2
# os linux
# command jbuilder build --only-packages ppx_custom_printf --root . -j 4 @install
# path /home/travis/.opam/4.05.0/build/ppx_custom_printf.v0.9.0
# compiler 4.05.0
# exit-code 1
# env-file /home/travis/.opam/4.05.0/build/ppx_custom_printf.v0.9.0/ppx_custom_printf-12929-4ee787.env
# stdout-file /home/travis/.opam/4.05.0/build/ppx_custom_printf.v0.9.0/ppx_custom_printf-12929-4ee787.out
# stderr-file /home/travis/.opam/4.05.0/build/ppx_custom_printf.v0.9.0/ppx_custom_printf-12929-4ee787.err
### stderr ###
# [...]
# ocamldep src/ppx_custom_printf.dependsi.ocamldep-output
# ppx src/ppx_custom_printf.pp.ml
# ocamlc src/ppx_custom_printf.{cmi,cmti}
# ppx src/format_lifter.pp.ml
# ocamldep src/ppx_custom_printf.depends.ocamldep-output
# ocamlc src/ppx_custom_printf__Format_lifter.{cmi,cmo,cmt} (exit 2)
# (cd _build/default && /home/travis/.opam/4.05.0/bin/ocamlc.opt -w -40 -w -3 -g -bin-annot -I /home/travis/.opam/4.05.0/lib/base -I /home/travis/.opam/4.05.0/lib/base/caml -I /home/travis/.opam/4.05.0/lib/base/shadow_stdlib -I /home/travis/.opam/4.05.0/lib/ocaml-compiler-libs/common -I /home/travis/.opam/4.05.0/lib/ocaml-compiler-libs/shadow -I /home/travis/.opam/4.05.0/lib/ocaml-migrate-parsetree -I /home/travis/.opam/4.05.0/lib/ocaml/compiler-libs -I /home/travis/.opam/4.05.0/lib/ppx_ast -I /home/travis/.opam/4.05.0/lib/ppx_core -I /home/travis/.opam/4.05.0/lib/ppx_deriving -I /home/travis/.opam/4.05.0/lib/ppx_driver -I /home/travis/.opam/4.05.0/lib/ppx_driver/print_diff -I /home/travis/.opam/4.05.0/lib/ppx_metaquot/lifters -I /home/travis/.opam/4.05.0/lib/ppx_optcomp -I /home/travis/.opam/4.05.0/lib/ppx_sexp_conv/expander -I /home/travis/.opam/4.05.0/lib/ppx_traverse_builtins -I /home/travis/.opam/4.05.0/lib/result -I /home/travis/.opam/4.05.0/lib/sexplib/0 -I /home/travis/.opam/4.05.0/lib/stdio -no-alias-deps -I src -open Ppx_custom_printf__ -o src/ppx_custom_printf__Format_lifter.cmo -c -impl src/format_lifter.pp.ml)
# File "src/format_lifter.ml", line 1, characters 0-9585:
# Error: Multiple definition of the type name lift.
# Names must be unique in a given structure or signature.
```
But the dependency chain is biocaml.0.8.0 -> core_kernel.v0.9.0.
OCaml: 4.05.0 | True | ppx_custom_printf not properly constrained - I'm struggling to figure out the source of [these errors in my Travis build](https://travis-ci.org/hammerlab/prohlatype/jobs/318309489):
```
#=== ERROR while installing ppx_custom_printf.v0.9.0 ==========================#
# opam-version 1.2.2
# os linux
# command jbuilder build --only-packages ppx_custom_printf --root . -j 4 @install
# path /home/travis/.opam/4.05.0/build/ppx_custom_printf.v0.9.0
# compiler 4.05.0
# exit-code 1
# env-file /home/travis/.opam/4.05.0/build/ppx_custom_printf.v0.9.0/ppx_custom_printf-12929-4ee787.env
# stdout-file /home/travis/.opam/4.05.0/build/ppx_custom_printf.v0.9.0/ppx_custom_printf-12929-4ee787.out
# stderr-file /home/travis/.opam/4.05.0/build/ppx_custom_printf.v0.9.0/ppx_custom_printf-12929-4ee787.err
### stderr ###
# [...]
# ocamldep src/ppx_custom_printf.dependsi.ocamldep-output
# ppx src/ppx_custom_printf.pp.ml
# ocamlc src/ppx_custom_printf.{cmi,cmti}
# ppx src/format_lifter.pp.ml
# ocamldep src/ppx_custom_printf.depends.ocamldep-output
# ocamlc src/ppx_custom_printf__Format_lifter.{cmi,cmo,cmt} (exit 2)
# (cd _build/default && /home/travis/.opam/4.05.0/bin/ocamlc.opt -w -40 -w -3 -g -bin-annot -I /home/travis/.opam/4.05.0/lib/base -I /home/travis/.opam/4.05.0/lib/base/caml -I /home/travis/.opam/4.05.0/lib/base/shadow_stdlib -I /home/travis/.opam/4.05.0/lib/ocaml-compiler-libs/common -I /home/travis/.opam/4.05.0/lib/ocaml-compiler-libs/shadow -I /home/travis/.opam/4.05.0/lib/ocaml-migrate-parsetree -I /home/travis/.opam/4.05.0/lib/ocaml/compiler-libs -I /home/travis/.opam/4.05.0/lib/ppx_ast -I /home/travis/.opam/4.05.0/lib/ppx_core -I /home/travis/.opam/4.05.0/lib/ppx_deriving -I /home/travis/.opam/4.05.0/lib/ppx_driver -I /home/travis/.opam/4.05.0/lib/ppx_driver/print_diff -I /home/travis/.opam/4.05.0/lib/ppx_metaquot/lifters -I /home/travis/.opam/4.05.0/lib/ppx_optcomp -I /home/travis/.opam/4.05.0/lib/ppx_sexp_conv/expander -I /home/travis/.opam/4.05.0/lib/ppx_traverse_builtins -I /home/travis/.opam/4.05.0/lib/result -I /home/travis/.opam/4.05.0/lib/sexplib/0 -I /home/travis/.opam/4.05.0/lib/stdio -no-alias-deps -I src -open Ppx_custom_printf__ -o src/ppx_custom_printf__Format_lifter.cmo -c -impl src/format_lifter.pp.ml)
# File "src/format_lifter.ml", line 1, characters 0-9585:
# Error: Multiple definition of the type name lift.
# Names must be unique in a given structure or signature.
```
But the dependency chain is biocaml.0.8.0 -> core_kernel.v0.9.0.
OCaml: 4.05.0 | main | ppx custom printf not properly constrained i m struggling to figure out the source of error while installing ppx custom printf opam version os linux command jbuilder build only packages ppx custom printf root j install path home travis opam build ppx custom printf compiler exit code env file home travis opam build ppx custom printf ppx custom printf env stdout file home travis opam build ppx custom printf ppx custom printf out stderr file home travis opam build ppx custom printf ppx custom printf err stderr ocamldep src ppx custom printf dependsi ocamldep output ppx src ppx custom printf pp ml ocamlc src ppx custom printf cmi cmti ppx src format lifter pp ml ocamldep src ppx custom printf depends ocamldep output ocamlc src ppx custom printf format lifter cmi cmo cmt exit cd build default home travis opam bin ocamlc opt w w g bin annot i home travis opam lib base i home travis opam lib base caml i home travis opam lib base shadow stdlib i home travis opam lib ocaml compiler libs common i home travis opam lib ocaml compiler libs shadow i home travis opam lib ocaml migrate parsetree i home travis opam lib ocaml compiler libs i home travis opam lib ppx ast i home travis opam lib ppx core i home travis opam lib ppx deriving i home travis opam lib ppx driver i home travis opam lib ppx driver print diff i home travis opam lib ppx metaquot lifters i home travis opam lib ppx optcomp i home travis opam lib ppx sexp conv expander i home travis opam lib ppx traverse builtins i home travis opam lib result i home travis opam lib sexplib i home travis opam lib stdio no alias deps i src open ppx custom printf o src ppx custom printf format lifter cmo c impl src format lifter pp ml file src format lifter ml line characters error multiple definition of the type name lift names must be unique in a given structure or signature but the dependency chain is biocaml core kernel ocaml | 1 |
2,497 | 8,655,458,056 | IssuesEvent | 2018-11-27 16:00:19 | codestation/qcma | https://api.github.com/repos/codestation/qcma | closed | PSVita disconnects once when trying to browse remote multimedia files. | bug unmaintained | It seems that something changed in the latest FW versions, so this disconnection only happens on qcma (it works fine on the official CMA).
I have checked Wireshark logs and tried to make the connection byte-for-byte identical to the official one, but it still disconnects after the `PTP_EC_VITA_RequestGetSettingInfo` event finishes.
Looks like a timing issue to me, but I cannot find the root cause yet. This bug blocks the 0.3.10 release and a possible official Debian package.
| True | PSVita disconnects once when trying to browse remote multimedia files. - It seems that something changed in the latest FW versions, so this disconnection only happens on qcma (it works fine on the official CMA).
I have checked Wireshark logs and tried to make the connection byte-for-byte identical to the official one, but it still disconnects after the `PTP_EC_VITA_RequestGetSettingInfo` event finishes.
Looks like a timing issue to me, but I cannot find the root cause yet. This bug blocks the 0.3.10 release and a possible official Debian package.
| main | psvita disconnects once when trying to browse remote multimedia files it seems that something changed on the last fw versions so this disconnection only happens on qcma works fine on official cma i have checked wireshark logs and tried to make byte per byte equal connection as the official one but it still disconnects after the ptp ec vita requestgetsettinginfo event finishes looks like a timing issue to me but cannot find the root cause yet this bug blocks the release and the possible official debian package | 1 |
5,718 | 30,220,958,076 | IssuesEvent | 2023-07-05 19:21:30 | 0ptim/JellyChat | https://api.github.com/repos/0ptim/JellyChat | opened | Remove freeze commands | docs/maintainance area:general | Because we no longer use freeze to add dependencies, but rather add them manually if needed. | True | Remove freeze commands - Because we no longer use freeze to add dependencies, but rather add them manually if needed. | main | remove freeze commands because we no longer use freeze to add dependencies but rather add them manually if needed | 1 |
62,745 | 12,238,232,355 | IssuesEvent | 2020-05-04 19:26:23 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | [Issue] Legacy Carrier Missing | Code | Unsure if this was reported already, couldn't find anything like it. For some reason the legacycarrier seems to be missing from the game and you are unable to spawn it in and can't see it in the charactereditor. Obviously inconsistent with the legacy treatment of every other monster. | 1.0 | [Issue] Legacy Carrier Missing - Unsure if this was reported already, couldn't find anything like it. For some reason the legacycarrier seems to be missing from the game and you are unable to spawn it in and can't see it in the charactereditor. Obviously inconsistent with the legacy treatment of every other monster. | non_main | legacy carrier missing unsure if this was reported already couldn t find anything like it for some reason the legacycarrier seems to be missing from the game and you are unable to spawn it in and can t see it in the charactereditor obviously inconsistent with the legacy treatment of every other monster | 0 |
98,687 | 20,779,737,186 | IssuesEvent | 2022-03-16 13:48:39 | mozilla/addons-server | https://api.github.com/repos/mozilla/addons-server | closed | Exception about request.session not being set raised for invalid multipart requests | qa: not needed component: code quality priority: p4 | We've seen this weird exception in Sentry sometimes:
```
CSRF_USE_SESSIONS is enabled, but request.session is not set. SessionMiddleware must appear before CsrfViewMiddleware in MIDDLEWARE.
```
But in our settings, our custom child of `SessionMiddleware`, `NoVarySessionMiddleware`, does appear
before `CsrfViewMiddleware`...
The key is that it happens when the request itself is malformed. Specifically, I've been able to reproduce it with bad multipart/form-data requests like these:
```sh
% curl https://addons-dev.allizom.org/api/v5/addons/upload/ -H 'Content-Type: multipart/form-data' -d '' -H 'Authorization: Session xxxxx'
```
We expect the following 400 error response:
```json
["Missing \"upload\" key in multipart file data."]
```
We get a 500 instead with the aforementioned exception logged in Sentry. The reason why is that for this request, we go through our `LocaleAndAppURLMiddleware`, and it does this:
https://github.com/mozilla/addons-server/blob/129f8c1291f4c78ead877b5d97117c64b706e905/src/olympia/amo/middleware.py#L120
Which itself does this:
https://github.com/mozilla/addons-server/blob/129f8c1291f4c78ead877b5d97117c64b706e905/src/olympia/amo/urlresolvers.py#L84
And that triggers Django post data and files loading from the request, which it cannot do because the request is invalid. It wants to trigger an exception (`MultiPartParserError`) for this, but this is supposed to happen from a view (where all middlewares would be loaded already), not from inside a middleware, especially not as early in the chain as we are.
Django error handling could be smarter about this and fix this, but we could also work around this on our own: maybe we can avoid loading `request.POST` like that, at least for all requests, or maybe we can catch the problem in `Prefixer.get_language()`, or maybe rework our middlewares order to ensure `request.session` is loaded early enough to avoid this entirely.
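A hypothetical sketch of the `Prefixer.get_language()` option (not the actual addons-server code): swallow the parse error while sniffing an explicit language override, so malformed requests fall back to header-based detection instead of exploding mid-middleware.
```python
# Hypothetical sketch only -- not the real Prefixer.get_language() code.
from django.http.multipartparser import MultiPartParserError


def language_override(request):
    try:
        # Accessing request.POST forces body parsing, which raises
        # MultiPartParserError for malformed multipart payloads.
        return request.GET.get('lang') or request.POST.get('lang')
    except MultiPartParserError:
        # Malformed body: pretend no override was given and let the
        # usual Accept-Language handling take over.
        return None
```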
| 1.0 | Exception about request.session not being set raised for invalid multipart requests - We've seen this weird exception in Sentry sometimes:
```
CSRF_USE_SESSIONS is enabled, but request.session is not set. SessionMiddleware must appear before CsrfViewMiddleware in MIDDLEWARE.
```
But in our settings, our custom child of `SessionMiddleware`, `NoVarySessionMiddleware`, does appear
before `CsrfViewMiddleware`...
The key is that it happens when the request itself is malformed. Specifically, I've been able to reproduce it with bad multipart/form-data requests like these:
```sh
% curl https://addons-dev.allizom.org/api/v5/addons/upload/ -H 'Content-Type: multipart/form-data' -d '' -H 'Authorization: Session xxxxx'
```
We expect the following 400 error response:
```json
["Missing \"upload\" key in multipart file data."]
```
We get a 500 instead with the aforementioned exception logged in Sentry. The reason why is that for this request, we go through our `LocaleAndAppURLMiddleware`, and it does this:
https://github.com/mozilla/addons-server/blob/129f8c1291f4c78ead877b5d97117c64b706e905/src/olympia/amo/middleware.py#L120
Which itself does this:
https://github.com/mozilla/addons-server/blob/129f8c1291f4c78ead877b5d97117c64b706e905/src/olympia/amo/urlresolvers.py#L84
And that triggers Django post data and files loading from the request, which it cannot do because the request is invalid. It wants to trigger an exception (`MultiPartParserError`) for this, but this is supposed to happen from a view (where all middlewares would be loaded already), not from inside a middleware, especially not as early in the chain as we are.
Django error handling could be smarter about this and fix this, but we could also work around this on our own: maybe we can avoid loading `request.POST` like that, at least for all requests, or maybe we can catch the problem in `Prefixer.get_language()`, or maybe rework our middlewares order to ensure `request.session` is loaded early enough to avoid this entirely.
| non_main | exception about request session not being set raised for invalid multipart requests we ve seen this weird exception in sentry sometimes csrf use sessions is enabled but request session is not set sessionmiddleware must appear before csrfviewmiddleware in middleware but in our settings our custom child of sessionmiddleware novarysessionmiddleware does appear before csrfviewmiddleware the key is that it happens when the request itself is malformed specifically i ve been able to reproduce it with bad multipart form data requests like these sh curl h content type multipart form data d h authorization session xxxxx we expect the following error response json we get a instead with the aforementioned exception logged in sentry the reason why is that for this request we go through our localeandappurlmiddleware and it does this which itself does this and that triggers django post data and files loading from the request which it cannot do because the request is invalid it wants to trigger an exception multipartparsererror for this but this is supposed to happen from a view where all middlewares would be loaded already not from inside a middleware especially not as early in the chain as we are django error handling could be smarter about this and fix this but we could also work around this on our own maybe we can avoid loading request post like that at least for all requests or maybe we can catch the problem in prefixer get language or maybe rework our middlewares order to ensure request session is loaded early enough to avoid this entirely | 0 |
913 | 4,581,950,419 | IssuesEvent | 2016-09-19 08:24:51 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | eos_config: `.updates` not defined when using `src:` - improve docs | affects_2.2 bug_report networking waiting_on_maintainer |
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_config
##### ANSIBLE VERSION
```
ansible --version
ansible 2.2.0 (eos_cmd_v_item 15cf123420) last updated 2016/09/13 12:04:55 (GMT +100)
lib/ansible/modules/core: (devel ae6992bf8c) last updated 2016/09/13 09:19:01 (GMT +100)
lib/ansible/modules/extras: (devel 1f6f3b72db) last updated 2016/09/13 09:19:10 (GMT +100)
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
in `eos_template` when using `src:` we return `.updates` (assuming there are any)
In `eos_config` (the replacement) we only return `.updates` when the module has been called with `lines:`.
I was under the impression that `_config` should have feature parity with the older `_template` modules
This *may* be defined as a feature request, rather than a bug.
Also from looking at the code this may well apply to all `_config` modules.
##### STEPS TO REPRODUCE
```
- name: configure device with config
eos_config:
src: basic/config.j2
provider: "{{ cli }}"
register: result
- name: "XOXO debug"
debug:
msg: "{{ result }}"
```
##### EXPECTED RESULTS
`.updates` to be returned when there are changes
##### ACTUAL RESULTS
```
ok: [veos01] => {
"msg": {
"changed": true,
"diff": {
"prepared": "--- system:/running-config\n+++ session:/ansible_1473770349-session-config\n@@ -35,6 +35,8 @@\n shutdown\n !\n interface Ethernet5\n+ description this is a test\n+ shutdown\n !\n interface Ethernet6\n shutdown\n"
},
"warnings": []
}
}
```
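For context, the kind of follow-up task that `.updates` parity would enable (hypothetical snippet, reusing the `result` registered above):
```
- name: show the configuration lines that were pushed
  debug:
    var: result.updates
  when: result.changed
```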
| True | eos_config: `.updates` not defined when using `src:` - improve docs -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_config
##### ANSIBLE VERSION
```
ansible --version
ansible 2.2.0 (eos_cmd_v_item 15cf123420) last updated 2016/09/13 12:04:55 (GMT +100)
lib/ansible/modules/core: (devel ae6992bf8c) last updated 2016/09/13 09:19:01 (GMT +100)
lib/ansible/modules/extras: (devel 1f6f3b72db) last updated 2016/09/13 09:19:10 (GMT +100)
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
in `eos_template` when using `src:` we return `.updates` (assuming there are any)
In `eos_config` (the replacement) we only return `.updates` when the module has been called with `lines:`.
I was under the impression that `_config` should have feature parity with the older `_template` modules
This *may* be defined as a feature request, rather than a bug.
Also from looking at the code this may well apply to all `_config` modules.
##### STEPS TO REPRODUCE
```
- name: configure device with config
eos_config:
src: basic/config.j2
provider: "{{ cli }}"
register: result
- name: "XOXO debug"
debug:
msg: "{{ result }}"
```
##### EXPECTED RESULTS
`.updates` to be returned when there are changes
##### ACTUAL RESULTS
```
ok: [veos01] => {
"msg": {
"changed": true,
"diff": {
"prepared": "--- system:/running-config\n+++ session:/ansible_1473770349-session-config\n@@ -35,6 +35,8 @@\n shutdown\n !\n interface Ethernet5\n+ description this is a test\n+ shutdown\n !\n interface Ethernet6\n shutdown\n"
},
"warnings": []
}
}
```
| main | eos config updates not defined when using src improve docs issue type bug report component name eos config ansible version ansible version ansible eos cmd v item last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt configuration os environment summary in eos template when using src we return updates assuming there are any in eos config the replacement we only return updates when the module has been called with lines i was under the impression that config should have feature parity with the older template modules this may be defined as a feature request rather than a bug also from looking at the code this may well apply to all config modules steps to reproduce name configure device with config eos config src basic config provider cli register result name xoxo debug debug msg result expected results updates to be returned when there are changes actual results ok msg changed true diff prepared system running config n session ansible session config n n shutdown n n interface n description this is a test n shutdown n n interface n shutdown n warnings | 1 |
6,818 | 3,910,657,375 | IssuesEvent | 2016-04-20 00:05:39 | haskell/cabal | https://api.github.com/repos/haskell/cabal | closed | Package with custom Setup.hs "Encountered missing dependencies" | bug nix-local-build urgent | New report (edited by @ezyang)
Cabal 1.23 and later #2731 allow you to skip specifying dependencies which are not part of a buildable component. cabal-install was updated to take advantage of this fact.
However, when a package has a `Custom` setup script, it is possible for the Setup script to be built against an old version of Cabal, which doesn't know to ignore non-buildable dependencies. In this case, cabal-install will pass an insufficient set of dependencies, resulting in an error like this:
```
setup: At least the following dependencies are missing:
process -any, temporary >=1.1
```
(where these are dependencies of non-buildable components.)
A workaround is to explicitly request that all components be built. For example, if there is some flag which must be selected to make a component buildable, you should pass `--constraint="package-name +flagname"`
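Concretely, the workaround invocation looks something like this (`package-name` and `flagname` are placeholders, not real values):
```
# Force the flag that makes the otherwise-skipped component buildable,
# so its dependencies are planned and handed to the old custom Setup.hs.
cabal install --constraint="package-name +flagname" package-name
```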
----
Original bug report:
Trying to build pandoc-citeproc (which has a custom Setup.hs with a couple of hooks) using the latest-packaged version from git in the HVR repository (Version: 1.23+git20160204.0.7aab356~wily) fails to find dependencies already installed in a sandbox (whether the dependencies are installed manually or via the dependency solver).
$ uname -a
Linux <hostname> 4.2.0-30-generic #35-Ubuntu SMP Fri Feb 19 13:52:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.10.3
$ cabal --version
cabal-install version 1.23.0.0
compiled using version 1.23.1.0 of the Cabal library
$ cabal sandbox init
$ cabal install pandoc-citeproc
Configuring pandoc-citeproc-0.9...
setup: At least the following dependencies are missing:
process -any, temporary >=1.1
$ cabal sandbox hc-pkg list process
process-1.2.3.0
$ cabal sandbox hc-pkg list temporary
temporary-1.2.0.4
This was previously filed as jgm/pandoc-citeproc#216 | 1.0 | Package with custom Setup.hs "Encountered missing dependencies" - New report (edited by @ezyang)
Cabal 1.23 and later #2731 allow you to skip specifying dependencies which are not part of a buildable component. cabal-install was updated to take advantage of this fact.
However, when a package has a `Custom` setup script, it is possible for the Setup script to be built against an old version of Cabal, which doesn't know to ignore non-buildable dependencies. In this case, cabal-install will pass an insufficient set of dependencies, resulting in an error like this:
```
setup: At least the following dependencies are missing:
process -any, temporary >=1.1
```
(where these are dependencies of non-buildable components.)
A workaround is to explicitly request that all components be built. For example, if there is some flag which must be selected to make a component buildable, you should pass `--constraint="package-name +flagname"`
----
Original bug report:
Trying to build pandoc-citeproc (which has a custom Setup.hs with a couple of hooks) using the latest-packaged version from git in the HVR repository (Version: 1.23+git20160204.0.7aab356~wily) fails to find dependencies already installed in a sandbox (whether the dependencies are installed manually or via the dependency solver).
$ uname -a
Linux <hostname> 4.2.0-30-generic #35-Ubuntu SMP Fri Feb 19 13:52:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.10.3
$ cabal --version
cabal-install version 1.23.0.0
compiled using version 1.23.1.0 of the Cabal library
$ cabal sandbox init
$ cabal install pandoc-citeproc
Configuring pandoc-citeproc-0.9...
setup: At least the following dependencies are missing:
process -any, temporary >=1.1
$ cabal sandbox hc-pkg list process
process-1.2.3.0
$ cabal sandbox hc-pkg list temporary
temporary-1.2.0.4
This was previously filed as jgm/pandoc-citeproc#216 | non_main | package with custom setup hs encountered missing dependencies new report edited by ezyang cabal and later allow you to skip specifying dependencies which are not part of a buildable component cabal install was updated to take advantage of this fact however when a package has a custom setup script it is possible for the setup script to be built against an old version of cabal which is doesn t know to ignore non buildable dependencies in this case cabal install will pass an insufficient set of dependencies resulting in an error like this setup at least the following dependencies are missing process any temporary where these are dependencies of non buildable components a workaround is to explicitly request that all components be built for example if there is some flag which must be selected to make a component buildable you should pass constraint package name flagname original bug report trying to build pandoc citeproc which has a custom setup hs with a couple of hooks using the latest packaged version from git in the hvr repository version wily fails to find dependencies already installed in a sandbox whether the dependencies are installed manually or via the dependency solver uname a linux generic ubuntu smp fri feb utc gnu linux ghc version the glorious glasgow haskell compilation system version cabal version cabal install version compiled using version of the cabal library cabal sandbox init cabal install pandoc citeproc configuring pandoc citeproc setup at least the following dependencies are missing process any temporary cabal sandbox hc pkg list process process cabal sandbox hc pkg list temporary temporary this was previously filed as jgm pandoc citeproc | 0 |
5,136 | 26,195,481,891 | IssuesEvent | 2023-01-03 13:05:17 | precice/precice | https://api.github.com/repos/precice/precice | opened | Can we assume readData is also receiveData? | bug question maintainability | In #1526 I stumbled over a [configuration file](https://github.com/precice/precice/blob/2552a91041bd93aec0b2eb9d5da62287367ef7a0/tests/serial/watch-integral/WatchIntegralScaleAndNoScale.xml) for one of the tests. The following part about the configuration looks odd: `DataTwo` is defined as `read-data` for `SolverTwo`, but there is no mapping or exchange defined for `DataTwo`. This looks to me like an invalid configuration that preCICE should be able to detect. From my perspective every `read-data` of a participant should appear in a corresponding `exchange` and therefore be part of the coupling scheme's receive data.
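For illustration, the expected pattern as a config sketch (hypothetical participant/mesh names, attributes abbreviated): each `read-data` is backed by a matching `exchange`:
```xml
<participant name="SolverTwo">
  <use-mesh name="MeshTwo" from="SolverOne"/>
  <read-data name="DataTwo" mesh="MeshTwo"/>
</participant>
<coupling-scheme:serial-explicit>
  <participants first="SolverOne" second="SolverTwo"/>
  <!-- without this exchange, the read-data above would never be filled -->
  <exchange data="DataTwo" mesh="MeshTwo" from="SolverOne" to="SolverTwo"/>
</coupling-scheme:serial-explicit>
```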
Main questions:
* Is the situation described above a valid use-case? I cannot imagine one.
* If we decide that the configuration above is invalid, how can we check it? What's the corrected version of the test in #1526
Additional context:
* Assuming that every readData is part of receiveData makes the implementation simpler and removes some edge cases in #1523. | True | Can we assume readData is also receiveData? - In #1526 I stumbled over a [configuration file](https://github.com/precice/precice/blob/2552a91041bd93aec0b2eb9d5da62287367ef7a0/tests/serial/watch-integral/WatchIntegralScaleAndNoScale.xml) for one of the tests. The following part about the configuration looks odd: `DataTwo` is defined as `read-data` for `SolverTwo`, but there is no mapping or exchange defined for `DataTwo`. This looks to me like an invalid configuration that preCICE should be able to detect. From my perspective every `read-data` of a participant should appear in a corresponding `exchange` and therefore be part of the coupling scheme's receive data.
Main questions:
* Is the situation described above a valid use-case? I cannot imagine one.
* If we decide that the configuration above is invalid, how can we check it? What's the corrected version of the test in #1526
Additional context:
* Assuming that every readData is part of receiveData makes the implementation simpler and removes some edge cases in #1523. | main | can we assume readdata is also receivedata in i stumbled over a for one of tests the following part about the configuration looks odd datatwo is defined as read data for solvertwo but there is no mapping or exchange defined for datatwo this looks to me like an invalid configuration that precice should be able to detect from my perspective every read data of a participant should appear in a corresponding exchange and therefore be part of the coupling scheme s receive data main questions is the situation descibed above a valid use case i cannot imagine one if we decide that the configuration above is invalid how can we check it what s the corrected version of the test in additional context assuming that every readdata is part of receivedata makes the implementation simpler and removes some edge cases in | 1 |
5,361 | 26,979,441,639 | IssuesEvent | 2023-02-09 12:01:46 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Application to join: jayelless (membership_entity module) | Port in progress Maintainer application | Hello and welcome to the contrib application process! We're happy to have you :)
## Please note these 3 requirements for new contrib projects:
- [x] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [x] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [x] If porting a Drupal 7 project, Maintain the Git history from Drupal.
## Please provide the following information:
**The name of your module, theme, or layout**
Membership Entity
**(Optional) Post a link here to an issue in the drupal.org queue notifying the Drupal 7 maintainers that you are working on a Backdrop port of their project**
https://www.drupal.org/project/membership_entity/issues/3223098
**Post a link to your new Backdrop project under your own GitHub account (option #1)**
https://github.com/jayelless/membership_entity
**OR (option #2) If you have already contributed code to Backdrop core or contrib projects, please provide 1-3 links to pull requests or commits**
**OR (option #3) If you do not intend to contribute code, but would like to update documentation, manage issue queues, etc, please tag an existing contrib group member so they can post their recommendation**
<!-- example: @jenlampton -->
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
<!-- Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project. -->
<!-- Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group. -->
| True | Application to join: jayelless (membership_entity module) - Hello and welcome to the contrib application process! We're happy to have you :)
## Please note these 3 requirements for new contrib projects:
- [x] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [x] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [x] If porting a Drupal 7 project, Maintain the Git history from Drupal.
## Please provide the following information:
**The name of your module, theme, or layout**
Membership Entity
**(Optional) Post a link here to an issue in the drupal.org queue notifying the Drupal 7 maintainers that you are working on a Backdrop port of their project**
https://www.drupal.org/project/membership_entity/issues/3223098
**Post a link to your new Backdrop project under your own GitHub account (option #1)**
https://github.com/jayelless/membership_entity
**OR (option #2) If you have already contributed code to Backdrop core or contrib projects, please provide 1-3 links to pull requests or commits**
**OR (option #3) If you do not intend to contribute code, but would like to update documentation, manage issue queues, etc, please tag an existing contrib group member so they can post their recommendation**
<!-- example: @jenlampton -->
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
<!-- Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project. -->
<!-- Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group. -->
| main | application to join jayelless membership entity module hello and welcome to the contrib application process we re happy to have you please note these requirements for new contrib projects include a readme md file containing license and maintainer information you can use this example include a license txt file you can use this example if porting a drupal project maintain the git history from drupal please provide the following information the name of your module theme or layout membership entity optional post a link here to an issue in the drupal org queue notifying the drupal maintainers that you are working on a backdrop port of their project post a link to your new backdrop project under your own github account option or option if you have already contributed code to backdrop core or contrib projects please provide links to pull requests or commits or option if you do not intend to contribute code but would like to update documentation manage issue queues etc please tag an existing contrib group member so they can post their recommendation if you have chosen option or above do you agree to the yes | 1 |
540 | 3,955,099,266 | IssuesEvent | 2016-04-29 19:32:29 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Conversions: including degree symbol in query returns no results | Low-Hanging Fruit Maintainer Approved | A DDG user reported that including the "°" symbol in the query will not trigger the conversions IA.
example: "90°C in f" does not work, but "90c in f" does.
------
IA Page: http://duck.co/ia/view/conversions
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mintsoft | True | Conversions: including degree symbol in query returns no results - A DDG user reported that including the "°" symbol in the query will not trigger the conversions IA.
example: "90°C in f" does not work, but "90c in f" does.
------
IA Page: http://duck.co/ia/view/conversions
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mintsoft | main | conversions including degree symbol in query returns no results a ddg user reported that including the ° symbol in the query will not trigger the conversions ia example °c in f does not work but in f does ia page mintsoft | 1 |
804,820 | 29,502,968,505 | IssuesEvent | 2023-06-03 01:32:15 | okTurtles/group-income | https://api.github.com/repos/okTurtles/group-income | closed | postpublish should be given message actually published | App:Frontend Level:Starter Priority:High Kind:Core | ### Problem
Currently message publishing looks like this:
```js
hooks && hooks.prepublish && hooks.prepublish(message)
await sbp('chelonia/private/out/publishEvent', message, publishOptions)
hooks && hooks.postpublish && hooks.postpublish(message)
```
Note that `postpublish` is given the same `message` given to `prepublish`, even if the message that was actually sent was a different message (because `publishEvent` had to recreate and resend the message).
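A minimal sketch of the intended flow (hypothetical; it matches the solution steps below, not the current implementation):
```js
hooks && hooks.prepublish && hooks.prepublish(message)
// publishEvent returns the message it actually sent (possibly recreated)
const sentMessage = await sbp('chelonia/private/out/publishEvent', message, publishOptions)
hooks && hooks.postpublish && hooks.postpublish(sentMessage)
return sentMessage
```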
### Solution
1. On success, `publishEvent` should return the actual message that got sent
2. That message should be what is passed in to `hooks.postpublish` and also returned from all of the `'chelonia/out/*'` functions
3. Make sure to update `'backend.tests/postEntry'` to remove `should(res).equal(entry.hash())` | 1.0 | postpublish should be given message actually published - ### Problem
Currently message publishing looks like this:
```js
hooks && hooks.prepublish && hooks.prepublish(message)
await sbp('chelonia/private/out/publishEvent', message, publishOptions)
hooks && hooks.postpublish && hooks.postpublish(message)
```
Note that `postpublish` is given the same `message` given to `prepublish`, even if the message that was actually sent was a different message (because `publishEvent` had to recreate and resend the message).
### Solution
1. On success, `publishEvent` should return the actual message that got sent
2. That message should be what is passed in to `hooks.postpublish` and also returned from all of the `'chelonia/out/*'` functions
3. Make sure to update `'backend.tests/postEntry'` to remove `should(res).equal(entry.hash())` | non_main | postpublish should be given message actually published problem currently message publishing looks like this js hooks hooks prepublish hooks prepublish message await sbp chelonia private out publishevent message publishoptions hooks hooks postpublish hooks postpublish message note that postpublish is given the same message given to prepublish even if the message that was actually sent was a different message because publishevent had to recreate and resend the message solution on success publishevent should return the actual message that got sent that message should be what is passed in to hooks postpublish and also returned from all of the chelonia out functions make sure to update backend tests postentry to remove should res equal entry hash | 0 |
4,380 | 22,291,103,265 | IssuesEvent | 2022-06-12 11:28:02 | Lissy93/dashy | https://api.github.com/repos/Lissy93/dashy | closed | FontAwesome Icons[QUESTION] <title> | 🤷♂️ Question 👤 Awaiting Maintainer Response | ### Question
I am trying to use free "font awesome" icons but they are not rendering. For example, "fa-solid fa-font-awesome"
Categories like fa-solid, fa-brands etc are not rendering but fas, fab etc are working.
### Category
Using Icons
### Please tick the boxes
- [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number)
- [X] You've checked that this [question hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue)
- [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide
- [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct) | True | FontAwesome Icons[QUESTION] <title> - ### Question
I am trying to use free "font awesome" icons but they are not rendering. For example, "fa-solid fa-font-awesome"
Categories like fa-solid, fa-brands etc are not rendering but fas, fab etc are working.
### Category
Using Icons
### Please tick the boxes
- [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number)
- [X] You've checked that this [question hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue)
- [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide
- [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct) | main | fontawesome icons question i am trying to use free font awesome icons but are not rendering for example fa solid fa font awesome categories like fa solid fa brands etc are not rendering but fas fab etc are working category using icons please tick the boxes you are using a version of dashy check the first two digits of the version number you ve checked that this you ve checked the and guide you agree to the | 1 |
110,492 | 4,427,630,226 | IssuesEvent | 2016-08-16 22:06:01 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | closed | Harvest: Deleting a client can be problematic, needs more feedback. | Component: Harvesting Component: UX & UI in progress Priority 2: Moderate Status: QA Type: Bug | Harvest client delete button is always available. This can cause some problems since the harvest status is not automatically updated and the client may still be finishing up even after the harvest stats are posted. The result is nothing happens in the UI. Nada. Exceptions in logs though. Does work if you wait 5-10 mins more.
Another related issue is when deleting large but not running clients. When you click delete there is no feedback so you click a couple more times, etc. In the logs, each click results in a delete request. Once the first finally succeeds, the rest throw (harmless) errors. The user does not know the delete is in progress though. | 1.0 | Harvest: Deleting a client can be problematic, needs more feedback. - Harvest client delete button is always available. This can cause some problems since the harvest status is not automatically updated and the client may still be finishing up even after the harvest stats are posted. The result is nothing happens in the UI. Nada. Exceptions in logs though. Does work if you wait 5-10 mins more.
Another related issue is when deleting large but not running clients. When you click delete there is no feedback so you click a couple more times, etc. In the logs, each click results in a delete request. Once the first finally succeeds, the rest throw (harmless) errors. The user does not know the delete is in progress though. | non_main | harvest deleting a client can be problematic needs more feedback harvest client delete button is always available this can cause some problems since the harvest status is not automatically updated and the client may still be finishing up even after the harvest stats are posted the result is nothing happens in the ui nada exceptions in logs though does work if you wait mins more another related issue is when deleting large but not running clients when you click delete there is no feedback so you click a couple more times etc in the logs each click results in a delete request once the first finally succeeds the rest throw harmless errors the user does not know the delete is in progress though | 0 |
256,625 | 8,128,174,581 | IssuesEvent | 2018-08-17 10:43:29 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | consider merging hide/show plot button into a single context sensitive button | Expected Use: 3 - Occasional Feature Impact: 3 - Medium Priority: Normal | We have both a hide and a show button for plots, taking up valuable horizontal GUI real estate.
But, it seems like we can merge hide and show into a single button that says either 'hide' or 'show' depending on current selection.
This would save one button's width in the Plot GUI.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2197
Status: Rejected
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: consider merging hide/show plot button into a single context sensitive button
Assigned to:
Category:
Target version:
Author: Mark Miller
Start: 03/31/2015
Due date:
% Done: 0
Estimated time:
Created: 03/31/2015 05:34 pm
Updated: 03/31/2015 09:42 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
We have both a hide and a show button for plots, taking up valuable horizontal GUI real estate.
But, it seems like we can merge hide and show into a single button that says either 'hide' or 'show' depending on current selection.
This would save one button's width in the Plot GUI.
Comments:
| 1.0 | consider merging hide/show plot button into a single context sensitive button - We have both a hide and a show button for plots, taking up valuable horizontal GUI real estate.
But, it seems like we can merge hide and show into a single button that says either 'hide' or 'show' depending on current selection.
This would save one button's width in the Plot GUI.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2197
Status: Rejected
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: consider merging hide/show plot button into a single context sensitive button
Assigned to:
Category:
Target version:
Author: Mark Miller
Start: 03/31/2015
Due date:
% Done: 0
Estimated time:
Created: 03/31/2015 05:34 pm
Updated: 03/31/2015 09:42 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
We have both a hide and a show button for plots, taking up valuable horizontal GUI real estate.
But, it seems like we can merge hide and show into a single button that says either 'hide' or 'show' depending on current selection.
This would save one button's width in the Plot GUI.
Comments:
| non_main | consider merging hide show plot button into a single context sensitive button we have both a hide and a show button for plots taking up valuable horizontal gui real estate but it seems like we can merge hide and show into a single button that says either hide or show depending on current selection this would save one button s width in the plot gui redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status rejected project visit tracker feature priority normal subject consider merging hide show plot button into a single context sensitive button assigned to category target version author mark miller start due date done estimated time created pm updated pm likelihood severity found in version impact medium expected use occasional os all support group any description we have both a hide and a show button for plots taking up valuable horizontal gui real estate but it seems like we can merge hide and show into a single button that says either hide or show depending on current selection this would save one button s width in the plot gui comments | 0 |
3,521 | 13,821,988,454 | IssuesEvent | 2020-10-13 03:53:57 | madewithml/hacktoberfest | https://api.github.com/repos/madewithml/hacktoberfest | opened | Project Maintenance | maintainence | Some more setup is needed, I feel
- [ ] Adding and updating docs.
- [ ] GitHub labels in Readme
- [ ] Issue template for feedback/maintenance
- [ ] More beginner level issues
- [ ] Contributing to this repo (steps to fork and send PR)
- [ ] Update Readme
- [ ] A tutorial example
cc @ayulockin
| True | Project Maintanence - Some more setup is needed I feel
- [ ] Adding and updating docs.
- [ ] GitHub labels in Readme
- [ ] Issue template for feedback/maintenance
- [ ] More beginner level issues
- [ ] Contributing to this repo (steps to fork and send PR)
- [ ] Update Readme
- [ ] A tutorial example
cc @ayulockin
| main | project maintanence some more setup is needed i feel adding and updating docs github labels in readme issue template for feedback maintanence more beginner level issues contributing to this repo steps to fork and send pr update readme a tutorial example cc ayulockin | 1 |
308,818 | 9,457,794,525 | IssuesEvent | 2019-04-17 02:07:42 | jbadlato/Markov-Rankings | https://api.github.com/repos/jbadlato/Markov-Rankings | opened | Testing--Create Test Database during Codeship automated tests | Priority: Medium Type: Maintenance | - [ ] Connect to staging schema
- [ ] Clear schema/run baseline & patch scripts to create current database
- [ ] Verify there were no errors
- [ ] Continue testing end-to-end utilizing the database | 1.0 | Testing--Create Test Database during Codeship automated tests - - [ ] Connect to staging schema
- [ ] Clear schema/run baseline & patch scripts to create current database
- [ ] Verify there were no errors
- [ ] Continue testing end-to-end utilizing the database | non_main | testing create test database during codeship automated tests connect to staging schema clear schema run baseline patch scripts to create current database verify there were no errors continue testing end to end utilizing the database | 0 |
492,776 | 14,219,690,613 | IssuesEvent | 2020-11-17 13:37:59 | AlexsLemonade/resources-portal | https://api.github.com/repos/AlexsLemonade/resources-portal | opened | Update Abstract -> Project Abstract across the portal | Medium Priority | ### Context
Suggestions by Anna
### Problem or idea
Refer to `Abstract` as `Project Abstract` across the portal.
This will need to update the notification copy too
### Solution or next step
_You can tag others or simply leave it for further investigation, but you must propose a next step towards solving the issue._
| 1.0 | Update Abstract -> Project Abstract across the portal - ### Context
Suggestions by Anna
### Problem or idea
Refer to `Abstract` as `Project Abstract` across the portal.
This will need to update the notification copy too
### Solution or next step
_You can tag others or simply leave it for further investigation, but you must propose a next step towards solving the issue._
| non_main | update abstract project abstract across the portal context suggestions by anna problem or idea refer to abstract as project abstract across the portal this will need to update the notification copy too solution or next step you can tag others or simply leave it for further investigation but you must propose a next step towards solving the issue | 0 |
3,364 | 13,035,611,755 | IssuesEvent | 2020-07-28 10:41:55 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | opened | Awaited statements should be followed by a blank line | Area: analyzer Area: maintainability feature | An awaited call should be followed by a blank line if the following line contains a call to something.
The reason is ease of reading.
Following should report a violation:
```c#
await DoStuffAsync();
var x = 42;
var y = "something";
var z = Guid.NewGuid();
```
While following should **not** report a violation:
```c#
await DoStuffAsync();

var x = 42;
var y = "something";
var z = Guid.NewGuid();
``` | True | Awaited statements should be followed by a blank line - An awaited call should be followed by a blank line if the following line contains a call to something.
The reason is ease of reading.
Following should report a violation:
```c#
await DoStuffAsync();
var x = 42;
var y = "something";
var z = Guid.NewGuid();
```
While following should **not** report a violation:
```c#
await DoStuffAsync();

var x = 42;
var y = "something";
var z = Guid.NewGuid();
``` | main | awaited statements should be followed by a blank line an awaited call should be followed by a blank line if the following line contains a call to something the reason is ease of reading following should report a violation c await dostuffasync var x var y something var z guid newguid while following should not report a violation c await dostuffasync var x var y something var z guid newguid | 1 |
112,590 | 11,771,826,283 | IssuesEvent | 2020-03-16 01:38:54 | Matteas-Eden/roll-for-reaction | https://api.github.com/repos/Matteas-Eden/roll-for-reaction | opened | Write a basic story for the game to follow | documentation good first issue | **User Story**
As a game designer, I'd like to have a basic story for the game to follow, so that the user feels justified in their actions playing their game.
**Acceptance Criteria**
- A simple story with a definitive end goal is written
**Notes**
- The story should be no longer than 1000 words | 1.0 | Write a basic story for the game to follow - **User Story**
As a game designer, I'd like to have a basic story for the game to follow, so that the user feels justified in their actions playing their game.
**Acceptance Criteria**
- A simple story with a definitive end goal is written
**Notes**
- The story should be no longer than 1000 words | non_main | write a basic story for the game to follow user story as a game designer i d like to have a basic story for the game to follow so that the user feels justified in their actions playing their game acceptance criteria a simple story with a definitive end goal is written notes the story should be no longer than words | 0 |
526,861 | 15,303,241,732 | IssuesEvent | 2021-02-24 15:35:28 | ingadhoc/odoo-argentina | https://api.github.com/repos/ingadhoc/odoo-argentina | closed | odoo-argentina/l10n_ar_afipws_fe/i18n/es.po | enhancement low_priority | Text on line 426 says: Mo**en**das instead of Monedas
l10n_ar_currency_update/i18n/es.po
Line 24: TAZA should be TASA
Line 30: TAZA should be TASA
l10n_ar_afipws_fe/i18n/es.po
Line 105: should be TASA
Line 111: should be TASA
l10n_ar_invoice/i18n/es.po
Line 570: should be TASA
Line 575: should be TASA
Thanks!!
| 1.0 | odoo-argentina/l10n_ar_afipws_fe/i18n/es.po - Text on line 426 says: Mo**en**das instead of Monedas
l10n_ar_currency_update/i18n/es.po
Line 24: TAZA should be TASA
Line 30: TAZA should be TASA
l10n_ar_afipws_fe/i18n/es.po
Line 105: should be TASA
Line 111: should be TASA
l10n_ar_invoice/i18n/es.po
Line 570: should be TASA
Line 575: should be TASA
Thanks!!
| non_main | odoo argentina ar afipws fe es po texto en linea dice mo en das en lugar de monedas ar currency update es po linea taza es tasa linea taza es tasa ar afipws fe es po linea es tasa linea es tasa ar invoice es po linea es tasa linea es tasa gracias | 0 |
36,163 | 9,762,241,914 | IssuesEvent | 2019-06-05 10:48:23 | mapbox/MapboxStatic.swift | https://api.github.com/repos/mapbox/MapboxStatic.swift | opened | Support Swift Package Manager | build | Add a Package.swift file so that [Swift Package Manager](https://swift.org/package-manager/) can incorporate this library into a project.
/ref mapbox/MapboxDirections.swift#234
/cc @mapbox/maps-ios @frederoni | 1.0 | Support Swift Package Manager - Add a Package.swift file so that [Swift Package Manager](https://swift.org/package-manager/) can incorporate this library into a project.
/ref mapbox/MapboxDirections.swift#234
/cc @mapbox/maps-ios @frederoni | non_main | support swift package manager add a package swift file so that can incorporate this library into a project ref mapbox mapboxdirections swift cc mapbox maps ios frederoni | 0 |
104,789 | 4,221,174,733 | IssuesEvent | 2016-07-01 03:24:30 | smartchicago/chicago-early-learning | https://api.github.com/repos/smartchicago/chicago-early-learning | opened | Staging: All sites are being displayed as both community-based and CPS-based | bug High Priority Hold for Phase 1 Launch | This is a high-priority item that needs to be fixed before launch. All of the sites are being tagged as both CPS- and community-based.
Taking a quick look at the map also confirms this:
<img width="1238" alt="screen shot 2016-07-01 at 6 22 01 am" src="https://cloud.githubusercontent.com/assets/5550969/16511010/46b04f70-3f54-11e6-9a5e-1c8abb98da26.png">
Also, see the center's map info box:
<img width="361" alt="screen shot 2016-07-01 at 6 22 10 am" src="https://cloud.githubusercontent.com/assets/5550969/16511030/73313780-3f54-11e6-934a-d6c1c97e2127.png">
| 1.0 | Staging: All sites are being displayed as both community-based and CPS-based - This is a high-priority item that needs to be fixed before launch. All of the sites are being tagged as both CPS- and community-based.
Taking a quick look at the map also confirms this:
<img width="1238" alt="screen shot 2016-07-01 at 6 22 01 am" src="https://cloud.githubusercontent.com/assets/5550969/16511010/46b04f70-3f54-11e6-9a5e-1c8abb98da26.png">
Also, see the center's map info box:
<img width="361" alt="screen shot 2016-07-01 at 6 22 10 am" src="https://cloud.githubusercontent.com/assets/5550969/16511030/73313780-3f54-11e6-934a-d6c1c97e2127.png">
| non_main | staging all sites are being displayed as both community based and cps based this is a high priority items that needs to be fixed before launch all of the sites are being tagged as both cps and community based taking a quick look at the map also confirms this img width alt screen shot at am src also see the center s map info box img width alt screen shot at am src | 0 |
169,959 | 6,422,219,469 | IssuesEvent | 2017-08-09 07:52:08 | kaymckelly/program-editor | https://api.github.com/repos/kaymckelly/program-editor | opened | Add menu for actions and help | feature request priority: high | We need to have menu items for the actions of "Start new program" and "Upload from websubrev". In addition we should have links to help, and an "About this editor" link. | 1.0 | Add menu for actions and help - We need to have menu items for the actions of "Start new program" and "Upload from websubrev". In addition we should have links to help, and an "About this editor" link. | non_main | add menu for actions and help we need to have menu items for the actions of start new program and upload from websubrev in addition we should have links to help and an about this editor link | 0 |
275,341 | 8,575,586,533 | IssuesEvent | 2018-11-12 17:41:53 | poanetwork/blockscout | https://api.github.com/repos/poanetwork/blockscout | opened | Blocks Validated page should load Asynchronous | enhancement priority: high team: developer | The blocks collated page should load asynchronously, similar to the address transactions page. Here is an example of the page: https://blockscout.com/eth/mainnet/address/0xcc16e3c00dbbe76603fa833ec20a48f786dfe610/validations
| 1.0 | Blocks Validated page should load Asynchronous - The blocks collated page should load asynchronously, similar to the address transactions page. Here is an example of the page: https://blockscout.com/eth/mainnet/address/0xcc16e3c00dbbe76603fa833ec20a48f786dfe610/validations
| non_main | blocks validated page should load asynchronous the blocks collated page should load in asynchronous similar to the address transactions page here is an example of the page | 0 |
2,046 | 6,900,059,016 | IssuesEvent | 2017-11-24 16:25:00 | DynamoRIO/dynamorio | https://api.github.com/repos/DynamoRIO/dynamorio | opened | obtain Windows syscall numbers offline | help wanted Maintainability OpSys-Windows | XXX: With the frequent major win10 updates, adding new tables here is getting
tedious and taking up space. Should we stop adding the win10 updates here and
give up on our table of numbers, relying on reading the wrappers (#1598 changed
DR to work purely on wrapper-obtained numbers)? We'd lose robustness vs hooks,
and clients like Dr. Memory who have to distinguish win10 versions would have to
do their own versioning. I guess we could still have DR_WINDOWS_VERSION_xx and
not have corresponding tables here. Or we could go the planned Dr. Memory route
(https://github.com/DynamoRIO/drmemory/issues/1848) and store these numbers in a separate file that is updated via a
separate standalone utility run once by the user.
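As a rough standalone illustration of the wrapper-reading approach (hypothetical sketch for x64 Windows, not DynamoRIO code): the syscall number can be decoded from the `mov eax, imm32` at the start of an un-hooked ntdll `Nt*` wrapper.
```c
#include <windows.h>
#include <stdio.h>

static int wrapper_sysnum(const char *name)
{
    unsigned char *p = (unsigned char *)
        GetProcAddress(GetModuleHandleA("ntdll.dll"), name);
    if (p == NULL)
        return -1;
    /* x64 prologue: mov r10,rcx (4c 8b d1) then mov eax,imm32 (b8 xx xx xx xx) */
    if (p[0] == 0x4c && p[1] == 0x8b && p[2] == 0xd1 && p[3] == 0xb8)
        return *(int *)(p + 4);
    return -1; /* hooked or unexpected prologue */
}

int main(void)
{
    printf("NtCreateFile = %d\n", wrapper_sysnum("NtCreateFile"));
    return 0;
}
```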
Xref #1854 | True | obtain Windows syscall numbers offline - XXX: With the frequent major win10 updates, adding new tables here is getting
tedious and taking up space. Should we stop adding the win10 updates here and
give up on our table of numbers, relying on reading the wrappers (#1598 changed
DR to work purely on wrapper-obtained numbers)? We'd lose robustness vs hooks,
and clients like Dr. Memory who have to distinguish win10 versions would have to
do their own versioning. I guess we could still have DR_WINDOWS_VERSION_xx and
not have corresponding tables here. Or we could go the planned Dr. Memory route
(https://github.com/DynamoRIO/drmemory/issues/1848) and store these numbers in a separate file that is updated via a
separate standalone utility run once by the user.
Xref #1854 | main | obtain windows syscall numbers offline xxx with the frequent major updates adding new tables here is getting tedious and taking up space should we stop adding the updates here and give up on our table of numbers relying on reading the wrappers changed dr to work purely on wrapper obtained numbers we d lose robustness vs hooks and clients like dr memory who have to distinguish versions would have to do their own versioning i guess we could still have dr windows version xx and not have corresponding tables here or we could go the planned dr memory route and store these numbers in a separate file that is updated via a separate standalone utility run once by the user xref | 1 |
263,158 | 19,901,253,956 | IssuesEvent | 2022-01-25 08:11:24 | chocolatey/docs | https://api.github.com/repos/chocolatey/docs | closed | List Simple Server as a Not Supported Repository Option | documentation | Go through documentation and fix wording to list Chocolatey Server/Simple Server as not covered under the purview of the C4B support structure.
One place to change: https://docs.chocolatey.org/en-us/features/host-packages#known-hosting-options
Another reference: https://docs.chocolatey.org/en-us/features/host-packages#known-simple-server-options | 1.0 | List Simple Server as a Not Supported Repository Option - Go through documentation and fix wording to list Chocolatey Server/Simple Server as not covered under the purview of the C4B support structure.
One place to change: https://docs.chocolatey.org/en-us/features/host-packages#known-hosting-options
Another reference: https://docs.chocolatey.org/en-us/features/host-packages#known-simple-server-options | non_main | list simple server as a not supported repository option go through documentation and fix wording to list chocolatey server simple server as not covered under the purview of the support structure one place to change another efrence | 0
218,566 | 24,376,064,675 | IssuesEvent | 2022-10-04 01:04:59 | joshnewton31080/WebGoat | https://api.github.com/repos/joshnewton31080/WebGoat | opened | CVE-2022-42004 (Medium) detected in jackson-databind-2.12.4.jar | security vulnerability | ## CVE-2022-42004 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.12.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /webgoat-server/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar</p>
<p>
Dependency Hierarchy:
- jjwt-0.9.1.jar (Root Library)
- :x: **jackson-databind-2.12.4.jar** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42004>CVE-2022-42004</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.13.4</p>
</p>
</details>
<p></p>
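For reference, the suggested fix corresponds to a dependency bump along these lines (hypothetical pom.xml fragment):
```xml
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.13.4</version>
</dependency>
```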
| True | CVE-2022-42004 (Medium) detected in jackson-databind-2.12.4.jar - ## CVE-2022-42004 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.12.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /webgoat-server/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar</p>
<p>
Dependency Hierarchy:
- jjwt-0.9.1.jar (Root Library)
- :x: **jackson-databind-2.12.4.jar** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42004>CVE-2022-42004</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.13.4</p>
</p>
</details>
<p></p>
| non_main | cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file webgoat server pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy jjwt jar root library x jackson databind jar vulnerable library found in base branch develop vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in beandeserializer deserializefromarray to prevent use of deeply nested arrays an application is vulnerable only with certain customized choices for deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind | 0 |
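The vulnerability class in the record above, unbounded nesting exhausting a recursive deserializer, is easy to see in miniature. A Python analogy (illustrative only; the actual flaw is in Jackson's `BeanDeserializer._deserializeFromArray`, and the remedy is the 2.13.4 upgrade named in the suggested fix):
```
import json

depth = 100_000  # assumption: far beyond the parser's recursion guard
payload = "[" * depth + "]" * depth

try:
    json.loads(payload)  # each nesting level consumes another parser frame
except RecursionError:
    print("parser rejected the deeply nested input instead of exhausting resources")
```
Jackson 2.13.4 adds the missing depth check, which is why the fix is a version bump rather than a configuration change.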
5,046 | 25,860,820,227 | IssuesEvent | 2022-12-13 16:48:59 | libp2p/js-libp2p-interfaces | https://api.github.com/repos/libp2p/js-libp2p-interfaces | closed | Questions about stream muxer behavior | need/triage exp/expert need/maintainer-input | Questions:
- What's the intended behavior for an initiator and recipient for `abort` and `reset`? When should each be called?
- Does `closeWrite` close the stream for writing immediately? or does it wait for the stream to be drained first before closing the write side?
- Does `closeRead` close the stream for reading immediately? or does it wait for the stream to be drained first before closing the read side?
- What happens if `closeWrite` is called while the sink is not yet drained, i.e. still processing a source?
Answers (?):
- If an initiator wants to trigger a stream to be reset (ended for both reading and writing because of some error), they should call `abort`.
`abort` will stop the stream from further reading/writing, but first notify the recipient. The recipient will likely then call `reset`.
`reset` will stop the stream from further reading/writing, without notifying the recipient.
- `closeWrite` notifies the recipient to no longer expect any more data and immediately closes the write side of the stream.
- `closeRead` immediately closes the read side of the stream.
- If `closeWrite` is called while the sink is still processing a source, the processing will stop after the current iteration and the sink will return normally (not throw). | True | Questions about stream muxer behavior - Questions:
- What's the intended behavior for an initiator and recipient for `abort` and `reset`? When should each be called?
- Does `closeWrite` close the stream for writing immediately? or does it wait for the stream to be drained first before closing the write side?
- Does `closeRead` close the stream for reading immediately? or does it wait for the stream to be drained first before closing the read side?
- What happens if `closeWrite` is called while the sink is not yet drained, i.e. still processing a source?
Answers (?):
- If an initiator wants to trigger a stream to be reset (ended for both reading and writing because of some error), they should call `abort`.
`abort` will stop the stream from further reading/writing, but first notify the recipient. The recipient will likely then call `reset`.
`reset` will stop the stream from further reading/writing, without notifying the recipient.
- `closeWrite` notifies the recipient to no longer expect any more data and immediately closes the write side of the stream.
- `closeRead` immediately closes the read side of the stream.
- If `closeWrite` is called while the sink is still processing a source, the processing will stop after the current iteration and the sink will return normally (not throw). | main | questions about stream muxer behavior questions what s the intended behavior for an initiator and recipient for abort and reset when should each be called does closewrite close the stream for writing immediately or does it wait for the stream to be drained first before closing the write side does closeread close the stream for reading immediately or does it wait for the stream to be drained first before closing the read side what happens if closewrite is called while the sink is not yet drained ie still processing a source answers if an initiator wants to trigger a stream to be reset ended for both reading and writing because of some error they should call abort abort will stop the stream from further reading writing but first notify the recipient the recipient will likely then call reset reset will stop the stream from further reading writing without notifying the recipient closewrite notifies the recipient to no longer expect any more data and immediately closes the write side of the stream closeread immediately closes the read side of the stream if closewrite is called while the sink is still processing a source the processing will stop after the current iteration and the sink will return normally not throw | 1 |
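Taken together, the answers in this record describe a small state machine: a read side, a write side, and whether the remote peer is told. A toy Python model of exactly those rules (illustrative only; the real interfaces in js-libp2p-interfaces are TypeScript, and every name below is invented):
```
class ToyMuxedStream:
    """Toy model of the stream semantics answered above."""

    def __init__(self):
        self.read_open = True
        self.write_open = True
        self.remote_notified = False

    def close_write(self):
        # Tell the recipient no more data is coming, then close the write
        # side immediately (no waiting for a pending sink to drain).
        self.remote_notified = True
        self.write_open = False

    def close_read(self):
        # Close the read side immediately.
        self.read_open = False

    def abort(self):
        # Initiator-side error teardown: notify the remote, stop both sides.
        self.remote_notified = True
        self.read_open = self.write_open = False

    def reset(self):
        # Recipient-side teardown: stop both sides without notifying anyone.
        self.read_open = self.write_open = False


s = ToyMuxedStream()
s.abort()
assert s.remote_notified and not (s.read_open or s.write_open)
```
In this model the only difference between `abort` and `reset` is whether the remote peer is notified, which matches the pairing described in the answers.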
4,368 | 22,154,529,776 | IssuesEvent | 2022-06-03 20:46:18 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | Update factory boy and Faker | engineering backend dependencies Maintain | factory boy/faker need to be bumped by (several) major versions, which also requires changing some of our code that relies both on apparently ancient APIs and on what appears to be entirely undocumented behaviour around data providers for Faker. | True | Update factory boy and Faker - factory boy/faker need to be bumped by (several) major versions, which also requires changing some of our code that relies both on apparently ancient APIs and on what appears to be entirely undocumented behaviour around data providers for Faker. | main | update factory boy and faker factory boy faker need to be bumped by several major versions which also requires changing some of our code that relies both on apparently ancient apis and on what appears to be entirely undocumented behaviour around data providers for faker | 1 |
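For reference, the current integration point between the two libraries looks roughly like this (a sketch against present-day factory_boy and Faker APIs; the model and fields are hypothetical, not Foundation code):
```
import factory
from faker import Faker

fake = Faker()  # standalone instance; data providers are addressed by name


class UserFactory(factory.Factory):
    class Meta:
        model = dict  # hypothetical stand-in for a real model class

    # factory.Faker resolves the named provider at build time, replacing
    # older patterns that reached into provider internals directly.
    name = factory.Faker("name")
    email = factory.Faker("email")


print(UserFactory())   # e.g. {'name': '...', 'email': '...'}
print(fake.company())  # direct provider call on the Faker instance
```
Code written against older majors tends to break exactly at these seams: provider lookup and declaration syntax.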
65,771 | 7,917,477,607 | IssuesEvent | 2018-07-04 09:57:50 | unee-t/frontend | https://api.github.com/repos/unee-t/frontend | closed | 'List Cases Assigned to Me' - Pilot User feedback | Feature design/ux | # The Problem:
Today a user can have several responsibilities in a case:
- The reporter (I am the one who created the case)
- The Assignee (I am the one accountable to do something on this case)
- Invited (I need to know what's happening and I can see the case, interact, add comment, etc...)
Today the [default 'case' view](https://case.unee-t.com/case) shows the cases where the logged-in user is either:
- The reporter
- The assignee
- A user invited to this case
See the [design on Invision](https://projects.invisionapp.com/d/main#/console/12250377/299154525/preview) for more details.

As reported by several pilot users, it would be good to be able to have a view to "know about the cases that I need to take care of".
See the ["Road Book" issue](https://unee-t.slack.com/archives/C9JKCF0KX/p1529547720000147) for more details.
# Suggested solution:
## Long term solution:
We will have a workaround/fix when we implement the ['Filter case' functionality](https://projects.invisionapp.com/d/main#/console/12250377/280835805/preview).
## Short term fix:
Can we create a screen which would display only the list of cases that are assigned to me?
# Open Questions:
@kiatlim and @nbiton what do you think?
How would you tackle this? | 1.0 | 'List Cases Assigned to Me' - Pilot User feedback - # The Problem:
Today a user can have several responsibilities in a case:
- The reporter (I am the one who created the case)
- The Assignee (I am the one accountable to do something on this case)
- Invited (I need to know what's happening and I can see the case, interact, add comment, etc...)
Today the [default 'case' view](https://case.unee-t.com/case) shows the cases where the logged-in user is either:
- The reporter
- The assignee
- A user invited to this case
See the [design on Invision](https://projects.invisionapp.com/d/main#/console/12250377/299154525/preview) for more details.

As reported by several pilot users, it would be good to be able to have a view to "know about the cases that I need to take care of".
See the ["Road Book" issue](https://unee-t.slack.com/archives/C9JKCF0KX/p1529547720000147) for more details.
# Suggested solution:
## Long term solution:
We will have a workaround/fix when we implement the ['Filter case' functionality](https://projects.invisionapp.com/d/main#/console/12250377/280835805/preview).
## Short term fix:
Can we create a screen which would display only the list of cases that are assigned to me?
# Open Questions:
@kiatlim and @nbiton what do you think?
How would you tackle this? | non_main | list cases assigned to me pilot user feedback the problem today a user can have several responsibilities in a case the reporter i am the one who created the case the assignee i am the one accountable to do something on this case invited i need to know what s happening and i can see the case interact add comment etc today the shows the cases where the logged in user is either the reporter the assignee a user invited to this case see the for more details as reported by several pilot users it would be good to be able to have a view to know about the cases that i need to take care of see the for more details suggested solution long term solution we will have a workaround fix when we implement the short term fix can we create a screen which would display only the list of cases that are assigned to me open questions kiatlim and nbiton what do you think how would you tackle this | 0 |
2,224 | 7,858,445,951 | IssuesEvent | 2018-06-21 13:57:29 | btc-ag/service-idl | https://api.github.com/repos/btc-ag/service-idl | closed | It should be checked if code generation produces conflicting files | area: generator-generic complexity: low consistency-check maintainability | The regular JavaIoFileSystemAccess doesn't check for overwritten files. This should be changed, e.g. by providing a wrapper that at least checks that the same file isn't written twice during the same execution of the generator. | True | It should be checked if code generation produces conflicting files - The regular JavaIoFileSystemAccess doesn't check for overwritten files. This should be changed, e.g. by providing a wrapper that at least checks that the same file isn't written twice during the same execution of the generator. | main | it should be checked if code generation produces conflicting files the regular javaiofilesystemaccess doesn t check for overwritten files this should be changed e g by providing a wrapper that at least checks that the same file isn t written twice during the same execution of the generator | 1 |
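The wrapper suggested in this record is small in any language. A Python sketch of the idea (the generator itself is JVM-based, so the names here are purely illustrative):
```
class ConflictCheckingFileSystemAccess:
    """Wraps a file-system-access object and rejects duplicate writes."""

    def __init__(self, delegate):
        self._delegate = delegate
        self._written = set()

    def generate_file(self, path, contents):
        # Fail fast if two generator fragments target the same output file
        # within a single run.
        if path in self._written:
            raise RuntimeError(f"conflicting generation: {path} written twice")
        self._written.add(path)
        self._delegate.generate_file(path, contents)
```
Tracking per run and clearing the set between executions keeps legitimate regeneration of the same file across runs working.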
221,437 | 7,388,261,073 | IssuesEvent | 2018-03-16 01:32:18 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [User issue] Crash without dump after attempting to load webserverplugin | Medium Priority | We currently have issues with a Windows 10 computer launching Eco.
Whether I'm using the Steam version, the direct exe from the developer's website, Steam-EcoServer, or the dev's Eco-server won't matter.
It will stop loading/creating a world after trying to start the webserverplugin, fonts vanish and the client will pop to the created worlds if pressing the Esc key. Letting the client run without any fonts (while maybe running some things in the background) won't solve the issue. Copying another local world won't work as well. The world will appear in the list, load its textures, but the client will freeze/crash once it does attempt webserverplugin.
Joining a premade server is working fine strangely enough.
.NET 4 and 4.5 are up to date, drivers are checked and are working fine. Hardware is checked and working fine, requirements for the game are met (and since it's running on a premade server that's double-checked as well).
The game is running just fine on a Windows 7.1 computer, had the doublecheck for that on my other computer.
Are any issues known like that, or are we the only ones with that kind of problem?
Seems like that issue appeared already [2 years](http://ecoforum.strangeloopgames.com:4567/topic/134/webserverplugin-program-stopped-working) ago. Maybe there is something now that could be done? | 1.0 | [User issue] Crash without dump after attempting to load webserverplugin - We currently have issues with a Windows 10 computer launching Eco.
Whether I'm using the Steam version, the direct exe from the developer's website, Steam-EcoServer, or the dev's Eco-server won't matter.
It will stop loading/creating a world after trying to start the webserverplugin, fonts vanish and the client will pop to the created worlds if pressing the Esc key. Letting the client run without any fonts (while maybe running some things in the background) won't solve the issue. Copying another local world won't work as well. The world will appear in the list, load its textures, but the client will freeze/crash once it does attempt webserverplugin.
Joining a premade server is working fine strangely enough.
.NET 4 and 4.5 are up to date, drivers are checked and are working fine. Hardware is checked and working fine, requirements for the game are met (and since it's running on a premade server that's double-checked as well).
The game is running just fine on a Windows 7.1 computer, had the doublecheck for that on my other computer.
Are any issues known like that, or are we the only ones with that kind of problem?
Seems like that issue appeared already [2 years](http://ecoforum.strangeloopgames.com:4567/topic/134/webserverplugin-program-stopped-working) ago. Maybe there is something now that could be done? | non_main | crash without dump after attempting to load webserverplugin we currently have issues with a windows computer launching eco whether i m using the steam version the direct exe from the developers website steam ecoserver or the devs eco server won t matter it will stop loading creating a world after trying to start the webserverplugin fonts vanish and the client will pop to the created worlds if pressing the esc key letting the client run without any fonts while maybe running some things in the background won t solve the issue copying another local world won t work as well the world will appear in the list load its textures but the client will freeze crash once it does attempt webserverplugin joining a premade server is working fine strangely enough net and are up to date drivers are checked and are working fine hardware is checked and working fine requirements for the game are met and since it s running on a premade server that s doublechecked as well the game is running just fine on a windows computer had the doublecheck for that on my other computer are any issues known like that or are we the only ones with that kind of problem seems like that issue appeared already ago maybe there is something now that could be done | 0 |
226,077 | 24,937,662,466 | IssuesEvent | 2022-10-31 16:18:27 | ManageIQ/miq_bot | https://api.github.com/repos/ManageIQ/miq_bot | opened | CVE-2022-3704 (Medium) detected in actionpack-5.2.8.1.gem | security vulnerability | ## CVE-2022-3704 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>actionpack-5.2.8.1.gem</b></p></summary>
<p>Web apps on Rails. Simple, battle-tested conventions for building and testing MVC web applications. Works with any Rack-compatible server.</p>
<p>Library home page: <a href="https://rubygems.org/gems/actionpack-5.2.8.1.gem">https://rubygems.org/gems/actionpack-5.2.8.1.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/actionpack-5.2.8.1.gem</p>
<p>
Dependency Hierarchy:
- rails-5.2.8.1.gem (Root Library)
- :x: **actionpack-5.2.8.1.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ManageIQ/miq_bot/commit/37c2faddad2f3de376140b931bef0dd3ca39e68e">37c2faddad2f3de376140b931bef0dd3ca39e68e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability classified as problematic has been found in Ruby on Rails. This affects an unknown part of the file actionpack/lib/action_dispatch/middleware/templates/routes/_table.html.erb. The manipulation leads to cross site scripting. It is possible to initiate the attack remotely. The name of the patch is be177e4566747b73ff63fd5f529fab564e475ed4. It is recommended to apply a patch to fix this issue. The associated identifier of this vulnerability is VDB-212319.
<p>Publish Date: 2022-10-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3704>CVE-2022-3704</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-3704 (Medium) detected in actionpack-5.2.8.1.gem - ## CVE-2022-3704 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>actionpack-5.2.8.1.gem</b></p></summary>
<p>Web apps on Rails. Simple, battle-tested conventions for building and testing MVC web applications. Works with any Rack-compatible server.</p>
<p>Library home page: <a href="https://rubygems.org/gems/actionpack-5.2.8.1.gem">https://rubygems.org/gems/actionpack-5.2.8.1.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/actionpack-5.2.8.1.gem</p>
<p>
Dependency Hierarchy:
- rails-5.2.8.1.gem (Root Library)
- :x: **actionpack-5.2.8.1.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ManageIQ/miq_bot/commit/37c2faddad2f3de376140b931bef0dd3ca39e68e">37c2faddad2f3de376140b931bef0dd3ca39e68e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability classified as problematic has been found in Ruby on Rails. This affects an unknown part of the file actionpack/lib/action_dispatch/middleware/templates/routes/_table.html.erb. The manipulation leads to cross site scripting. It is possible to initiate the attack remotely. The name of the patch is be177e4566747b73ff63fd5f529fab564e475ed4. It is recommended to apply a patch to fix this issue. The associated identifier of this vulnerability is VDB-212319.
<p>Publish Date: 2022-10-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3704>CVE-2022-3704</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in actionpack gem cve medium severity vulnerability vulnerable library actionpack gem web apps on rails simple battle tested conventions for building and testing mvc web applications works with any rack compatible server library home page a href path to dependency file gemfile lock path to vulnerable library home wss scanner gem ruby cache actionpack gem dependency hierarchy rails gem root library x actionpack gem vulnerable library found in head commit a href found in base branch master vulnerability details a vulnerability classified as problematic has been found in ruby on rails this affects an unknown part of the file actionpack lib action dispatch middleware templates routes table html erb the manipulation leads to cross site scripting it is possible to initiate the attack remotely the name of the patch is it is recommended to apply a patch to fix this issue the associated identifier of this vulnerability is vdb publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href step up your open source security game with mend | 0 |
125,560 | 10,346,332,180 | IssuesEvent | 2019-09-04 15:04:56 | kcigeospatial/Fred_Co_Land-Management | https://api.github.com/repos/kcigeospatial/Fred_Co_Land-Management | closed | Minor Permit-Hot Tub & Solar Panels-incorrect fees | Bug Ready for Test Env. Retest | The building minimum and the single visit fee both generated for minor permits. Only the single visit building fee should generate. -aw
#246426-hot tub

#246425-solar panel

 | 2.0 | Minor Permit-Hot Tub & Solar Panels-incorrect fees - The building minimum and the single visit fee both generated for minor permits. Only the single visit building fee should generate. -aw
#246426-hot tub

#246425-solar panel

| non_main | minor permit hot tub solar panels incorrect fees the building minimum and the single visit fee both generated for minor permits only the single visit building fee should generate aw hot tub solar panel | 0 |
193,809 | 22,216,357,444 | IssuesEvent | 2022-06-08 02:21:45 | maddyCode23/linux-4.1.15 | https://api.github.com/repos/maddyCode23/linux-4.1.15 | reopened | CVE-2015-7566 (Medium) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2015-7566 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/visor.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/visor.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The clie_5_attach function in drivers/usb/serial/visor.c in the Linux kernel through 4.4.1 allows physically proximate attackers to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by inserting a USB device that lacks a bulk-out endpoint.
<p>Publish Date: 2016-02-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-7566>CVE-2015-7566</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-7566">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-7566</a></p>
<p>Release Date: 2016-02-08</p>
<p>Fix Resolution: v4.5-rc2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2015-7566 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2015-7566 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/visor.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/visor.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The clie_5_attach function in drivers/usb/serial/visor.c in the Linux kernel through 4.4.1 allows physically proximate attackers to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by inserting a USB device that lacks a bulk-out endpoint.
<p>Publish Date: 2016-02-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-7566>CVE-2015-7566</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-7566">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-7566</a></p>
<p>Release Date: 2016-02-08</p>
<p>Fix Resolution: v4.5-rc2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers usb serial visor c drivers usb serial visor c vulnerability details the clie attach function in drivers usb serial visor c in the linux kernel through allows physically proximate attackers to cause a denial of service null pointer dereference and system crash or possibly have unspecified other impact by inserting a usb device that lacks a bulk out endpoint publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
3,436 | 13,210,628,621 | IssuesEvent | 2020-08-15 17:58:12 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | opened | Refactor initialization of web UI | enhancement maintainability | Our current initialization code is messy and fragile with a bunch of synchronous code that blocks the UI and must be executed in a particular well-defined order or things break. #3084 is an example of a recent symptom of this, but we've seen a number of related issues.
### Proposed solution
Any [place](https://github.com/OpenRefine/OpenRefine/search?q=async%3A+false&type=Code) that uses `async: false` in an AJAX call is suspicious and any initialization of non-visible elements should be deferred. Any required initializations should be done in parallel to the extent possible with only the necessary dependencies blocking (e.g. language loading and other initializations which don't depend on translations should happen in parallel).
Promises are one technology which could potentially be used to achieve this cleanly.
Not doing the initializations at startup also implies moving the necessary code to be done, if necessary, before a requested tab/screen/dialog gets made visible.
As an example, the Google Data importer is doing all of this during startup:
```
Refine.GDataSourceUI._listDocuments
Refine.GDataSourceUI.attachUI
Refine.CreateProjectUI.addSourceSelectionUI
Refine.GDataImportingController
Refine.CreateProjectUI._initializeUI
```
This needs to be pared back significantly to just registering the necessary hooks to be invoked later. | True | Refactor initialization of web UI - Our current initialization code is messy and fragile with a bunch of synchronous code that blocks the UI and must be executed in a particular well-defined order or things break. #3084 is an example of a recent symptom of this, but we've seen a number of related issues.
### Proposed solution
Any [place](https://github.com/OpenRefine/OpenRefine/search?q=async%3A+false&type=Code) that uses `async: false` in an AJAX call is suspicious and any initialization of non-visible elements should be deferred. Any required initializations should be done in parallel to the extent possible with only the necessary dependencies blocking (e.g. language loading and other initializations which don't depend on translations should happen in parallel).
Promises are one technology which could potentially be used to achieve this cleanly.
Not doing the initializations at startup also implies moving the necessary code to be done, if necessary, before a requested tab/screen/dialog gets made visible.
As an example, the Google Data importer is doing all of this during startup:
```
Refine.GDataSourceUI._listDocuments
Refine.GDataSourceUI.attachUI
Refine.CreateProjectUI.addSourceSelectionUI
Refine.GDataImportingController
Refine.CreateProjectUI._initializeUI
```
This needs to be pared back significantly to just registering the necessary hooks to be invoked later. | main | refactor initialization of web ui our current initialization code is messy and fragile with a bunch of synchronous code that blocks the ui and must be executed in a particular well defined order or things break is an example of a recent symptom of this but we ve seen a number of related issues proposed solution any that uses async false in an ajax call is suspicious and any initialization of non visible elements should be deferred any required initializations should be done in parallel to the extent possible with only the necessary dependencies blocking e g language loading and other initializations which don t depend on translations should happen in parallel promises are one technology which could potentially be used to achieve this cleanly not doing the initializations at startup also implies moving the necessary code to be done if necessary before a requested tab screen dialog gets made visible as an example the google data importer is doing all of this during startup refine gdatasourceui listdocuments refine gdatasourceui attachui refine createprojectui addsourceselectionui refine gdataimportingcontroller refine createprojectui initializeui this needs to be pared back significantly to just registering the necessary hooks to be invoked later | 1 |
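The Promise-based shape proposed in this record carries over to any async runtime. An asyncio sketch of the dependency-aware startup (an analogy only; the real code would use JavaScript Promises, and these function names are invented):
```
import asyncio


async def load_languages():
    await asyncio.sleep(0.1)  # stand-in for fetching translations
    return {"lang": "en"}


async def init_layout():
    await asyncio.sleep(0.1)  # independent of translations, so it runs in parallel
    return "layout ready"


async def init_menus(translations):
    await asyncio.sleep(0.05)  # the only piece that must wait for languages
    return f"menus ready ({translations['lang']})"


async def startup():
    # Start independent work concurrently; only init_menus blocks on languages.
    layout, translations = await asyncio.gather(init_layout(), load_languages())
    menus = await init_menus(translations)
    return layout, menus


print(asyncio.run(startup()))
```
Only the necessary dependency edge (menus waiting on translations) blocks; everything else proceeds concurrently, which is exactly the structure the issue asks for.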
652 | 4,164,919,538 | IssuesEvent | 2016-06-19 05:08:57 | Particular/ServiceControl | https://api.github.com/repos/Particular/ServiceControl | closed | Reinstalling an instance wont work on the same data files | Tag: Maintainer Prio Type: Improvement |
I tried to reinstall an instance. I first removed the instance via the management tool. It asked if I wanted to remove the data files and selected to keep it. I then added an instance again with the same folders and got a message that it could not add it because there were already files present.
Instead of blocking it should ask the question if you are sure that you want to use those folders and reuse existing database files. | True | Reinstalling an instance wont work on the same data files -
I tried to reinstall an instance. I first removed the instance via the management tool. It asked if I wanted to remove the data files and selected to keep it. I then added an instance again with the same folders and got a message that it could not add it because there were already files present.
Instead of blocking it should ask the question if you are sure that you want to use those folders and reuse existing database files. | main | reinstalling an instance wont work on the same data files i tried to reinstall an instance i first removed the instance via the management tool it asked if i wanted to remove the data files and selected to keep it i then added an instance again with the same folders and got a message that it could not add it because there were already files present instead of blocking it should ask the question if you are sure that you want to use those folders and reuse existing database files | 1 |
261,799 | 22,773,945,354 | IssuesEvent | 2022-07-08 12:47:44 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | opened | Teste de generalizacao para a tag Informações institucionais - Link de acesso - Amparo do Serra | generalization test development | DoD: Realizar o teste de Generalização do validador da tag Informações institucionais - Link de acesso para o Município de Amparo do Serra. | 1.0 | Teste de generalizacao para a tag Informações institucionais - Link de acesso - Amparo do Serra - DoD: Realizar o teste de Generalização do validador da tag Informações institucionais - Link de acesso para o Município de Amparo do Serra. | non_main | teste de generalizacao para a tag informações institucionais link de acesso amparo do serra dod realizar o teste de generalização do validador da tag informações institucionais link de acesso para o município de amparo do serra | 0 |
204 | 2,849,682,282 | IssuesEvent | 2015-05-30 22:21:22 | jenkinsci/slack-plugin | https://api.github.com/repos/jenkinsci/slack-plugin | closed | Create a project to bootstrap jenkins with the slack plugin configured for testing | maintainer communication | To aide in testing it would be useful to have a project that pre-configures a Jenkins instance for testing the slack plugin. This would be useful for people other than myself. | True | Create a project to bootstrap jenkins with the slack plugin configured for testing - To aide in testing it would be useful to have a project that pre-configures a Jenkins instance for testing the slack plugin. This would be useful for people other than myself. | main | create a project to bootstrap jenkins with the slack plugin configured for testing to aide in testing it would be useful to have a project that pre configures a jenkins instance for testing the slack plugin this would be useful for people other than myself | 1 |
289,033 | 24,952,339,701 | IssuesEvent | 2022-11-01 08:34:56 | unbekanntes-pferd/dracoon-python-api | https://api.github.com/repos/unbekanntes-pferd/dracoon-python-api | opened | create unit tests via http mocks (respx) | testing | use respx for httpx mocking and create unit tests that do not require an active HTTP connection
create a testing helper to quickly build required responses | 1.0 | create unit tests via http mocks (respx) - use respx for httpx mocking and create unit tests that do not require an active HTTP connection
create a testing helper to quickly build required responses | non_main | create unit tests via http mocks respx use respx for httpx mocking and create unit tests that do not require an active http connection create a testing helper to quickly build required responses | 0 |
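A minimal shape for such a test, using the documented respx/httpx APIs (the endpoint URL and payload are placeholders, not the real DRACOON API):
```
import httpx
import respx


@respx.mock
def test_get_user_account_without_network():
    # Register a mocked route; no real HTTP connection is made.
    respx.get("https://dracoon.example.com/api/v4/user/account").mock(
        return_value=httpx.Response(200, json={"id": 1, "firstName": "Ada"})
    )

    resp = httpx.get("https://dracoon.example.com/api/v4/user/account")
    assert resp.status_code == 200
    assert resp.json()["firstName"] == "Ada"
```
The requested helper could then be a small function returning pre-built `httpx.Response` objects for the payload shapes the client tests need.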
5,870 | 31,843,261,984 | IssuesEvent | 2023-09-14 17:53:35 | bazelbuild/intellij | https://api.github.com/repos/bazelbuild/intellij | opened | Newly added classes shown as "unsynced" | type: bug awaiting-maintainer | ### Description of the bug:
When I add a new file, it needs to be synced even though this is not what is really required with the current Java `.ijwb` project structure.
<img width="545" alt="image" src="https://github.com/bazelbuild/intellij/assets/50216138/d64cecca-8398-43cf-91ce-4de782200c5d">
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Add a new Java class to the existing package.
### Which Intellij IDE are you using? Please provide the specific version.
2023.2.1
### What programming languages and tools are you using? Please provide specific versions.
Java
### What Bazel plugin version are you using?
2023.08.29.0.1-api-version-232
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
Here is a PR to mitigate the issue https://github.com/bazelbuild/intellij/pull/5341 | True | Newly added classes shown as "unsynced" - ### Description of the bug:
When I add a new file, it needs to be synced even though this is not what is really required with the current Java `.ijwb` project structure.
<img width="545" alt="image" src="https://github.com/bazelbuild/intellij/assets/50216138/d64cecca-8398-43cf-91ce-4de782200c5d">
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Add a new Java class to the existing package.
### Which Intellij IDE are you using? Please provide the specific version.
2023.2.1
### What programming languages and tools are you using? Please provide specific versions.
Java
### What Bazel plugin version are you using?
2023.08.29.0.1-api-version-232
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
Here is a PR to mitigate the issue https://github.com/bazelbuild/intellij/pull/5341 | main | newly added classes shown as unsynced description of the bug when i add a new file it needs to be synced even though this is not what is really required with the current java ijwb project structure img width alt image src what s the simplest easiest way to reproduce this bug please provide a minimal example if possible add a new java class to the existing package which intellij ide are you using please provide the specific version what programming languages and tools are you using please provide specific versions java what bazel plugin version are you using api version have you found anything relevant by searching the web no response any other information logs or outputs that you want to share here is a pr to mitigate the issue | 1 |
24,264 | 12,247,590,402 | IssuesEvent | 2020-05-05 16:07:07 | grafana/grafana | https://api.github.com/repos/grafana/grafana | closed | RangeError: Maximum call stack size exceeded | datasource/Prometheus type/performance | <!--
Please use this template while reporting a bug and provide as much info as possible.
Questions should be posted to https://community.grafana.com
Use query inspector to troubleshoot issues: https://community.grafana.com/t/using-grafanas-query-inspector-to-troubleshoot-issues/2630
-->
**What happened**:
An unexpected error happens when searching in /explore for a short string with a big result set like "gc". The datasource is Prometheus.
JS console shows error:
```
QueryRow will unmount
react-dom.production.min.js:196 RangeError: Maximum call stack size exceeded
at typeahead.ts:11
at Array.reduce (<anonymous>)
at ie (typeahead.ts:6)
at t.n.componentDidUpdate (Typeahead.tsx:75)
at bs (react-dom.production.min.js:251)
at e.unstable_runWithPriority (scheduler.production.min.js:18)
at Hi (react-dom.production.min.js:120)
at ms (react-dom.production.min.js:244)
at os (react-dom.production.min.js:223)
at react-dom.production.min.js:121
```
**What you expected to happen**:
See list of time series that match the search pattern.
**How to reproduce it (as minimally and precisely as possible)**:
I guess this depends on the data in your datasource. Try short strings like "gc".
**Anything else we need to know?**:
**Environment**:
- Grafana version: Any 6.7.x is affected, 6.6.x is fine.
- Data source type & version: Prometheus 2.17.x, VictoriaMetrics 1.34.9
- OS Grafana is installed on: Linux, Ubuntu
- User OS & Browser: Mac OS X, Chrome
- Grafana plugins:
- Others:
| True | RangeError: Maximum call stack size exceeded - <!--
Please use this template while reporting a bug and provide as much info as possible.
Questions should be posted to https://community.grafana.com
Use query inspector to troubleshoot issues: https://community.grafana.com/t/using-grafanas-query-inspector-to-troubleshoot-issues/2630
-->
**What happened**:
An unexpected error happens when searching in /explore for a short string with a big result set like "gc". The datasource is Prometheus.
JS console shows error:
```
QueryRow will unmount
react-dom.production.min.js:196 RangeError: Maximum call stack size exceeded
at typeahead.ts:11
at Array.reduce (<anonymous>)
at ie (typeahead.ts:6)
at t.n.componentDidUpdate (Typeahead.tsx:75)
at bs (react-dom.production.min.js:251)
at e.unstable_runWithPriority (scheduler.production.min.js:18)
at Hi (react-dom.production.min.js:120)
at ms (react-dom.production.min.js:244)
at os (react-dom.production.min.js:223)
at react-dom.production.min.js:121
```
**What you expected to happen**:
See list of time series that match the search pattern.
**How to reproduce it (as minimally and precisely as possible)**:
I guess this depends on the data in your datasource. Try short strings like "gc".
**Anything else we need to know?**:
**Environment**:
- Grafana version: Any 6.7.x is affected, 6.6.x is fine.
- Data source type & version: Prometheus 2.17.x, VictoriaMetrics 1.34.9
- OS Grafana is installed on: Linux, Ubuntu
- User OS & Browser: Mac OS X, Chrome
- Grafana plugins:
- Others:
| non_main | rangeerror maximum call stack size exceeded please use this template while reporting a bug and provide as much info as possible questions should be posted to use query inspector to troubleshoot issues what happened an unexpected error happens when searching in explore for a short string with a big result set like gc the datasource is prometheus js console shows error queryrow will unmount react dom production min js rangeerror maximum call stack size exceeded at typeahead ts at array reduce at ie typeahead ts at t n componentdidupdate typeahead tsx at bs react dom production min js at e unstable runwithpriority scheduler production min js at hi react dom production min js at ms react dom production min js at os react dom production min js at react dom production min js what you expected to happen see list of time series that match the search pattern how to reproduce it as minimally and precisely as possible i guess this depends on the data in your datasource try short strings like gc anything else we need to know environment grafana version any x is affected x is fine data source type version prometheus x victoriametrics os grafana is installed on linux ubuntu user os browser mac os x chrome grafana plugins others | 0 |
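The trace in this record points at an `Array.reduce` inside `typeahead.ts` blowing the call stack once the suggestion list gets huge. The exact JavaScript culprit isn't shown; the failure class (call depth growing with input size) and its standard fix, iteration, look like this in a Python analogy (illustrative only):
```
def flatten_recursive(groups):
    # One call frame per group: fine for small inputs, but a very large
    # suggestion list exceeds the interpreter's call-stack limit.
    if not groups:
        return []
    return list(groups[0]) + flatten_recursive(groups[1:])


def flatten_iterative(groups):
    out = []
    for group in groups:  # constant stack depth regardless of input size
        out.extend(group)
    return out


big = [["gc_duration_seconds"]] * 50_000  # many series matching "gc"
try:
    flatten_recursive(big)
except RecursionError:
    print("recursive version exceeded the call stack")
print(len(flatten_iterative(big)))  # 50000
```
That matches the symptom profile: short queries like "gc" return the largest result sets, so only they trip the limit.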
192,313 | 6,848,579,281 | IssuesEvent | 2017-11-13 19:00:44 | USGCRP/gcis | https://api.github.com/repos/USGCRP/gcis | opened | Add Images for Figures with a Single Panel | context Content Management priority high type content | For the CSSR, we only populated Images for Figures with multiple subpanels in the TSU system. We should go through and populate the images for single-panel figures. | 1.0 | Add Images for Figures with a Single Panel - For the CSSR, we only populated Images for Figures with multiple subpanels in the TSU system. We should go through and populate the images for single-panel figures. | non_main | add images for figures with a single panel for the cssr we only populated images for figures with multiple subpanels in the tsu system we should go through and populate the images for single panel figures | 0 |
5,536 | 27,704,480,152 | IssuesEvent | 2023-03-14 10:16:55 | conbench/conbench | https://api.github.com/repos/conbench/conbench | closed | UI HTML: outdated pattern confuses, why flask-bootstrap, bootstrap CSS is included twice | UI/UX maintainability | We seem to make use of the rather outdated https://github.com/mbr/flask-bootstrap.
It has a base template that gets automatically used:
https://github.com/mbr/flask-bootstrap/blob/master/flask_bootstrap/templates/bootstrap/base.html
We use this Flask extension here: https://github.com/conbench/conbench/blob/fbf5a6ce898532d44327e3ac961f5abb852843bc/conbench/extensions.py#L5
What is this extension good for?
> Flask-Bootstrap packages [Bootstrap](http://getbootstrap.com/) into an extension that mostly consists of a blueprint named 'bootstrap'. It can also create links to serve Bootstrap from a CDN and works with no boilerplate code in your application.
That's ambiguous. Value unclear right now. Results in complexity that is hard to see through.
When I look at the HTML served by conbench.ursa.dev, we can see that the bootstrap CSS is included twice, from two different CDNs:
```
<!-- Bootstrap -->
<link href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet">
<link rel="shortcut icon" href="/static/favicon.ico?q=1677772058">
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet" type="text/css" />
```
I think the first one is included by Flask-Bootstrap, and the second one is manually included here:
https://github.com/conbench/conbench/blob/fbf5a6ce898532d44327e3ac961f5abb852843bc/conbench/templates/app.html#L10
 | True | UI HTML: outdated pattern confuses, why flask-bootstrap, bootstrap CSS is included twice - We seem to make use of the rather outdated https://github.com/mbr/flask-bootstrap.
It has a base template that gets automatically used:
https://github.com/mbr/flask-bootstrap/blob/master/flask_bootstrap/templates/bootstrap/base.html
We use this Flask extension here: https://github.com/conbench/conbench/blob/fbf5a6ce898532d44327e3ac961f5abb852843bc/conbench/extensions.py#L5
What is this extension good for?
> Flask-Bootstrap packages [Bootstrap](http://getbootstrap.com/) into an extension that mostly consists of a blueprint named 'bootstrap'. It can also create links to serve Bootstrap from a CDN and works with no boilerplate code in your application.
That's ambiguous. Value unclear right now. Results in complexity that is hard to see through.
When I look at the HTML served by conbench.ursa.dev, we can see that the bootstrap CSS is included twice, from two different CDNs:
```
<!-- Bootstrap -->
<link href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet">
<link rel="shortcut icon" href="/static/favicon.ico?q=1677772058">
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet" type="text/css" />
```
I think the first one is included by Flask-Bootstrap, and the second one is manually included here:
https://github.com/conbench/conbench/blob/fbf5a6ce898532d44327e3ac961f5abb852843bc/conbench/templates/app.html#L10
| main | ui html outest pattern confuses why flask bootstrap bootstrap css is included twice we seem to make use of the rather outdated it has a base template that gets automatically used we use this flask extension here what is this extension good for flask bootstrap packages into an extension that mostly consists of a blueprint named bootstrap it can also create links to serve bootstrap from a cdn and works with no boilerplate code in your application that s ambiguous value unclear right now results in complexity that is hard to see through when i look at the html served by conbench ursa dev then we can see that the bootstrap css is included twice from two different cdns i think the first one is included by flask bootstrap and the second one is manually included here | 1 |
3,237 | 12,368,706,591 | IssuesEvent | 2020-05-18 14:13:30 | Kashdeya/Tiny-Progressions | https://api.github.com/repos/Kashdeya/Tiny-Progressions | closed | Big Pouch is voiding inventory | Version not Maintainted | I haven't found out why it happens, it is inconsistent, but the Big Pouch is sometimes completely empty when I start the world. It is voiding my stuff.
Version tinyprogressions-1.12.2-3.3.34-Release
In the Antimatter Chemistry modPack version 1.2.13 | True | Big Pouch is voiding inventory - I haven't found out why it happens, it is inconsistent, but the Big Pouch is sometimes completely empty when I start the world. It is voiding my stuff.
Version tinyprogressions-1.12.2-3.3.34-Release
In the Antimatter Chemistry modPack version 1.2.13 | main | big pouch is voiding inventory i haven t found out why it happens it is inconsistent but the big pouch is sometimes completely empty when i start the world it is voiding my stuff version tinyprogressions release in the antimatter chemistry modpack version | 1 |
161,526 | 25,354,493,380 | IssuesEvent | 2022-11-20 06:42:44 | BedalFriend/BaedalFriend-FE | https://api.github.com/repos/BedalFriend/BaedalFriend-FE | closed | Main page markup, CSS | Design | ## Description
> Main (home) page markup, CSS
## Progress
- [x] Top carousel
- [x] Middle search bar, categories
- [x] Bottom closing-soon list
 | 1.0 | Main page markup, CSS - ## Description
> Main (home) page markup, CSS
## Progress
- [x] Top carousel
- [x] Middle search bar, categories
- [x] Bottom closing-soon list
 | non_main | main page markup css description main home page markup css progress top carousel middle search bar categories bottom closing soon list | 0 |
429,598 | 30,084,823,479 | IssuesEvent | 2023-06-29 07:50:22 | garraflavatra/rolens | https://api.github.com/repos/garraflavatra/rolens | closed | Docs: update shortcut information | documentation | This information is still correct for Rolens 0.2.1, but it should be updated in 0.3.0.
https://github.com/garraflavatra/rolens/blob/8a7518532d44cbafaf31fd8094d309d5ecff596c/website/data/shortcuts.json#L2-L10 | 1.0 | Docs: update shortcut information - This information is still correct for Rolens 0.2.1, but it should be updated in 0.3.0.
https://github.com/garraflavatra/rolens/blob/8a7518532d44cbafaf31fd8094d309d5ecff596c/website/data/shortcuts.json#L2-L10 | non_main | docs update shortcut information this information is still correct for rolens but it should be updated in | 0 |
4,285 | 21,558,497,215 | IssuesEvent | 2022-04-30 20:44:40 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | After saving, the cell should display the saved value from Postgres instead of the user input | type: bug work: frontend status: ready restricted: maintainers | ## Reproduce
1. Set up a number column with "Decimal Places" = 2 and "Max digits" = 1000.
1. Edit a cell in this column. Enter `1.23` and save. Observe `1.23` to be displayed. Refresh the table. Observe `1.23` to be displayed. This is all good.
1. Now edit the cell and change the value to `1.239` and save.
1. Expect `1.24` to be displayed after the value is saved, because the saved value is rounded by Postgres.
1. Observe `1.239` to be displayed, as the user entered it.
1. Refresh the table, and observe `1.24` to be displayed, as stored in Postgres.
| True | After saving, the cell should display the saved value from Postgres instead of the user input - ## Reproduce
1. Set up a number column with "Decimal Places" = 2 and "Max digits" = 1000.
1. Edit a cell in this column. Enter `1.23` and save. Observe `1.23` to be displayed. Refresh the table. Observe `1.23` to be displayed. This is all good.
1. Now edit the cell and change the value to `1.239` and save.
1. Expect `1.24` to be displayed after the value is saved, because the saved value is rounded by Postgres.
1. Observe `1.239` to be displayed, as the user entered it.
1. Refresh the table, and observe `1.24` to be displayed, as stored in Postgres.
| main | after saving the cell should display the saved value from postgres instead of the user input reproduce set up a number column with decimal places and max digits edit a cell in this column enter and save observe to be displayed refresh the table observe to be displayed this is all good now edit the cell and change the value to and save expect to be displayed after the value is saved because the saved value is rounded by postgres observe to be displayed as the user entered it refresh the table and observe to be displayed as stored in postgres | 1 |
666,755 | 22,366,618,071 | IssuesEvent | 2022-06-16 05:13:49 | nakhll-company/nakhll_backend | https://api.github.com/repos/nakhll-company/nakhll_backend | closed | Profile API error | bug Priority 2 | In the profile API, if we try to update only the profile picture, we run into an error.
The date of birth and gender must also be provided.
http://localhost:8000/api/v1/profile/edit_me/ | 1.0 | Profile API error - In the profile API, if we try to update only the profile picture, we run into an error.
The date of birth and gender must also be provided.
http://localhost:8000/api/v1/profile/edit_me/ | non_main | profile api error in the profile api if we try to update only the profile picture we run into an error the date of birth and gender must also be provided | 0
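A minimal request reproducing the failure above might look like this; the endpoint is taken from the report, while the form field name and auth header are assumptions:
```
# Field name "image" and the auth header are assumed; endpoint is from the report.
curl -X PATCH http://localhost:8000/api/v1/profile/edit_me/ \
  -H "Authorization: Token <token>" \
  -F "image=@avatar.png"
```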
1,047 | 4,861,426,555 | IssuesEvent | 2016-11-14 08:47:04 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | firewalld requirements on Centos 7 | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
firewalld module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
- Centos 7
##### SUMMARY
The [module documentation](http://docs.ansible.com/ansible/firewalld_module.html) states that: _Requires the python2 bindings of firewalld, which may not be installed by default if the distribution switched to python 3_ but these python2 bindings of firewalld do not seem to be available on Centos 7 and makes its execution fail.
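A possible pre-task to pull the bindings in on CentOS 7; a minimal sketch, assuming the python2 bindings ship in the distribution's `python-firewall` package:
```
- name: Ensure the python2 firewalld bindings are present
  yum:
    name: python-firewall  # assumed package name for the python2 bindings
    state: present
```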
##### STEPS TO REPRODUCE
Execute the following statement:
```
- name: Allow http port in firewall
firewalld:
state: enabled
permanent: true
port: "{{ apache_http_port }}/tcp"
```
##### EXPECTED RESULTS
```
TASK [ansible-apache : Allow http port in firewall]
ok: [ansible]
```
##### ACTUAL RESULTS
```
fatal: [ansible]: FAILED! => {"changed": false, "failed": true, "msg": "firewalld and its python 2 module are required for this module"}
``` | True | firewalld requirements on Centos 7 - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
firewalld module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
- Centos 7
##### SUMMARY
The [module documentation](http://docs.ansible.com/ansible/firewalld_module.html) states that: _Requires the python2 bindings of firewalld, which may not be installed by default if the distribution switched to python 3_ but these python2 bindings of firewalld do not seem to be available on Centos 7 and makes its execution fail.
##### STEPS TO REPRODUCE
Execute the following statement:
```
- name: Allow http port in firewall
firewalld:
state: enabled
permanent: true
port: "{{ apache_http_port }}/tcp"
```
##### EXPECTED RESULTS
```
TASK [ansible-apache : Allow http port in firewall]
ok: [ansible]
```
##### ACTUAL RESULTS
```
fatal: [ansible]: FAILED! => {"changed": false, "failed": true, "msg": "firewalld and its python 2 module are required for this module"}
``` | main | firewalld requirements on centos issue type bug report component name firewalld module ansible version ansible config file configured module search path default w o overrides os environment centos summary the states that requires the bindings of firewalld which may not be installed by default if the distribution switched to python but these bindings of firewalld do not seem to be available on centos and makes its execution fail steps to reproduce execute the following statement name allow http port in firewall firewalld state enabled permanent true port apache http port tcp expected results task ok actual results fatal failed changed false failed true msg firewalld and its python module are required for this module | 1 |
1,837 | 6,577,368,886 | IssuesEvent | 2017-09-12 00:25:35 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Docker module: "argument memory_limit is of type <type 'str'> and we were unable to convert to int" on Ansible 2.0.2.0-1.el7 | affects_2.0 bug_report cloud docker waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
_docker
##### ANSIBLE VERSION
```
ansible 2.0.2.0
```
##### OS / ENVIRONMENT
Centos 7
##### SUMMARY
After upgrading to ansible 2.0.2.0 it's no longer possible to enter memory_limit as a human-readable string (e.g. 256MB); only bytes are accepted.
##### STEPS TO REPRODUCE
try set memory_limit: 256MB in docker container task
```
- name: sphinx container
docker:
name: sphinx
image: michalzubkowicz/docker-sphinxsearch
state: started
restart_policy: always
memory_limit: 256MB
```
##### EXPECTED RESULTS
Should accept string as in earlier versions
##### ACTUAL RESULTS
Is showing error
```
argument memory_limit is of type <type 'str'> and we were unable to convert to int
```
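A workaround sketch until human-readable values are accepted again: pass the limit as a plain byte count (the module is assumed to accept a bare integer):
```
- name: sphinx container
  docker:
    name: sphinx
    image: michalzubkowicz/docker-sphinxsearch
    state: started
    restart_policy: always
    memory_limit: 268435456  # 256MB expressed in bytes (256 * 1024 * 1024)
```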
| True | Docker module: "argument memory_limit is of type <type 'str'> and we were unable to convert to int" on Ansible 2.0.2.0-1.el7 - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
_docker
##### ANSIBLE VERSION
```
ansible 2.0.2.0
```
##### OS / ENVIRONMENT
Centos 7
##### SUMMARY
After upgrading to ansible 2.0.2.0 it's no longer possible to enter memory_limit as a human-readable string (e.g. 256MB); only bytes are accepted.
##### STEPS TO REPRODUCE
try set memory_limit: 256MB in docker container task
```
- name: sphinx container
docker:
name: sphinx
image: michalzubkowicz/docker-sphinxsearch
state: started
restart_policy: always
memory_limit: 256MB
```
##### EXPECTED RESULTS
Should accept string as in earlier versions
##### ACTUAL RESULTS
Is showing error
```
argument memory_limit is of type <type 'str'> and we were unable to convert to int
```
| main | docker module argument memory limit is of type and we were unable to convert to int on ansible issue type bug report component name docker ansible version ansible os environment centos summary after upgrade to ansible it s not possible to enter memory limit as human readable string ie only bytes are accepted steps to reproduce try set memory limit in docker container task name sphinx container docker name sphinx image michalzubkowicz docker sphinxsearch state started restart policy always memory limit expected results should accept string as in earlier versions actual results is showing error argument memory limit is of type and we were unable to convert to int | 1 |
38,387 | 6,669,161,723 | IssuesEvent | 2017-10-03 18:20:28 | spring-cloud/spring-cloud-dataflow | https://api.github.com/repos/spring-cloud/spring-cloud-dataflow | closed | Document the available options for Maven "update-policy" | documentation in pr | As a user, I'd like to know all the `update-policy` options applicable for maven artifacts. Perhaps we could point to this link [here](https://maven.apache.org/ref/3.5.0/maven-artifact/apidocs/org/apache/maven/artifact/repository/ArtifactRepositoryPolicy.html)?
The current [docs](https://docs.spring.io/spring-cloud-dataflow/docs/1.3.0.M2/reference/htmlsingle/#getting-started-maven-configuration) on this subject only point to the steps to override. | 1.0 | Document the available options for Maven "update-policy" - As a user, I'd like to know all the `update-policy` options applicable for maven artifacts. Perhaps we could point to this link [here](https://maven.apache.org/ref/3.5.0/maven-artifact/apidocs/org/apache/maven/artifact/repository/ArtifactRepositoryPolicy.html)?
The current [docs](https://docs.spring.io/spring-cloud-dataflow/docs/1.3.0.M2/reference/htmlsingle/#getting-started-maven-configuration) on this subject only point to the steps to override. | non_main | document the available options for maven update policy as a user i d like to know all the update policy options applicable for maven artifacts perhaps we could point to this link the current on this subject only point to the steps to override | 0 |
362,663 | 25,384,827,415 | IssuesEvent | 2022-11-21 20:53:51 | honeycombio/honeycomb-opentelemetry-node | https://api.github.com/repos/honeycombio/honeycomb-opentelemetry-node | closed | docs: add sdk to one service in 🍳 example greeting service 🍳 | type: documentation | **Is your feature request related to a problem? Please describe.**
To help ensure interoperability between beelines and vanilla OTel and this SDK, update one of the services in [Example Greeting Service](https://github.com/honeycombio/example-greeting-service/tree/main/node) to use the new honeycomb SDK.
| 1.0 | docs: add sdk to one service in 🍳 example greeting service 🍳 - **Is your feature request related to a problem? Please describe.**
To help ensure interoperability between beelines and vanilla OTel and this SDK, update one of the services in [Example Greeting Service](https://github.com/honeycombio/example-greeting-service/tree/main/node) to use the new honeycomb SDK.
| non_main | docs add sdk to one service in 🍳 example greeting service 🍳 is your feature request related to a problem please describe to help ensure interoperability between beelines and vanilla otel and this sdk update one of the services in to use the new honeycomb sdk | 0 |
267,899 | 8,394,291,200 | IssuesEvent | 2018-10-09 23:51:22 | gctools-outilsgc/gcpedia | https://api.github.com/repos/gctools-outilsgc/gcpedia | closed | Certain users cannot request a password reset. | Priority: Low Project: Legacy Tools [zube]: In Review bug | As a user, password resets are showing the following error:

| 1.0 | Certain users cannot request a password reset. - As a user, password resets are showing the following error:

| non_main | certain users cannot request a password reset as a user password resets are showing the following error | 0 |
1,082 | 4,931,281,073 | IssuesEvent | 2016-11-28 09:40:42 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | vmware_vm_shell: "The guest operations agent could not be contacted" | affects_2.2 bug_report cloud vmware waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_vm_shell
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /home/mikecali/dao-infra/ansible.cfg
configured module search path = ['./library']
Python 2.7.5
Vsphere 6.0
```
##### CONFIGURATION
Default configuration
##### OS / ENVIRONMENT
RHEL 7.2
##### SUMMARY
The vmware_vm_shell module seems unable to reach the VM shell via vCenter even with a successful login to vSphere. I am using this module to execute a script that is pre-loaded into the template that I baked using Packer.
##### STEPS TO REPRODUCE
running playbook below
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Reconfigure VM network
hosts: localhost
gather_facts: False
vars:
vsphere: alabs
instance_set: chorus-7.2
pre_tasks:
- include_vars: ../../secrets/{{vsphere}}-vsphere.yml
- include_vars: ../../vars/global-constants.yml
- include_vars: ../../vars/{{instance_set}}-instances.yml
tasks:
- name: Reconfigure instance neworking
vmware_vm_shell:
hostname: 172.17.203.6
username: create_vm
password: 'xxxxxxx'
datacenter: ALABS-xxxx
vm_id: RH-BUILD2
vm_id_type: vm_name
# validate_certs: false
vm_username: rhel
vm_password: *****
vm_shell: /dnz_support/ngito-reconfigure-network.sh
vm_shell_args: " >> reconfigure-networking.log"
vm_shell_env:
- "RECONFIGURE_IPADDR=172.17.203.10"
- "RECONFIGURE_NETMASK=255.255.255.192"
- "RECONFIGURE_GW=172.17.203.1"
- "RECONFIGURE_NAME=ens32"
- "RECONFIGURE_DEVICE=ens32"
- "RECONFIGURE_ONBOOT=yes"
- "RECONFIGURE_TYPE=Ethernet"
- "RECONFIGURE_BOOTPROTO=none"
- "REBOOT_ON_RECONFIGURE=true"
vm_shell_cwd: "/tmp"
```
##### EXPECTED RESULTS
vmware_vm_shell should login to the server using the username and password provided and execute the script /dnz_support/ngito-reconfigure-network.sh which is preloaded to the OS and then reboot.
##### ACTUAL RESULTS
PLAY [Reconfigure VM network] **************************************************
```
TASK [Reconfigure instance neworking] ******************************************
task path: /home/mikecali/dao-infra/playbooks/vcenter/guest-create-pysphere.yml:85
Using module file /home/mikecali/dao-infra/library/vmware_vm_shell.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: mikecali
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532 `" && echo ansible-tmp-1478813226.44-34711828301532="` echo $HOME/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532 `" ) && sleep 0'
<localhost> PUT /tmp/tmpTmMbHC TO /home/mikecali/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532/vmware_vm_shell.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/mikecali/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532/ /home/mikecali/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532/vmware_vm_shell.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /home/mikecali/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532/vmware_vm_shell.py; rm -rf "/home/mikecali/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"datacenter": "ALABS-Wellington",
"hostname": "172.17.203.6",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"username": "administrator@vsphere.local",
"validate_certs": true,
"vm_id": "RH-BUILD2",
"vm_id_type": "vm_name",
"vm_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"vm_shell": "/dnz_support/ngito-reconfigure-network.sh",
"vm_shell_args": " >> reconfigure-networking.log",
"vm_shell_cwd": "/tmp",
"vm_shell_env": [
"RECONFIGURE_IPADDR=172.17.203.10",
"RECONFIGURE_NETMASK=255.255.255.192",
"RECONFIGURE_GW=172.17.203.1",
"RECONFIGURE_NAME=ens32",
"RECONFIGURE_DEVICE=ens32",
"RECONFIGURE_ONBOOT=yes",
"RECONFIGURE_TYPE=Ethernet",
"RECONFIGURE_BOOTPROTO=none",
"REBOOT_ON_RECONFIGURE=true"
],
"vm_username": "rhel"
},
"module_name": "vmware_vm_shell"
},
"msg": "The guest operations agent could not be contacted."
}
to retry, use: --limit @/home/mikecali/dao-infra/playbooks/vcenter/guest-create-pysphere.retry
PLAY RECAP *********************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=1
```
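This error usually means VMware Tools is not running (or not yet ready) in the guest, so the guest-operations API cannot reach it. A hedged mitigation sketch, using a wait task before the shell call (the `vmware_guest_tools_wait` module only appeared in later Ansible releases):
```
- name: Wait for VMware Tools before invoking the in-guest shell
  vmware_guest_tools_wait:
    hostname: 172.17.203.6
    username: create_vm
    password: 'xxxxxxx'
    name: RH-BUILD2
```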
| True | vmware_vm_shell: "The guest operations agent could not be contacted" - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_vm_shell
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /home/mikecali/dao-infra/ansible.cfg
configured module search path = ['./library']
Python 2.7.5
Vsphere 6.0
```
##### CONFIGURATION
Default configuration
##### OS / ENVIRONMENT
RHEL 7.2
##### SUMMARY
The vmware_vm_shell module seems unable to reach the VM shell via vCenter even with a successful login to vSphere. I am using this module to execute a script that is pre-loaded into the template that I baked using Packer.
##### STEPS TO REPRODUCE
running playbook below
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Reconfigure VM network
hosts: localhost
gather_facts: False
vars:
vsphere: alabs
instance_set: chorus-7.2
pre_tasks:
- include_vars: ../../secrets/{{vsphere}}-vsphere.yml
- include_vars: ../../vars/global-constants.yml
- include_vars: ../../vars/{{instance_set}}-instances.yml
tasks:
- name: Reconfigure instance neworking
vmware_vm_shell:
hostname: 172.17.203.6
username: create_vm
password: 'xxxxxxx'
datacenter: ALABS-xxxx
vm_id: RH-BUILD2
vm_id_type: vm_name
# validate_certs: false
vm_username: rhel
vm_password: *****
vm_shell: /dnz_support/ngito-reconfigure-network.sh
vm_shell_args: " >> reconfigure-networking.log"
vm_shell_env:
- "RECONFIGURE_IPADDR=172.17.203.10"
- "RECONFIGURE_NETMASK=255.255.255.192"
- "RECONFIGURE_GW=172.17.203.1"
- "RECONFIGURE_NAME=ens32"
- "RECONFIGURE_DEVICE=ens32"
- "RECONFIGURE_ONBOOT=yes"
- "RECONFIGURE_TYPE=Ethernet"
- "RECONFIGURE_BOOTPROTO=none"
- "REBOOT_ON_RECONFIGURE=true"
vm_shell_cwd: "/tmp"
```
##### EXPECTED RESULTS
vmware_vm_shell should login to the server using the username and password provided and execute the script /dnz_support/ngito-reconfigure-network.sh which is preloaded to the OS and then reboot.
##### ACTUAL RESULTS
PLAY [Reconfigure VM network] **************************************************
```
TASK [Reconfigure instance neworking] ******************************************
task path: /home/mikecali/dao-infra/playbooks/vcenter/guest-create-pysphere.yml:85
Using module file /home/mikecali/dao-infra/library/vmware_vm_shell.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: mikecali
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532 `" && echo ansible-tmp-1478813226.44-34711828301532="` echo $HOME/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532 `" ) && sleep 0'
<localhost> PUT /tmp/tmpTmMbHC TO /home/mikecali/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532/vmware_vm_shell.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/mikecali/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532/ /home/mikecali/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532/vmware_vm_shell.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /home/mikecali/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532/vmware_vm_shell.py; rm -rf "/home/mikecali/.ansible/tmp/ansible-tmp-1478813226.44-34711828301532/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"datacenter": "ALABS-Wellington",
"hostname": "172.17.203.6",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"username": "administrator@vsphere.local",
"validate_certs": true,
"vm_id": "RH-BUILD2",
"vm_id_type": "vm_name",
"vm_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"vm_shell": "/dnz_support/ngito-reconfigure-network.sh",
"vm_shell_args": " >> reconfigure-networking.log",
"vm_shell_cwd": "/tmp",
"vm_shell_env": [
"RECONFIGURE_IPADDR=172.17.203.10",
"RECONFIGURE_NETMASK=255.255.255.192",
"RECONFIGURE_GW=172.17.203.1",
"RECONFIGURE_NAME=ens32",
"RECONFIGURE_DEVICE=ens32",
"RECONFIGURE_ONBOOT=yes",
"RECONFIGURE_TYPE=Ethernet",
"RECONFIGURE_BOOTPROTO=none",
"REBOOT_ON_RECONFIGURE=true"
],
"vm_username": "rhel"
},
"module_name": "vmware_vm_shell"
},
"msg": "The guest operations agent could not be contacted."
}
to retry, use: --limit @/home/mikecali/dao-infra/playbooks/vcenter/guest-create-pysphere.retry
PLAY RECAP *********************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=1
```
| main | vmware vm shell the guest operations agent could not be contacted issue type bug report component name vmware vm shell ansible version ansible config file home mikecali dao infra ansible cfg configured module search path python vsphere configuration default configuration os environment rhel summary vmware vm shell module seems unable to find the vm shell on vcenter even with successful login to vsphere i am using this module to execute a script that is pre loaded to the template that i baked using packer steps to reproduce running playbook below name reconfigure vm network hosts localhost gather facts false vars vsphere alabs instance set chorus pre tasks include vars secrets vsphere vsphere yml include vars vars global constants yml include vars vars instance set instances yml tasks name reconfigure instance neworking vmware vm shell hostname username create vm password xxxxxxx datacenter alabs xxxx vm id rh vm id type vm name validate certs false vm username rhel vm password vm shell dnz support ngito reconfigure network sh vm shell args reconfigure networking log vm shell env reconfigure ipaddr reconfigure netmask reconfigure gw reconfigure name reconfigure device reconfigure onboot yes reconfigure type ethernet reconfigure bootproto none reboot on reconfigure true vm shell cwd tmp expected results vmware vm shell should login to the server using the username and password provided and execute the script dnz support ngito reconfigure network sh which is preloaded to the os and then reboot actual results play task task path home mikecali dao infra playbooks vcenter guest create pysphere yml using module file home mikecali dao infra library vmware vm shell py establish local connection for user mikecali exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmptmmbhc to home mikecali ansible tmp ansible tmp vmware vm shell py exec bin sh c chmod u x home mikecali ansible tmp ansible tmp home mikecali ansible tmp ansible tmp vmware vm shell py sleep exec bin sh c usr bin python home mikecali ansible tmp ansible tmp vmware vm shell py rm rf home mikecali ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args datacenter alabs wellington hostname password value specified in no log parameter username administrator vsphere local validate certs true vm id rh vm id type vm name vm password value specified in no log parameter vm shell dnz support ngito reconfigure network sh vm shell args reconfigure networking log vm shell cwd tmp vm shell env reconfigure ipaddr reconfigure netmask reconfigure gw reconfigure name reconfigure device reconfigure onboot yes reconfigure type ethernet reconfigure bootproto none reboot on reconfigure true vm username rhel module name vmware vm shell msg the guest operations agent could not be contacted to retry use limit home mikecali dao infra playbooks vcenter guest create pysphere retry play recap localhost ok changed unreachable failed | 1 |
809 | 4,425,974,233 | IssuesEvent | 2016-08-16 16:55:35 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | PHP Cheat Sheet: missing functions | Maintainer Input Requested PR Received | Some feedback from a DDG user:
> no often used builtin php functions mentioned here
What else can be added to make this more useful?
------
IA Page: http://duck.co/ia/view/php_cheat_sheet
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sahildua2305 | True | PHP Cheat Sheet: missing functions - Some feedback from a DDG user:
> no often used builtin php functions mentioned here
What else can be added to make this more useful?
------
IA Page: http://duck.co/ia/view/php_cheat_sheet
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sahildua2305 | main | php cheat sheet missing functions some feedback from a ddg user no often used builtin php functions mentioned here what else can be added to make this more useful ia page | 1 |
5,815 | 30,792,230,055 | IssuesEvent | 2023-07-31 17:01:44 | jupyter-naas/awesome-notebooks | https://api.github.com/repos/jupyter-naas/awesome-notebooks | closed | Wise - List transfers for a profile | templates maintainer | This notebook lists all the transfers for a given profile using the Wise API. It is useful for organizations to keep track of their transfers.
| True | Wise - List transfers for a profile - This notebook lists all the transfers for a given profile using the Wise API. It is useful for organizations to keep track of their transfers.
| main | wise list transfers for a profile this notebook lists all the transfers for a given profile using the wise api it is usefull for organizations to keep track of their transfers | 1 |
261,538 | 8,237,130,011 | IssuesEvent | 2018-09-10 00:41:39 | PerfectWeek/web-api | https://api.github.com/repos/PerfectWeek/web-api | closed | Change loggedOnly responses code | Priority: Medium Status: In Progress Type: Maintenance | Change response code when the given token is invalid from `400` to `401` | 1.0 | Change loggedOnly responses code - Change response code when the given token is invalid from `400` to `401` | non_main | change loggedonly responses code change response code when the given token is invalid from to | 0 |
1,968 | 6,694,169,191 | IssuesEvent | 2017-10-10 00:03:43 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Forecast: Add support for the hourly forecast | Maintainer Approved Suggestion | It would be very beneficial to support viewing of an hourly forecast instead of the generic weather IA. I typed [hourly forecast green bay wi](https://duckduckgo.com/?q=hourly+forecast+green+bay+wi) (the city that I live in) and I got a generic weather IA.
---
IA Page: http://duck.co/ia/view/forecast
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @himanshu0113
| True | Forecast: Add support for the hourly forecast - It would be very beneficial to support viewing of an hourly forecast instead of the generic weather IA. I typed [hourly forecast green bay wi](https://duckduckgo.com/?q=hourly+forecast+green+bay+wi) (the city that I live in) and I got a generic weather IA.
---
IA Page: http://duck.co/ia/view/forecast
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @himanshu0113
| main | forecast add support for the hourly forecast it would be very beneficial to support viewing of an hourly forecast instead of the generic weather ia i typed the city that i live in and i got a generic weather ia ia page | 1 |
1,227 | 5,219,572,367 | IssuesEvent | 2017-01-26 19:27:27 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | add purge_routes option to ec2_vpc_route_table | affects_2.0 aws bug_report cloud feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
- Feature Idea
##### Plugin Name:
ec2_vpc_route_table
##### Ansible Version:
```
ansible 2.0.0.2
```
##### Ansible Configuration:
No tweaks
##### Environment:
Ubuntu 15.10
##### Summary:
From #1511:
Could we allow for routes not to get deleted, if asked?
ec2_group has options purge_rules and purge_rules_egress, which default to true. If you set them to false, the module won't delete existing rules that are not found in the fields.
Use case for this: you create a new route table for Kubernetes. Kubernetes will then add a new route for each node's own subnet as they come online. You try to rerun your playbook and the nodes become unreachable because Ansible has deleted the routes (Kubernetes will still recreate them, but it might take ten seconds). Having a `purge_routes` that I could set to false would help.
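A sketch of how the proposed flag could be used; `purge_routes` is hypothetical here (it did not exist at the time of this report) and the IDs are placeholders:
```
- name: Manage the Kubernetes route table without purging node-managed routes
  ec2_vpc_route_table:
    vpc_id: vpc-0123456789abcdef0
    region: us-east-1
    purge_routes: false  # hypothetical flag mirroring ec2_group's purge_rules
    routes:
      - dest: 0.0.0.0/0
        gateway_id: igw-0123456789abcdef0
```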
| True | add purge_routes option to ec2_vpc_route_table - ##### ISSUE TYPE
- Bug Report
- Feature Idea
##### Plugin Name:
ec2_vpc_route_table
##### Ansible Version:
```
ansible 2.0.0.2
```
##### Ansible Configuration:
No tweaks
##### Environment:
Ubuntu 15.10
##### Summary:
From #1511:
Could we allow for routes not to get deleted, if asked?
ec2_group has options purge_rules and purge_rules_egress, which default to true. If you set them to false, the module won't delete existing rules that are not found in the fields.
Use case for this: you create a new route table for Kubernetes. Kubernetes will then add a new route for each node's own subnet as they come online. You try to rerun your playbook and the nodes become unreachable because Ansible has deleted the routes (Kubernetes will still recreate them, but it might take ten seconds). Having a `purge_routes` that I could set to false would help.
| main | add purge routes option to vpc route table issue type bug report feature idea plugin name vpc route table ansible version ansible ansible configuration no tweaks environment ubuntu summary from could we allow for routes not to get deleted if asked group has options purge rules and purge rules egress which default to true if you set them to false the module won t delete existing rules that are not found in the fields use case for this you create a new route table for kubernetes kubernetes will then add a new route for each node s own subnet as they come online you try to rerun your playbook and the nodes become unreachable because ansible has deleted the routes kubernetes will still recreate them but it might take ten seconds having a purge routes that i could set to false would help | 1 |
2,509 | 8,655,459,870 | IssuesEvent | 2018-11-27 16:00:31 | codestation/qcma | https://api.github.com/repos/codestation/qcma | closed | systray doesn't seem to show up on debian | unmaintained | I know appindicator is no longer built, but for some reason it doesn't show up: I get notifications, but the icon isn't shown. I do have other Qt-based apps that I guess are using systray, and their icons are shown easily.
I am on debian and the startup seems normal.
Starting Qcma 0.4.1
PTP: Opening session
Total entries added to the database: 429
Vita connected, id: 4471254501417XXXX
I am curious whether it saying PTP is the reason behind the I/O being ~4MB/s with slight peaks at ~5MB/s and random stops and starts.
Fwiw My vita is on 3.51
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
Package: qcma
Status: install ok installed
Priority: extra
Section: utils
Installed-Size: 595
Maintainer: codestation
Architecture: amd64
Version: 0.4.1
I should also say that without the option to get to settings, I can't seem to update my Vita. I kept putting off updating to 3.6 as I can't live without pspkvm (rarely play it, but still need it).
I tried to update to 3.6 a little while after HENkaku came out and it would've worked, I guess, but I decided to hold off for a while, and now it won't work for some reason, so I don't know if it's a weird libraries conflict or something. | True | systray doesn't seem to show up on debian - I know appindicator is no longer built, but for some reason it doesn't show up: I get notifications, but the icon isn't shown. I do have other Qt-based apps that I guess are using systray, and their icons are shown easily.
I am on debian and the startup seems normal.
Starting Qcma 0.4.1
PTP: Opening session
Total entries added to the database: 429
Vita connected, id: 4471254501417XXXX
I am curious whether it saying PTP is the reason behind the I/O being ~4MB/s with slight peaks at ~5MB/s and random stops and starts.
Fwiw My vita is on 3.51
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
Package: qcma
Status: install ok installed
Priority: extra
Section: utils
Installed-Size: 595
Maintainer: codestation
Architecture: amd64
Version: 0.4.1
I should also say that without the option to get to settings, I can't seem to update my Vita. I kept putting off updating to 3.6 as I can't live without pspkvm (rarely play it, but still need it).
I tried to update to 3.6 a little while after HENkaku came out and it would've worked, I guess, but I decided to hold off for a while, and now it won't work for some reason, so I don't know if it's a weird libraries conflict or something. | main | systray doesn t seem to show up on debian i know appindicator is no longre built but for some reason it doesn t show up i get notifications but the icon isn t shown i do have other qt based apps that i guess are using systray and their icons are shown easily i am on debian and the startup seems normal starting qcma ptp opening session total entries added to the database vita connected id i am curious if it saying ptp i the reason behind the io being s with slight peaks at s with random stops and starts fwiw my vita is on distributor id debian description debian gnu linux jessie release codename jessie package qcma status install ok installed priority extra section utils installed size maintainer codestation architecture version i should also say that the thing without the option to get to settings i cant seem to updat emy vita i kept putting off updating to as i cant live without pspkvm rarely play it but still need it i tried to update to a ltitle while after henaku came out and it would ve worked i guess but i decided to hold off for awhile and now it won t work for some reason so i don t know if it s a weird libraries conflict or something | 1
78,306 | 15,569,961,332 | IssuesEvent | 2021-03-17 01:23:57 | jrrk/riscv-linux | https://api.github.com/repos/jrrk/riscv-linux | opened | CVE-2019-14898 (High) detected in linux-amlogicv4.18, aspeedaspeed-4.19-devicetree-no-fsi | security vulnerability | ## CVE-2019-14898 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-amlogicv4.18</b>, <b>aspeedaspeed-4.19-devicetree-no-fsi</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The fix for CVE-2019-11599, affecting the Linux kernel before 5.0.10 was not complete. A local user could use this flaw to obtain sensitive information, cause a denial of service, or possibly have other unspecified impacts by triggering a race condition with mmget_not_zero or get_task_mm calls.
<p>Publish Date: 2020-05-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14898>CVE-2019-14898</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12637">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12637</a></p>
<p>Release Date: 2020-05-08</p>
<p>Fix Resolution: v5.1-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-14898 (High) detected in linux-amlogicv4.18, aspeedaspeed-4.19-devicetree-no-fsi - ## CVE-2019-14898 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-amlogicv4.18</b>, <b>aspeedaspeed-4.19-devicetree-no-fsi</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The fix for CVE-2019-11599, affecting the Linux kernel before 5.0.10 was not complete. A local user could use this flaw to obtain sensitive information, cause a denial of service, or possibly have other unspecified impacts by triggering a race condition with mmget_not_zero or get_task_mm calls.
<p>Publish Date: 2020-05-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14898>CVE-2019-14898</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12637">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12637</a></p>
<p>Release Date: 2020-05-08</p>
<p>Fix Resolution: v5.1-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in linux aspeedaspeed devicetree no fsi cve high severity vulnerability vulnerable libraries linux aspeedaspeed devicetree no fsi vulnerability details the fix for cve affecting the linux kernel before was not complete a local user could use this flaw to obtain sensitive information cause a denial of service or possibly have other unspecified impacts by triggering a race condition with mmget not zero or get task mm calls publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
161,348 | 12,541,404,905 | IssuesEvent | 2020-06-05 12:17:10 | dasch-swiss/knora-app | https://api.github.com/repos/dasch-swiss/knora-app | closed | Search list results : why not allowing a right-click on a result to open it in a new window | enhancement user-testing | **Describe the bug**
Spontaneously, I would like to be able to right-click on a search result to open it in a new window or tab.
**To Reproduce** Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**OPTIONAL: Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem (drag-and-drop the image directly here).
**Desktop (please complete the following information):**
- OS: macOS 10.14.6 (18G103), French
- Browser Firefox
- Version 70.01.1
**Additional context**
Add any other context about the problem here.
| 1.0 | Search list results : why not allowing a right-click on a result to open it in a new window - **Describe the bug**
Spontaneously, I would like to be able to right-click on a search result to open it in a new window or tab.
**To Reproduce** Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**OPTIONAL: Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem (drag-and-drop the image directly here).
**Desktop (please complete the following information):**
- OS: macOS 10.14.6 (18G103), French
- Browser Firefox
- Version 70.01.1
**Additional context**
Add any other context about the problem here.
| non_main | search list results why not allowing a right click on a result to open it in a new window describe the bug spontaneously i would like to be able to right click on a search result to open it in a new window or tab to reproduce steps to reproduce the behavior go to click on scroll down to see error optional expected behavior a clear and concise description of what you expected to happen screenshots if applicable add screenshots to help explain your problem drag and drop the image directly here desktop please complete the following information os macos french browser firefox version additional context add any other context about the problem here | 0 |
2,596 | 8,823,629,400 | IssuesEvent | 2019-01-02 14:21:32 | citrusframework/citrus | https://api.github.com/repos/citrusframework/citrus | closed | Breaking change in waitFor().file(String) | Prio: High Type: Maintainance | **Citrus Version**
>= 2.7.7
**Description**
If you upgrade your Citrus version to 2.7.7 or higher, you will hit a breaking change in the file wait builder API. We'll correct this in one of the future releases to ensure effortless version upgrades.
**API before change**
```java
waitFor().file("/path/to/file");
```
**API after change**
```java
waitFor().file().path("/path/to/file");
```
**Additional information**
* Issue:#417
* Commit: https://github.com/citrusframework/citrus/commit/515e840f9133383d19304916db197ce5fdb9ac83#diff-f106d4946b18253678933a5267aa2540L122
BR,
Sven | True | Breaking change in waitFor().file(String) - **Citrus Version**
>= 2.7.7
**Description**
If you upgrade your Citrus version to 2.7.7 or higher, you will hit a breaking change in the file wait builder API. We'll correct this in one of the future releases to ensure effortless version upgrades.
**API before change**
```java
waitFor().file("/path/to/file");
```
**API after change**
```java
waitFor().file().path("/path/to/file");
```
**Additional information**
* Issue:#417
* Commit: https://github.com/citrusframework/citrus/commit/515e840f9133383d19304916db197ce5fdb9ac83#diff-f106d4946b18253678933a5267aa2540L122
BR,
Sven | main | breaking change in waitfor file string citrus version description if you upgrade your citrus version to or higher we ve a breaking change in the file wait builder api we ll correct this with one of the future releases to ensure effortless version upgrades api before change java waitfor file path to file api after change java waitfor file path path to file additional information issue commit br sven | 1 |
3,821 | 16,618,465,906 | IssuesEvent | 2021-06-02 20:06:54 | microsoft/DirectXTK12 | https://api.github.com/repos/microsoft/DirectXTK12 | closed | Shader Model 6 usage by default | maintainence | The FXC.EXE compiler is considered legacy, and for Direct3D 12 programs the recommendation is to use the new DXC.EXE (DXIL) compiler toolset instead.
The ``CompileShaders.cmd`` script has supported this for a while with the argument ``dxil``, but the projects continue to build the Shader Model 5.1 shader set by default.
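For reference, the DXIL build is selected as described above:
```
CompileShaders.cmd dxil
```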
The DirectXTK12 projects should use Shader Model 6 / DXC by default.
> Note that I already updated the directx-vs-templates back in [May 2021](https://github.com/walbourn/directx-vs-templates/releases/tag/may2021) to validate Shader Model 6 support on the device. | True | Shader Model 6 usage by default - The FXC.EXE compiler is considered legacy, and for Direct3D 12 programs the recommendation is to use the new DXC.EXE (DXIL) compiler toolset instead.
The ``CompileShaders.cmd`` script has supported this for a while with the argument ``dxil``, but the projects continue to build the Shader Model 5.1 shader set by default.
The DirectXTK12 projects should use Shader Model 6 / DXC by default.
> Note that I already updated the directx-vs-templates back in [May 2021](https://github.com/walbourn/directx-vs-templates/releases/tag/may2021) to validate Shader Model 6 support on the device. | main | shader model usage by default the fxc exe compiler is considered legacy and for programs the recommendation is to use the new dxc exe dxil compiler toolset instead the compileshaders cmd script has supported this for a while with the argument dxil but the projects continue to build the shader model shader set by default the projects should use shader model dxc by default note that i already updated the directx vs templates back in to validate shader model support on the device | 1 |
4,184 | 20,239,299,897 | IssuesEvent | 2022-02-14 07:32:25 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | opened | BLD, MAINT: Add `packaging.version` to the repo | maintainability | Following the discussion in https://github.com/MDAnalysis/mdanalysis/pull/3527#issuecomment-1038494370 we should provide `packaging.version` to the repo to avoid an issue with dependencies.
The file to vendor can be found here: https://github.com/pypa/packaging/blob/main/packaging/version.py
From the top of my head, here are the steps to follow:
* [ ] Copy the file to the repo
* [ ] Adapt the file's license header to tell about its origin
* [ ] Adapt our LICENSE file to mention `packaging`
* [ ] Make sure we use our version of the file in `setup.py`
* [ ] Make sure we use our version of the file everywhere else (including https://github.com/MDAnalysis/mdanalysis/blob/6b3a1603b8890ec4f537cc2a26f89430665f656f/package/MDAnalysis/coordinates/chemfiles.py#L84)
This issue is a follow up to #3527 and is related to @tylerjereddy's #3526 | True | BLD, MAINT: Add `packaging.version` to the repo - Following the discussion in https://github.com/MDAnalysis/mdanalysis/pull/3527#issuecomment-1038494370 we should provide `packaging.version` to the repo to avoid an issue with dependencies.
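Once the file is vendored, call sites could switch to the in-tree copy with a fallback; a minimal sketch, where the vendored module path is an assumption:
```
# Hypothetical import shim: prefer the vendored copy, fall back to the
# installed 'packaging' distribution when present.
try:
    from MDAnalysis.lib._vendor.version import Version  # assumed vendored path
except ImportError:
    from packaging.version import Version

assert Version("2.1.0") > Version("2.0.0")
```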
The file to vendor can be found here: https://github.com/pypa/packaging/blob/main/packaging/version.py
From the top of my head, here are the steps to follow:
* [ ] Copy the file to the repo
* [ ] Adapt the file's license header to tell about its origin
* [ ] Adapt our LICENSE file to mention `packaging`
* [ ] Make sure we use our version of the file in `setup.py`
* [ ] Make sure we use our version of the file everywhere else (including https://github.com/MDAnalysis/mdanalysis/blob/6b3a1603b8890ec4f537cc2a26f89430665f656f/package/MDAnalysis/coordinates/chemfiles.py#L84)
This issue is a follow up to #3527 and is related to @tylerjereddy's #3526 | main | bld maint add packaging version to the repo following the discussion in we should provide packaging version to the repo to avoid an issue with dependencies the file to vendor can be found here from the top of my head here are the steps to follow copy the file to the repo adapt the file s license header to tell about its origin adapt our license file to mention packaging make sure we use our version of the file in setup py make sure we use our version of the file everywhere else including this issue is a follow up to and is related to tylerjereddy s | 1 |
4,995 | 25,708,689,778 | IssuesEvent | 2022-12-07 03:58:18 | aws/serverless-application-model | https://api.github.com/repos/aws/serverless-application-model | closed | Deployment, and rollback, fails when removing ProvisionedConcurrency configuration from Function | type/bug stage/bug-repro maintainer/need-response | If you remove the `ProvisionedConcurrencyConfig` section from an existing `AWS::Serverless::Function` resource, then deployment fails with the message:
"Alias with weights can not be used with Provisioned Concurrency (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: ...)"
What's especially nasty about this is that the CloudFormation rollback then fails, leaving the stack in UPDATE_ROLLBACK_FAILED state, and `sam deploy` never finishes.
I'm not impacted by this in production - I'm only experimenting - but I thought you all might want to know.
For reference my initial SAM resource was:
```
HelloWorldLambda:
Type: AWS::Serverless::Function
Properties:
Runtime: java8
MemorySize: 512
Handler: book.HelloWorld::handler
CodeUri: target/lambda.jar
AutoPublishAlias: live
ProvisionedConcurrencyConfig:
ProvisionedConcurrentExecutions: 100
```
And I then removed the `ProvisionedConcurrencyConfig` (and child `ProvisionedConcurrentExecutions`) property.
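For clarity, the resource after that removal (the state that triggers the error) reads:
```
HelloWorldLambda:
  Type: AWS::Serverless::Function
  Properties:
    Runtime: java8
    MemorySize: 512
    Handler: book.HelloWorld::handler
    CodeUri: target/lambda.jar
    AutoPublishAlias: live  # alias kept; only the provisioned concurrency block is gone
```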
Mike | True | Deployment , and rollback, fails when removing ProvisionedConcurrency configuration from Function - If you remove the `ProvisionedConcurrencyConfig` section from an existing `AWS::Serverless::Function` resource then deployment fails with the message:
"Alias with weights can not be used with Provisioned Concurrency (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: ...)"
What's especially nasty about this is that the CloudFormation rollback then fails, leaving the stack in UPDATE_ROLLBACK_FAILED state, and `sam deploy` never finishes.
I'm not impacted by this in production - I'm only experimenting - but I thought you all might want to know.
For reference my initial SAM resource was:
```
HelloWorldLambda:
Type: AWS::Serverless::Function
Properties:
Runtime: java8
MemorySize: 512
Handler: book.HelloWorld::handler
CodeUri: target/lambda.jar
AutoPublishAlias: live
ProvisionedConcurrencyConfig:
ProvisionedConcurrentExecutions: 100
```
And I then removed the `ProvisionedConcurrencyConfig` (and child `ProvisionedConcurrentExecutions`) property.
Mike | main | deployment and rollback fails when removing provisionedconcurrency configuration from function if you remove the provisionedconcurrencyconfig section from an existing aws serverless function resource then deployment fails with the message alias with weights can not be used with provisioned concurrency service awslambdainternal status code error code invalidparametervalueexception request id what s especially nasty about this is that the cloudformation rollback then fails leaving the stack in update rollback failed state and sam deploy never finishes i m not impacted by this in production i m only experimenting but i thought you all might want to know for reference my initial sam resource was helloworldlambda type aws serverless function properties runtime memorysize handler book helloworld handler codeuri target lambda jar autopublishalias live provisionedconcurrencyconfig provisionedconcurrentexecutions and i then removed the provisionedconcurrencyconfig and child provisionedconcurrentexecutions property mike | 1 |
1,648 | 6,572,678,650 | IssuesEvent | 2017-09-11 04:20:34 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | ZFS module idempotence problems with received dataset properties | affects_2.2 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zfs
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
```
##### CONFIGURATION
```
hash_behavior = merge
force_handlers = True
pipelining = True
control_path = %(directory)s/%%p-%%h-%%r
```
##### OS / ENVIRONMENT
Managing from macOS 10.12
Managing Gentoo Linux hosts
##### SUMMARY
Currently, the ZFS module will only consider `local` settings while comparing the current state of a ZFS filesystem and its intended target state. This can lead to unwanted `set` operations to occur even when the target filesystem is in the state described in the play, hurting idempotence.
A good example of this is with the "mountpoint" property. If ZFS dataset `B` exists as a child of dataset `A`, but dataset `A` was previously restored from a backup with `zfs receive`, therefore having most of its properties not flagged as `local`, but as `received` instead (including `mountpoint`), a run of the ZFS module on `A` will attempt to first unmount `B` so that it can perform a `mountpoint` property change on `A` even though `A` already has its `mountpoint` property set to the correct value.
In some cases the above will fail (for example if a file is open in dataset `B` so that `B` can't be unmounted), but in other cases this will still yield an unwanted side effect where an unmount will be forced to occur even when it shouldn't.
I'm not sure why only `local` properties are being considered at present when comparing current dataset state and target dataset state. Unless I'm missing something, I believe a valid fix could be to simply delete the following line: https://github.com/ansible/ansible-modules-extras/blob/9760ec2538f8b44cb7f27924617a8e024a694724/system/zfs.py#L198
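The distinction is easy to see from the command line (pool/dataset names are placeholders):
```
# After a 'zfs receive', the property source is 'received', not 'local':
zfs get -s local mountpoint tank/A           # prints nothing
zfs get -s local,received mountpoint tank/A  # shows the restored mountpoint
```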
##### STEPS TO REPRODUCE
1. `zfs receive` any dataset containing children in place of one normally managed by an ansible play
2. run something on the server making use of a file in the one of the children datasets and keeping it open
3. run the play containing the zfs module call that tries to ensure zfs dataset properties are set correctly
##### EXPECTED RESULTS
no `zfs set` command should be issued for dataset properties which are already set to the correct value
##### ACTUAL RESULTS
`zfs set` commands are issued on the managed host to set properties to a value they are already set at, in some cases causing unwanted side effects on the managed dataset and its children.
<!--- Paste verbatim command output between quotes below -->
```
Using module file /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/extras/system/zfs.py
<xxx.yyy.zzz> ESTABLISH SSH CONNECTION FOR USER: root
<xxx.yyy.zzz> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/guillaume/.ansible/cp/%p-%h-%r xxx.yyy.zzz '/bin/sh -c '"'"'/usr/bin/python2 && sleep 0'"'"''
fatal: [xxx.yyy.zzz]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"atime": "off",
"canmount": "on",
"casesensitivity": "sensitive",
"checksum": "fletcher4",
"compression": "lz4",
"copies": "1",
"createparent": null,
"devices": "off",
"exec": "off",
"logbias": "latency",
"mountpoint": "/var/lib/mysql",
"nbmand": "off",
"normalization": "formD",
"primarycache": "metadata",
"quota": "none",
"readonly": "off",
"recordsize": "16K",
"refquota": "none",
"refreservation": "none",
"reservation": "none",
"secondarycache": "none",
"setuid": "off",
"sharenfs": "off",
"sharesmb": "off",
"snapdir": "hidden",
"utf8only": "on",
"xattr": "off"
},
"module_name": "zfs"
},
"msg": "umount: /var/lib/mysql/relay_log: target is busy\n (In some cases useful info about processes that\n use the device is found by lsof(8) or fuser(1).)\ncannot unmount '/var/lib/mysql/relay_log': umount failed\n"
}
```
| True | ZFS module idempotence problems with received dataset properties - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zfs
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
```
##### CONFIGURATION
```
hash_behavior = merge
force_handlers = True
pipelining = True
control_path = %(directory)s/%%p-%%h-%%r
```
##### OS / ENVIRONMENT
Managing from macOS 10.12
Managing Gentoo Linux hosts
##### SUMMARY
Currently, the ZFS module will only consider `local` settings while comparing the current state of a ZFS filesystem and its intended target state. This can lead to unwanted `set` operations to occur even when the target filesystem is in the state described in the play, hurting idempotence.
A good example of this is with the "mountpoint" property. If ZFS dataset `B` exists as a child of dataset `A`, but dataset `A` was previously restored from a backup with `zfs receive`, therefore having most of its properties not flagged as `local`, but as `received` instead (including `mountpoint`), a run of the ZFS module on `A` will attempt to first unmount `B` so that it can perform a `mountpoint` property change on `A` even though `A` already has its `mountpoint` property set to the correct value.
In some cases the above will fail (for example if a file is open in dataset `B` so that `B` can't be unmounted), but in other cases this will still yield an unwanted side effect where an unmount will be forced to occur even when it shouldn't.
I'm not sure why only `local` properties are being considered at present when comparing current dataset state and target dataset state. Unless I'm missing something, I believe a valid fix could be to simply delete the following line: https://github.com/ansible/ansible-modules-extras/blob/9760ec2538f8b44cb7f27924617a8e024a694724/system/zfs.py#L198
##### STEPS TO REPRODUCE
1. `zfs receive` any dataset containing children in place of one normally managed by an ansible play
2. run something on the server making use of a file in the one of the children datasets and keeping it open
3. run the play containing the zfs module call that tries to ensure zfs dataset properties are set correctly
##### EXPECTED RESULTS
no `zfs set` command should be issued for dataset properties which are already set to the correct value
##### ACTUAL RESULTS
`zfs set` commands are issued on the managed host to set properties to a value they are already set at, in some cases causing unwanted side effects on the managed dataset and its children.
<!--- Paste verbatim command output between quotes below -->
```
Using module file /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/extras/system/zfs.py
<xxx.yyy.zzz> ESTABLISH SSH CONNECTION FOR USER: root
<xxx.yyy.zzz> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/guillaume/.ansible/cp/%p-%h-%r xxx.yyy.zzz '/bin/sh -c '"'"'/usr/bin/python2 && sleep 0'"'"''
fatal: [xxx.yyy.zzz]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"atime": "off",
"canmount": "on",
"casesensitivity": "sensitive",
"checksum": "fletcher4",
"compression": "lz4",
"copies": "1",
"createparent": null,
"devices": "off",
"exec": "off",
"logbias": "latency",
"mountpoint": "/var/lib/mysql",
"nbmand": "off",
"normalization": "formD",
"primarycache": "metadata",
"quota": "none",
"readonly": "off",
"recordsize": "16K",
"refquota": "none",
"refreservation": "none",
"reservation": "none",
"secondarycache": "none",
"setuid": "off",
"sharenfs": "off",
"sharesmb": "off",
"snapdir": "hidden",
"utf8only": "on",
"xattr": "off"
},
"module_name": "zfs"
},
"msg": "umount: /var/lib/mysql/relay_log: target is busy\n (In some cases useful info about processes that\n use the device is found by lsof(8) or fuser(1).)\ncannot unmount '/var/lib/mysql/relay_log': umount failed\n"
}
```
| main | zfs module idempotence problems with received dataset properties issue type bug report component name zfs ansible version ansible configuration hash behavior merge force handlers true pipelining true control path directory s p h r os environment managing from macos managing gentoo linux hosts summary currently the zfs module will only consider local settings while comparing the current state of a zfs filesystem and its intended target state this can lead to unwanted set operations to occur even when the target filesystem is in the state described in the play hurting idempotence a good example of this is with the mountpoint property if zfs dataset b exists as a child of dataset a but dataset a was previously restored from a backup with zfs receive therefore having most of its properties not flagged as local but as received instead including mountpoint a run of the zfs module on a will attempt to first unmount b so that it can perform a mountpoint property change on a even though a already has its mountpoint property set to the correct value in some cases the above will fail for example if a file is open in dataset b so that b can t be unmounted but in other cases this will still yield an unwanted side effect where an unmount will be forced to occur even when it shouldn t i m not sure why only local properties are being considered at present when comparing current dataset state and target dataset state unless i m missing something i believe a valid fix could be to simply delete the following line steps to reproduce zfs receive any dataset containing children in place of one normally managed by an ansible play run something on the server making use of a file in the one of the children datasets and keeping it open run the play containing the zfs module call that tries to ensure zfs dataset properties are set correctly expected results no zfs set command should be issued for dataset properties which are already set to the correct value actual results zfs set commands are issued on the managed host to set properties to a value they are already set at in some cases causing unwanted side effects on the managed dataset and its children using module file opt local library frameworks python framework versions lib site packages ansible modules extras system zfs py establish ssh connection for user root ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath users guillaume ansible cp p h r xxx yyy zzz bin sh c usr bin sleep fatal failed changed false failed true invocation module args atime off canmount on casesensitivity sensitive checksum compression copies createparent null devices off exec off logbias latency mountpoint var lib mysql nbmand off normalization formd primarycache metadata quota none readonly off recordsize refquota none refreservation none reservation none secondarycache none setuid off sharenfs off sharesmb off snapdir hidden on xattr off module name zfs msg umount var lib mysql relay log target is busy n in some cases useful info about processes that n use the device is found by lsof or fuser ncannot unmount var lib mysql relay log umount failed n | 1 |
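The fix proposed in the ZFS record above — comparing desired properties against the current ones regardless of whether their source is `local` or `received` — can be sketched in module-style Python. This is a minimal illustration under stated assumptions, not the actual `zfs.py` code: the `zfs get` invocation and its tab-separated `-H` output are standard CLI behaviour, but the helper names are hypothetical.
```python
import subprocess

def current_properties(dataset):
    """Collect dataset properties via `zfs get`, keeping both 'local'
    and 'received' sources (the proposed fix) instead of filtering the
    comparison down to 'local' only."""
    out = subprocess.check_output(
        ['zfs', 'get', '-H', '-o', 'property,value,source', 'all', dataset],
        text=True)
    props = {}
    for line in out.splitlines():
        prop, value, source = line.split('\t')
        if source in ('local', 'received'):  # was effectively: source == 'local'
            props[prop] = value
    return props

def properties_to_set(dataset, desired):
    """Return only the (property, value) pairs that actually differ, so
    no redundant `zfs set` (and no forced unmount) is issued."""
    current = current_properties(dataset)
    return [(p, v) for p, v in desired.items() if current.get(p) != v]
```
With this comparison in place, a dataset whose `mountpoint` was set via `zfs receive` would compare equal and produce no `set` call, which is exactly the idempotence the report asks for.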
1,842 | 6,577,379,308 | IssuesEvent | 2017-09-12 00:30:09 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | iam.py - existing user not added to existing group | affects_1.9 aws bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
modules/core/cloud/amazon/iam.py
##### ANSIBLE VERSION
```
ansible 1.9.4
```
##### CONFIGURATION
Default.
##### OS / ENVIRONMENT
CentOS Linux release 7.2.1511 (Core)
##### SUMMARY
The IAM user is not added to the IAM group.
##### STEPS TO REPRODUCE
```
- name: Create IAM user
iam:
iam_type: user
name: proj_user
path: '/'
state: present
- name: Add IAM user to IAM groups
iam:
iam_type: user
name: proj_user
path: '/'
state: update
groups: TestGroup
```
Followed the example given here http://docs.ansible.com/ansible/iam_module.html
##### EXPECTED RESULTS
The proj_user should become a member of TestGroup.
##### ACTUAL RESULTS
TestGroup is empty, but the module returns a status of changed, which is not true.
Sequential re-runs do not change the situation.
```
TASK: [Add IAM user to IAM groups] *********************************
changed: [127.0.0.1] => {"changed": true, "groups": ["TestGroup"], "keys": {}, "user_name": "proj_user"}
```
| True | iam.py - existing user not added to existing group - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
modules/core/cloud/amazon/iam.py
##### ANSIBLE VERSION
```
ansible 1.9.4
```
##### CONFIGURATION
Default.
##### OS / ENVIRONMENT
CentOS Linux release 7.2.1511 (Core)
##### SUMMARY
The IAM user is not added to the IAM group.
##### STEPS TO REPRODUCE
```
- name: Create IAM user
iam:
iam_type: user
name: proj_user
path: '/'
state: present
- name: Add IAM user to IAM groups
iam:
iam_type: user
name: proj_user
path: '/'
state: update
groups: TestGroup
```
Followed the example given here http://docs.ansible.com/ansible/iam_module.html
##### EXPECTED RESULTS
The proj_user should become a member of TestGroup.
##### ACTUAL RESULTS
TestGroup is empty, but the module returns a status of changed, which is not true.
Sequential re-runs do not change the situation.
```
TASK: [Add IAM user to IAM groups] *********************************
changed: [127.0.0.1] => {"changed": true, "groups": ["TestGroup"], "keys": {}, "user_name": "proj_user"}
```
| main | iam py existing user not added to existing group issue type bug report component name modules core cloud amazon iam py ansible version ansible configuration default os environment centos linux release core summary the iam user is not added to the iam group steps to reproduce name create iam user iam iam type user name proj user path state present name add iam user to iam groups iam iam type user name proj user path state update groups testgroup followed the example given here expected results the proj user should become a member of testgroup actual results test group is empty but module returns status as changed which is not true sequential re runs do not change the situation task changed changed true groups keys user name proj user | 1 |
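For the group-membership problem in the record above, the idempotent shape of the operation can be sketched with boto3. This is a hedged illustration only — the Ansible 1.9-era `iam.py` module was built on the older boto library, so the calls below are modern equivalents, not the module's actual code.
```python
import boto3

iam = boto3.client('iam')

def ensure_user_in_group(user_name, group_name):
    """Add the user to the group only when not already a member,
    so the returned 'changed' flag is accurate."""
    groups = iam.list_groups_for_user(UserName=user_name)['Groups']
    if any(g['GroupName'] == group_name for g in groups):
        return False  # already a member: report ok, not changed
    iam.add_user_to_group(GroupName=group_name, UserName=user_name)
    return True
```
Re-checking membership after the call, rather than assuming success, would also have surfaced the silent failure described in the report.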
8,654 | 7,545,328,935 | IssuesEvent | 2018-04-17 21:15:36 | cmu-mars/issues | https://api.github.com/repos/cmu-mars/issues | closed | which features are hidden? | CP1 infrastructure | in the first cut at https://github.com/cmu-mars/marspolyparser, at least two things were in the XML that shouldn't be part of the hidden content. so if we're thinking of the XML as specifying just the hidden components, that's a mismatch:
* budget and queries -- how much each query costs, and how much budget we have, but this may be simplified into just a unit cost which we always burn ASAP
* amount of gaussian noise
these may well need to be REST endpoints in the eventual version of https://github.com/cmu-mars/brasscomms for CP1 | 1.0 | which features are hidden? - in the first cut at https://github.com/cmu-mars/marspolyparser, at least two things were in the XML that shouldn't be part of the hidden content. so if we're thinking of the XML as specifying just the hidden components, that's a mismatch:
* budget and queries -- how much each query costs, and how much budget we have, but this may be simplified into just a unit cost which we always burn ASAP
* amount of gaussian noise
these may well need to be REST endpoints in the eventual version of https://github.com/cmu-mars/brasscomms for CP1 | non_main | which features are hidden in the first cut at at least two things were in the xml that shouldn t be part of the hidden content so if we re thinking of the xml as specifying just the hidden components that s a mismatch budget and queries how much each query costs and how much budget we have but this may be simplified into just a unit cost which we re always burn asap amount of gaussian noise these may well need to be rest endpoints in the eventual version of for | 0 |
330,778 | 24,277,327,729 | IssuesEvent | 2022-09-28 14:43:01 | DMIT-2018/dmit-2018-sep-2022-a02-workbook-lavishbhardwaj | https://api.github.com/repos/DMIT-2018/dmit-2018-sep-2022-a02-workbook-lavishbhardwaj | opened | General Planning Implementation tasks of Managing Play List in Chinook | documentation | This task list area will be completed once the implementation plan has been outlined. This area is where one creates the task list that is associated with the milestone. The tasks outlined in this area are the tasks that are counted for the milestone. | 1.0 | General Planning Implementation tasks of Managing Play List in Chinook - This task list area will be completed once the implementation plan has been outlined. This area is where one creates the task list that is associated with the milestone. The tasks outlined in this area are the tasks that are counted for the milestone. | non_main | general planning implementation tasks of managing play list in chinook this task list area will be completed once the implementation plan has been outlined this area is where one creates the task list that is associated with the milestone the tasks outlined in this area are the tasks that are counted for the milestone | 0 |
1,024 | 4,818,540,308 | IssuesEvent | 2016-11-04 16:38:17 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Add update_password option to os_user module | affects_2.1 cloud feature_idea waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
os_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
$ ansible --version
ansible 2.1.2.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
The `os_user` module with a password specified for a user will always report 'changed'.
The conclusion of the bug report in #5183 was that in order to "fix" this we need to add another parameter like the one in the `user` module.
I.e. a parameter called `update_password` that has options `on_create` or `always`.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
```
- name: "Create test user"
os_user:
name: test
state: present
password: very-secret
default_project: a-existing-project
update_password: on_create
```
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
On first run, the user would be created and the password set.
On the second run, given that nothing changed, the task would say `ok`.
If the parameter would be `update_password: always` on the other hand, the module should always set the password and would always report `changed`
| True | Add update_password option to os_user module - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
os_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
$ ansible --version
ansible 2.1.2.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
The `os_user` module with a password specified for a user will always report 'changed'.
The conclusion of the bug report in #5183 was that in order to "fix" this we need to add another parameter like the one in the `user` module.
I.e. a parameter called `update_password` that has options `on_create` or `always`.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
```
- name: "Create test user"
os_user:
name: test
state: present
password: very-secret
default_project: a-existing-project
update_password: on_create
```
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
On first run, the user would be created and the password set.
On the second run, given that nothing changed, the task would say `ok`.
If the parameter would be `update_password: always` on the other hand, the module should always set the password and would always report `changed`
| main | add update password option to os user module issue type feature idea component name os user ansible version ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary the os user module with a password specified for a user will always report changed the conclusion of the bug report in was that in order to fix this we need to add another parameter like the on in the user module i e a parameter called update password that has options on create or always steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name create test user os user name test state present password very secret default project a existing project update password on create expected results on first run the user would be created and the password set on the second run given that nothing changed the task would say ok if the parameter would be update password always on the other hand the module should always set the password and would always report changed | 1 |
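The `update_password` semantics requested above are easy to pin down as a small decision function. A minimal sketch, assuming a hypothetical `set_password` callback — it is not taken from the real `os_user` module.
```python
def handle_password(user_exists, update_password, set_password):
    """Decide whether to (re)set the password, mirroring the `user`
    module's parameter. `update_password` is 'on_create' or 'always'.
    Returns True when the task should report 'changed'."""
    if update_password == 'always':
        set_password()   # always reset: task always reports changed
        return True
    if update_password == 'on_create' and not user_exists:
        set_password()   # set only on first creation
        return True
    return False         # existing user with on_create: report ok
```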
3,814 | 16,582,459,621 | IssuesEvent | 2021-05-31 13:41:23 | cloverhearts/quilljs-markdown | https://api.github.com/repos/cloverhearts/quilljs-markdown | closed | Webpack issue | Saw with Maintainer | Hi, I have an issue trying to run this in my project.
```
Module not found: Error: Can't resolve 'core-js/stable' in '/.../node_modules/quilljs-markdown/src'
```
Seems like there is some conflict in node modules. This is weird because I would expect each dependency to manage its own versions.
Any idea how I could solve this? | True | Webpack issue - Hi, I have an issue trying to run this in my project.
```
Module not found: Error: Can't resolve 'core-js/stable' in '/.../node_modules/quilljs-markdown/src'
```
Seems like there is some conflict in node modules. This is weird because I would expect each dependency to manage its own versions.
Any idea how I could solve this? | main | webpack issue hi i have an issue trying to run this in my project module not found error can t resolve core js stable in node modules quilljs markdown src seems like there is some conflict in node modules this is weird because i would expect each dependency to manage its own versions any idea how i could solve this | 1 |
2,207 | 7,802,930,957 | IssuesEvent | 2018-06-10 17:57:43 | react-navigation/react-navigation | https://api.github.com/repos/react-navigation/react-navigation | closed | componentWillReceiveProps and componentDidUpdate unexpected behaviour on v2 | needs response from maintainer | ### Current Behaviour
When switching screens on a TabNavigator, React lifecycle hooks componentWillReceiveProps and componentDidUpdate of the screens are getting triggered when navigating in and out of them.
Navigating from A to B:
- CWRP and CDU from A are triggered.
- CWRP and CDU from B are triggered.
Navigating from B to A:
- CWRP and CDU from A are triggered.
- CWRP and CDU from B are triggered.
This behaviour was not present on v1.
### Expected Behavior
componentWillReceiveProps and componentDidUpdate should only be triggered when params are explicitly set on a route.
### How to reproduce
- Have a simple TabNavigator with two react components as screens.
- Put componentWillReceiveProps and componentDidUpdate lifecycle hooks on each component.
- Navigate between them.
See code below:
```
import React, { Component } from "react";
import { StyleSheet, Text, View } from "react-native";
import { createBottomTabNavigator } from "react-navigation";
class ScreenA extends React.Component {
componentWillReceiveProps(nextProps) {
console.log("CWRP: Screen A");
}
componentDidUpdate(prevProps, prevState) {
console.log("CDU: Screen A");
}
render() {
return (
<View style={{ flex: 1, alignItems: "center", justifyContent: "center" }}>
<Text>Screen A</Text>
</View>
);
}
}
class ScreenB extends React.Component {
componentWillReceiveProps(nextProps) {
console.log("CWRP: Screen B");
}
componentDidUpdate(prevProps, prevState) {
console.log("CDU: Screen B");
}
render() {
return (
<View style={{ flex: 1, alignItems: "center", justifyContent: "center" }}>
<Text>Screen B</Text>
</View>
);
}
}
const RootStack = createBottomTabNavigator({
A: ScreenA,
B: ScreenB
});
export default class App extends Component {
render() {
return <RootStack />;
}
}
```
### Your Environment
| software | version
| ---------------- | -------
| react-navigation | 2.0.1
| react-native | 0.53.0
| True | componentWillReceiveProps and componentDidUpdate unexpected behaviour on v2 - ### Current Behaviour
When switching screens on a TabNavigator, React lifecycle hooks componentWillReceiveProps and componentDidUpdate of the screens are getting triggered when navigating in and out of them.
Navigating from A to B:
- CWRP and CDU from A are triggered.
- CWRP and CDU from B are triggered.
Navigating from B to A:
- CWRP and CDU from A are triggered.
- CWRP and CDU from B are triggered.
This behaviour was not present on v1.
### Expected Behavior
componentWillReceiveProps and componentDidUpdate should only be triggered when params are explicitly set on a route.
### How to reproduce
- Have a simple TabNavigator with two react components as screens.
- Put componentWillReceiveProps and componentDidUpdate lifecycle hooks on each component.
- Navigate between them.
See code below:
```
import React, { Component } from "react";
import { StyleSheet, Text, View } from "react-native";
import { createBottomTabNavigator } from "react-navigation";
class ScreenA extends React.Component {
componentWillReceiveProps(nextProps) {
console.log("CWRP: Screen A");
}
componentDidUpdate(prevProps, prevState) {
console.log("CDU: Screen A");
}
render() {
return (
<View style={{ flex: 1, alignItems: "center", justifyContent: "center" }}>
<Text>Screen A</Text>
</View>
);
}
}
class ScreenB extends React.Component {
componentWillReceiveProps(nextProps) {
console.log("CWRP: Screen B");
}
componentDidUpdate(prevProps, prevState) {
console.log("CDU: Screen B");
}
render() {
return (
<View style={{ flex: 1, alignItems: "center", justifyContent: "center" }}>
<Text>Screen B</Text>
</View>
);
}
}
const RootStack = createBottomTabNavigator({
A: ScreenA,
B: ScreenB
});
export default class App extends Component {
render() {
return <RootStack />;
}
}
```
### Your Environment
| software | version
| ---------------- | -------
| react-navigation | 2.0.1
| react-native | 0.53.0
| main | componentwillreceiveprops and componentdidupdate unexpected behaviour on current behaviour when switching screens on a tabnavigator react lifecycle hooks componentwillreceiveprops and componentdidupdate of the screens are getting triggered when navigating in and out of them navigating from a to b cwrp and cdu from a are triggered cwrp and cdu from b are triggered navigating from b to a cwrp and cdu from a are triggered cwrp and cdu from b are triggered this behaviour was not present on expected behavior componentwillreceiveprops and componentdidupdate should be triggered when setting params explicitly to a route how to reproduce have a simple tabnavigator with two react components as screens put componentwillreceiveprops and componentdidupdate lifecycle hooks on each component navigate between them see code below import react component from react import stylesheet text view from react native import createbottomtabnavigator from react navigation class screena extends react component componentwillreceiveprops nextprops console log cwrp screen a componentdidupdate prevprops prevstate console log cdu screen a render return screen a class screenb extends react component componentwillreceiveprops nextprops console log cwrp screen b componentdidupdate prevprops prevstate console log cdu screen b render return screen b const rootstack createbottomtabnavigator a screena b screenb export default class app extends component render return your environment software version react navigation react native | 1 |
2,870 | 10,275,978,293 | IssuesEvent | 2019-08-24 13:16:10 | arcticicestudio/arctic | https://api.github.com/repos/arcticicestudio/arctic | opened | Prettier | context-workflow scope-dx scope-maintainability scope-quality type-feature | <p align="center"><img src="https://user-images.githubusercontent.com/7836623/63637792-4dcef380-c681-11e9-9252-f2fb22499985.png" width="30%" /></p>
Integrate [Prettier][], the opinionated code formatter with support for many languages and integrations with most editors. It ensures that all outputted code conforms to a consistent style.
### Configuration
This is one of the main features of Prettier: It already provides the best and recommended style configurations out-of-the-box™.
The only option we will change is the [print width][prettier-docs-pwidth]. It is set to 80 by default, which is not up-to-date for modern screens (it might only be relevant when working in terminals, e.g. with Vim). It'll be changed to 120, as used by all of Arctic Ice Studio's style guides.
The `prettier.config.js` configuration file will be placed in the project root as well as the `.prettierignore` file to also define ignore pattern.
### ESLint Compatibility
To be fully compatible with ESLint, [eslint-plugin-prettier][gh-eslint-plugin-prettier] has already been included in #30 as well as the [set of recommended rules][gh-eslint-config-prettier] via the [`@arcticicestudio/eslint-config/prettier`][stg-js-esl#ep] and [`@arcticicestudio/eslint-config-typescript/prettier`][stg-js-esl-ts#ep] extension entry points.
### Package Script
To allow formatting all sources, a `format:pretty` package script will be added that'll also run in the main `format` script flow.
## Tasks
- [ ] Install [prettier][npm-prettier].
- [ ] Implement `prettier.config.js` configuration file.
- [ ] Implement `.prettierignore` ignore pattern file.
- [ ] Implement `format:pretty` package script and add to main `format` script flow.
- [ ] Format current code base for the first time and fix possible style guide violations using the configured linters of the project.
[eslint-docs-config-plugins]: https://eslint.org/docs/user-guide/configuring#configuring-plugins
[gh-eslint-config-prettier]: https://github.com/prettier/eslint-config-prettier
[gh-eslint-plugin-prettier]: https://github.com/prettier/eslint-plugin-prettier
[npm-eslint-plugin-prettier]: https://www.npmjs.com/package/eslint-plugin-prettier
[npm-prettier]: https://www.npmjs.com/package/prettier
[prettier-blog-1.15-mdx]: https://prettier.io/blog/2018/11/07/1.15.0.html#mdx
[prettier-docs-pwidth]: https://prettier.io/docs/en/options.html#print-width
[prettier]: https://prettier.io
[stg-js-esl-c#ep]: https://github.com/arcticicestudio/styleguide-javascript/blob/develop/packages/%40arcticicestudio/eslint-config/README.md#entry-points
[stg-js-esl-ts#ep]: https://github.com/arcticicestudio/styleguide-javascript/blob/develop/packages/%40arcticicestudio/eslint-config-typescript/README.md#entry-points
| True | Prettier - <p align="center"><img src="https://user-images.githubusercontent.com/7836623/63637792-4dcef380-c681-11e9-9252-f2fb22499985.png" width="30%" /></p>
Integrate [Prettier][], the opinionated code formatter with support for many languages and integrations with most editors. It ensures that all outputted code conforms to a consistent style.
### Configuration
This is one of the main features of Prettier: It already provides the best and recommended style configurations out-of-the-box™.
The only option we will change is the [print width][prettier-docs-pwidth]. It is set to 80 by default, which is not up-to-date for modern screens (it might only be relevant when working in terminals, e.g. with Vim). It'll be changed to 120, as used by all of Arctic Ice Studio's style guides.
The `prettier.config.js` configuration file will be placed in the project root as well as the `.prettierignore` file to also define ignore pattern.
### ESLint Compatibility
To be fully compatible with ESLint, [eslint-plugin-prettier][gh-eslint-plugin-prettier] has already been included in #30 as well as the [set of recommended rules][gh-eslint-config-prettier] via the [`@arcticicestudio/eslint-config/prettier`][stg-js-esl#ep] and [`@arcticicestudio/eslint-config-typescript/prettier`][stg-js-esl-ts#ep] extension entry points.
### Package Script
To allow formatting all sources, a `format:pretty` package script will be added that'll also run in the main `format` script flow.
## Tasks
- [ ] Install [prettier][npm-prettier].
- [ ] Implement `prettier.config.js` configuration file.
- [ ] Implement `.prettierignore` ignore pattern file.
- [ ] Implement `format:pretty` package script and add to main `format` script flow.
- [ ] Format current code base for the first time and fix possible style guide violations using the configured linters of the project.
[eslint-docs-config-plugins]: https://eslint.org/docs/user-guide/configuring#configuring-plugins
[gh-eslint-config-prettier]: https://github.com/prettier/eslint-config-prettier
[gh-eslint-plugin-prettier]: https://github.com/prettier/eslint-plugin-prettier
[npm-eslint-plugin-prettier]: https://www.npmjs.com/package/eslint-plugin-prettier
[npm-prettier]: https://www.npmjs.com/package/prettier
[prettier-blog-1.15-mdx]: https://prettier.io/blog/2018/11/07/1.15.0.html#mdx
[prettier-docs-pwidth]: https://prettier.io/docs/en/options.html#print-width
[prettier]: https://prettier.io
[stg-js-esl-c#ep]: https://github.com/arcticicestudio/styleguide-javascript/blob/develop/packages/%40arcticicestudio/eslint-config/README.md#entry-points
[stg-js-esl-ts#ep]: https://github.com/arcticicestudio/styleguide-javascript/blob/develop/packages/%40arcticicestudio/eslint-config-typescript/README.md#entry-points
| main | prettier integrate the opinionated code formatter with support for many languages and integrations with most editors it ensures that all outputted code conforms to a consistent style configuration this is one of the main features of prettier it already provides the best and recommended style configurations of out the box™ the only option we will change is the it is set to by default which not up to date for modern screens might only be relevant when working in terminals only like e g with vim it ll be changed to used by all of arctic ice studio s style guides the prettier config js configuration file will be placed in the project root as well as the prettierignore file to also define ignore pattern eslint compatibility to be fully compatible with eslint has already been included in as well as the via the and extension entry points package script to allow to format all sources a format pretty package script will be added that ll also run in the main format script flow tasks install implement prettier config js configuration file implement prettierignore ignore pattern file implement format pretty package script and add to main format script flow format current code base for the first time and fix possible style guide violations using the configured linters of the project | 1 |
3,650 | 14,911,831,016 | IssuesEvent | 2021-01-22 11:41:47 | laminas/laminas-validator | https://api.github.com/repos/laminas/laminas-validator | opened | Release 2.14.1 is not mirrored on packagist | Awaiting Maintainer Response Bug | ### Bug Report
| Q | A
|------------ | ------
| Version(s) | 2.14.1
#### Summary
On https://packagist.org/packages/laminas/laminas-validator, release 2.14.1 is not visible.
I forced an update, and it states: "Last update: 2021-01-22 11:38:59 UTC ".
Unsure whether the problem lies on @laminas or Packagist. | True | Release 2.14.1 is not mirrored on packagist - ### Bug Report
| Q | A
|------------ | ------
| Version(s) | 2.14.1
#### Summary
On https://packagist.org/packages/laminas/laminas-validator, release 2.14.1 is not visible.
I forced an update, and it states: "Last update: 2021-01-22 11:38:59 UTC ".
Unsure whether the problem lies on @laminas or Packagist. | main | release is not mirrored on packagist bug report q a version s summary on release is not visible i forced an update and it states last update utc unsure whether the problem lies on laminas or packagist | 1 |
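Whether a tag has actually reached Packagist can be checked directly against its Composer v2 metadata endpoint. A small sketch, assuming the documented `repo.packagist.org/p2/...` URL scheme and treating the exact JSON shape as an assumption:
```python
import requests

def packagist_versions(package):
    """Fetch the version list from Packagist's Composer v2 metadata."""
    url = f'https://repo.packagist.org/p2/{package}.json'
    data = requests.get(url, timeout=10).json()
    return [entry['version'] for entry in data['packages'][package]]

# Prints True once the 2.14.1 tag has been mirrored.
print('2.14.1' in packagist_versions('laminas/laminas-validator'))
```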
3,251 | 12,401,598,111 | IssuesEvent | 2020-05-21 10:11:31 | permon/permon | https://api.github.com/repos/permon/permon | opened | make use of PETSc MATPRODUCT | enhancement maintainability | We should abandon our own implementation in favor of the PETSc one, which seems to be under intensive development now (https://gitlab.com/petsc/petsc/-/merge_requests/2800).
If there is anything useful on our side, it should be contributed to PETSc. | True | make use of PETSc MATPRODUCT - We should abandon our own implementation in favor of the PETSc one, which seems to be under intensive development now (https://gitlab.com/petsc/petsc/-/merge_requests/2800).
If there is anything useful on our side, it should be contributed to PETSc. | main | make use of petsc matproduct we should abandon our own implementation in favor of the petsc one which seems to be under intensive development now if there is anything useful on our side it should be contributed to petsc | 1 |
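For the record above, the PETSc-side product is already reachable from Python. A hedged sketch using petsc4py — assuming a petsc4py build where `Mat.matMult` wraps the MATPRODUCT machinery; it illustrates delegating the product to PETSc and is not PERMON code.
```python
from petsc4py import PETSc

n = 4
A = PETSc.Mat().createAIJ([n, n]); A.setUp()
B = PETSc.Mat().createAIJ([n, n]); B.setUp()
for i in range(n):          # simple diagonal test matrices
    A[i, i] = 2.0
    B[i, i] = 3.0
A.assemble(); B.assemble()

C = A.matMult(B)            # PETSc computes the product; no hand-rolled loop
print(C[0, 0])              # -> 6.0
```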
47,757 | 19,712,874,848 | IssuesEvent | 2022-01-13 08:02:09 | Azure/azure-sdk-for-go | https://api.github.com/repos/Azure/azure-sdk-for-go | closed | web.CertificatesClient.CreateOrUpdate always returns error due to not supporting 202 Accepted Rest Response | question Service Attention Mgmt customer-reported Web Apps needs-author-feedback no-recent-activity | ### Bug Report
Paths:
`/services/web/mgmt/2019-08-01/web/certificates.go`
`/services/web/mgmt/2020-06-01/web/certificates.go`
Version: v48.1
**- What happened?**
When attempting to CreateOrUpdate a Certificate, the service always (at least for managed certificates) returns a `202 Accepted` asynchronous "future creation" response, which the AutoRest client errors on in the following code, specifically Line 127:
https://github.com/Azure/azure-sdk-for-go/blob/48b0fcf4b1044723e27a96a411a3d96b5ad54c6a/services/web/mgmt/2020-06-01/web/certificates.go#L122-L132
```
AzureRM Response for https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/adamtesttf2/providers/Microsoft.Web/certificates/adamtest.xxxxxxx.com?api-version=2020-06-01:
HTTP/2.0 202 Accepted
Content-Length: 0
Cache-Control: no-cache
Date: Thu, 19 Nov 2020 03:21:57 GMT
Expires: -1
Location: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/adamtesttf2/providers/Microsoft.Web/certificates/adamtest.xxxxxxx.com/operationresults/00000000-0000-0000-0000-000000000000?api-version=2020-06-01
Pragma: no-cache
Retry-After: 15
Server: Microsoft-IIS/10.0
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Aspnet-Version: 4.0.30319
X-Content-Type-Options: nosniff
X-Ms-Correlation-Request-Id: 00000000-0000-0000-0000-000000000000
X-Ms-Ratelimit-Remaining-Subscription-Writes: 1199
X-Ms-Request-Id: 00000000-0000-0000-0000-000000000000
X-Ms-Routing-Request-Id: AUSTRALIAEAST:20201119T032158Z:00000000-0000-0000-0000-000000000000
X-Powered-By: ASP.NET
```
**- What did you expect or want to happen?**
Return either the created certificate or a `CreateOrUpdateFuture` handle.
**- How can we reproduce it?**
Create a managed certificate against a Premium App Service Plan Web App, which already has a bound custom domain name. Example code below (this also includes my work around to capture the 202 error response):
```go
client := meta.(*clients.Client).Web.CertificatesClient
//...
certificate := web.Certificate{
CertificateProperties: &web.CertificateProperties{
CanonicalName: utils.String(name),
ServerFarmID: utils.String(appServicePlanID),
Password: new(string),
},
Location: utils.String(location),
Tags: tags.Expand(t),
}
if _, err := client.CreateOrUpdate(ctx, resourceGroup, name, certificate); err != nil {
if !strings.Contains(err.Error(), "StatusCode=202") { // <--- this StatusCode=202 error always happens
return fmt.Errorf("Error creating/updating App Service Managed Certificate %q (Resource Group %q): %s", name, resourceGroup, err)
}
}
time.Sleep(30 * time.Second) // <- temporary workaround, but not reliable, as may be longer and could be well shorter
read, err := client.Get(ctx, resourceGroup, name)
```
I should be able to use the following pattern:
```go
createFuture, err := client.CreateOrUpdate(ctx, resourceGroup, name, certificate)
if err != nil { return err }
err = createFuture.WaitForCompletionRef(ctx, client.Client)
if err != nil { return err }
``` | 1.0 | web.CertificatesClient.CreateOrUpdate always returns error due to not supporting 202 Accepted Rest Response - ### Bug Report
Paths:
`/services/web/mgmt/2019-08-01/web/certificates.go`
`/services/web/mgmt/2020-06-01/web/certificates.go`
Version: v48.1
**- What happened?**
When attempting to CreateOrUpdate a Certificate, the service always (at least for managed certificates) returns a `202 Accepted` asynchronous "future creation" response, which the AutoRest client errors on in the following code, specifically Line 127:
https://github.com/Azure/azure-sdk-for-go/blob/48b0fcf4b1044723e27a96a411a3d96b5ad54c6a/services/web/mgmt/2020-06-01/web/certificates.go#L122-L132
```
AzureRM Response for https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/adamtesttf2/providers/Microsoft.Web/certificates/adamtest.xxxxxxx.com?api-version=2020-06-01:
HTTP/2.0 202 Accepted
Content-Length: 0
Cache-Control: no-cache
Date: Thu, 19 Nov 2020 03:21:57 GMT
Expires: -1
Location: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/adamtesttf2/providers/Microsoft.Web/certificates/adamtest.xxxxxxx.com/operationresults/00000000-0000-0000-0000-000000000000?api-version=2020-06-01
Pragma: no-cache
Retry-After: 15
Server: Microsoft-IIS/10.0
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Aspnet-Version: 4.0.30319
X-Content-Type-Options: nosniff
X-Ms-Correlation-Request-Id: 00000000-0000-0000-0000-000000000000
X-Ms-Ratelimit-Remaining-Subscription-Writes: 1199
X-Ms-Request-Id: 00000000-0000-0000-0000-000000000000
X-Ms-Routing-Request-Id: AUSTRALIAEAST:20201119T032158Z:00000000-0000-0000-0000-000000000000
X-Powered-By: ASP.NET
```
**- What did you expect or want to happen?**
Return either the created certificate or a `CreateOrUpdateFuture` handle.
**- How can we reproduce it?**
Create a managed certificate against a Premium App Service Plan Web App, which already has a bound custom domain name. Example code below (this also includes my work around to capture the 202 error response):
```go
client := meta.(*clients.Client).Web.CertificatesClient
//...
certificate := web.Certificate{
CertificateProperties: &web.CertificateProperties{
CanonicalName: utils.String(name),
ServerFarmID: utils.String(appServicePlanID),
Password: new(string),
},
Location: utils.String(location),
Tags: tags.Expand(t),
}
if _, err := client.CreateOrUpdate(ctx, resourceGroup, name, certificate); err != nil {
if !strings.Contains(err.Error(), "StatusCode=202") { // <--- this StatusCode=202 error always happens
return fmt.Errorf("Error creating/updating App Service Managed Certificate %q (Resource Group %q): %s", name, resourceGroup, err)
}
}
time.Sleep(30 * time.Second) // <- temporary workaround, but not reliable, as may be longer and could be well shorter
read, err := client.Get(ctx, resourceGroup, name)
```
I should be able to use the following pattern:
```go
createFuture, err := client.CreateOrUpdate(ctx, resourceGroup, name, certificate)
if err != nil { return err }
err = createFuture.WaitForCompletionRef(ctx, client.Client)
if err != nil { return err }
``` | non_main | web certificatesclient createorupdate always returns error due to not supporting accepted rest response bug report paths services web mgmt web certificates go services web mgmt web certificates go version what happened when attempting to createorupdate a certificate the autorest client always at least for managed certificates returns a accepted asynchronous future creation response which the client errors on in the following code specifically line azurerm response for http accepted content length cache control no cache date thu nov gmt expires location pragma no cache retry after server microsoft iis strict transport security max age includesubdomains x aspnet version x content type options nosniff x ms correlation request id x ms ratelimit remaining subscription writes x ms request id x ms routing request id australiaeast x powered by asp net what did you expect or want to happen return either the created certificate or a createorupdatefuture handle how can we reproduce it create a managed certificate against a premium app service plan web app which already has a bound custom domain name example code below this also includes my work around to capture the error response go client meta clients client web certificatesclient certificate web certificate certificateproperties web certificateproperties canonicalname utils string name serverfarmid utils string appserviceplanid password new string location utils string location tags tags expand t if err client createorupdate ctx resourcegroup name certificate err nil if strings contains err error statuscode this statuscode error always happens return fmt errorf error creating updating app service managed certificate q resource group q s name resourcegroup err time sleep time second temporary workaround but not reliable as may be longer and could be well shorter read err client get ctx resourcegroup name i should be able to use the following pattern go createfuture err client createorupdate ctx resourcegroup name certificate if err nil return err err createfuture waitforcompletionref ctx client client if err nil return err | 0 |
259 | 3,008,051,143 | IssuesEvent | 2015-07-27 19:12:15 | borisblizzard/arcreator | https://api.github.com/repos/borisblizzard/arcreator | opened | Move Project configuration files away from INI to JSON | Editor Related enhancement Maintainability | Currently project files look like this in INI format
Default Project.arcproj:
[Project]
Title=Default Project
[Files]
list=Actors|Classes|Skills|Items|Weapons|Armors|Enemies|Troops|States|Animations|Tilesets|CommonEvents|System|MapInfos
For internal consistency and to make adding new features easier, the format should be changed to JSON.
{
"Project": {
"Title": "Default Project"
},
"Files": {
"list": ["Actors", "Classes", "Skills", "Items", "Weapons", "Armors", "Enemies", "Troops", "States", "Animations", "Tilesets", "CommonEvents", "System", "MapInfos"]
}
}
this effort should coincide with the efforts in issue #32 to rework the project system to FileHandelers | True | Move Project configuration files away from INI to JSON - Currently project files look like this in INI format
Default Project.arcproj:
[Project]
Title=Default Project
[Files]
list=Actors|Classes|Skills|Items|Weapons|Armors|Enemies|Troops|States|Animations|Tilesets|CommonEvents|System|MapInfos
For internal consistency and to make adding new features easier, the format should be changed to JSON.
{
"Project": {
"Title": "Default Project"
},
"Files": {
"list": ["Actors", "Classes", "Skills", "Items", "Weapons", "Armors", "Enemies", "Troops", "States", "Animations", "Tilesets", "CommonEvents", "System", "MapInfos"]
}
}
this effort should coincide with the efforts in issue #32 to rework the project system to FileHandelers | main | move project configuration files away from ini to json currently project files look like this in ini format default project arcproj title default project list actors classes skills items weapons armors enemies troops states animations tilesets commonevents system mapinfos for internal consistency and to make adding new features easier the format should be changed to json project title default project files list this effort should coincide with the efforts in issue to rework the project system to filehandelers | 1 |
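The INI-to-JSON migration described above is mechanical enough to script. A minimal sketch, assuming Python's standard `configparser`/`json` modules and the pipe-delimited `list` convention shown in the record:
```python
import configparser
import json

def arcproj_ini_to_json(ini_path):
    """Convert a legacy .arcproj INI file to the proposed JSON layout,
    expanding the pipe-delimited 'list' value into a JSON array."""
    cp = configparser.ConfigParser()
    cp.optionxform = str  # preserve key case ('Title', not 'title')
    cp.read(ini_path)
    data = {section: dict(cp.items(section)) for section in cp.sections()}
    if 'Files' in data and 'list' in data['Files']:
        data['Files']['list'] = data['Files']['list'].split('|')
    return json.dumps(data, indent=4)

print(arcproj_ini_to_json('Default Project.arcproj'))
```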
154,112 | 24,249,762,177 | IssuesEvent | 2022-09-27 13:25:47 | EscolaDeSaudePublica/DesignLab | https://api.github.com/repos/EscolaDeSaudePublica/DesignLab | opened | UFC Class | Formatting | Sem Projeto Definido Prioridade Design: Média | ## **Objective**
**As a** designer
**I want** to format the Felicilab Methodology presentation
**In order to** present it to the UFC students
## **Context**
- We were invited to give a talk to the master's and doctoral students of the UFC PPG graduate program and the Fiocruz Professional Master's in Health. So, we need to understand the expectations for the meeting in order to format the presentation.
## **Scope**
- [ ] Format the presentation
## Notes
Supporting links and additional information | 1.0 | UFC Class | Formatting - ## **Objective**
**As a** designer
**I want** to format the Felicilab Methodology presentation
**In order to** present it to the UFC students
## **Context**
- We were invited to give a talk to the master's and doctoral students of the UFC PPG graduate program and the Fiocruz Professional Master's in Health. So, we need to understand the expectations for the meeting in order to format the presentation.
## **Scope**
- [ ] Format the presentation
## Notes
Supporting links and additional information | non_main | ufc class formatting objective as a designer i want to format the felicilab methodology presentation in order to present it to the ufc students context we were invited to give a talk to the master s and doctoral students of the ufc ppg graduate program and the fiocruz professional master s in health so we need to understand the expectations for the meeting in order to format the presentation scope format the presentation notes supporting links and additional information | 0 |
4,521 | 23,519,036,231 | IssuesEvent | 2022-08-19 02:26:17 | Lissy93/dashy | https://api.github.com/repos/Lissy93/dashy | closed | [BUG] 404 on Chrome, but not Firefox or Edge | 🐛 Bug 👤 Awaiting Maintainer Response | ### Environment
Self-Hosted (Docker)
### System
Chrome 104.0.5112.102
### Version
Dashy V-2.1.1
### Describe the problem
When loading dashy I get this error in browser console:
`Refused to apply style from 'https://dash.myhm.space/css/chunk-vendors.d8067ad8.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.`
which is then followed by a bunch of 404's for several other chunk-vendors files
this issue only occurs on Chrome, as I can load the site no problem on Firefox and Edge
attached are the logs from dashy
[_my-dashboard_logs.txt](https://github.com/Lissy93/dashy/files/9377496/_my-dashboard_logs.txt)
### Additional info
_No response_
### Please tick the boxes
- [X] You have explained the issue clearly, and included all relevant info
- [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy
- [X] You've checked that this [issue hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue)
- [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide 
- [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct) | True | [BUG] 404 on Chrome, but not Firefox or Edge - ### Environment
Self-Hosted (Docker)
### System
Chrome 104.0.5112.102
### Version
Dashy V-2.1.1
### Describe the problem
When loading dashy I get this error in browser console:
`Refused to apply style from 'https://dash.myhm.space/css/chunk-vendors.d8067ad8.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.`
which is then followed by a bunch of 404's for several other chunk-vendors files
this issue only occurs on Chrome, as I can load the site no problem on Firefox and Edge
attached are the logs from dashy
[_my-dashboard_logs.txt](https://github.com/Lissy93/dashy/files/9377496/_my-dashboard_logs.txt)
### Additional info
_No response_
### Please tick the boxes
- [X] You have explained the issue clearly, and included all relevant info
- [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy
- [X] You've checked that this [issue hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue)
- [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide 
- [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct) | main | on chrome but not firefox or edge environment self hosted docker system chrome version dashy v describe the problem when loading dashy i get this error in browser console refused to apply style from because its mime type text html is not a supported stylesheet mime type and strict mime checking is enabled which is then followed by a bunch of s for several other chunk vendors file this issue only occurs on chrome as i can load the site no problem on firefox and edged attached are the logs from dashy s additional info no response please tick the boxes you have explained the issue clearly and included all relevant info you are using a version of dashy you ve checked that this you ve checked the and guide you agree to the | 1 |
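The MIME error in the record above usually means the asset request was answered with HTML — for example a cached index.html referencing stale hashed filenames, so the server falls back to the SPA page (a soft 404) instead of the stylesheet. A quick check one could run; the URL below is hypothetical, substitute your own deployment:
```python
import requests

url = 'https://dashy.example.com/css/chunk-vendors.d8067ad8.css'
resp = requests.get(url, timeout=10)
print(resp.status_code, resp.headers.get('Content-Type'))
# 'text/html' here confirms the server returned the fallback page
# instead of CSS, which is what trips Chrome's strict MIME checking.
```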
3,097 | 11,760,028,363 | IssuesEvent | 2020-03-13 18:31:54 | cloud-gov/cg-product | https://api.github.com/repos/cloud-gov/cg-product | closed | Ensure new RDS instances are getting the latest version available | compliance contractor-3-maintainability | In order to ensure customers won't be dinged for running a vulnerable database, we want customers to be using supported, up-to-date versions of RDS instances.
## Acceptance Criteria
- [ ] WHEN I look at the list of RDS plans on cloud.gov, THEN I see that the versions listed are recent and non-vulnerable versions.
- [ ] WHEN I create-service on any RDS plan, THEN I see that the version created matches the version listed in the plan.
## Implementation sketch
- Update the docs page
- Update the plans in the broker | True | Ensure new RDS instances are getting the latest version available - In order to ensure customers won't be dinged for running a vulnerable database, we want customers to be using supported, up-to-date versions of RDS instances.
## Acceptance Criteria
- [ ] WHEN I look at the list of RDS plans on cloud.gov, THEN I see that the versions listed are recent and non-vulnerable versions.
- [ ] WHEN I create-service on any RDS plan, THEN I see that the version created matches the version listed in the plan.
## Implementation sketch
- Update the docs page
- Update the plans in the broker | main | ensure new rds instances are getting the latest version available in order to ensure customers won t be dinged for running a vulnerable database we want customers to be using supported up to date versions of rds instances acceptance criteria when i look at the list of rds plans on cloud gov then i see that the versions listed are recent and non vulnerable versions when i create service on any rds plan then i see that the version created matches the version listed in the plan implementation sketch update the docs page update the plans in the broker | 1 |
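Checking the broker's plans against what AWS currently offers can be scripted. A hedged boto3 sketch — it assumes AWS credentials are configured and simply lists the engine versions RDS reports as available:
```python
import boto3

rds = boto3.client('rds')

def available_engine_versions(engine):
    """List the versions AWS currently offers for an engine, so plan
    definitions can be compared against them."""
    paginator = rds.get_paginator('describe_db_engine_versions')
    versions = []
    for page in paginator.paginate(Engine=engine):
        versions.extend(v['EngineVersion'] for v in page['DBEngineVersions'])
    return sorted(versions)  # note: lexicographic sort, fine for a spot check

print(available_engine_versions('postgres')[-5:])
```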
3,671 | 15,023,388,093 | IssuesEvent | 2021-02-01 18:10:08 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | carbon-components-react/es/components/UIShell/HeaderGlobalAction.js does not export 'default' | status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 type: bug 🐛 | <!-- Feel free to remove sections that aren't relevant.
## Title line template: [Title]: Brief description
-->
## What package(s) are you using?
<!--
Add an x in one of the options below, for example:
- [x] package name
-->
- [ ] `carbon-components`
- [x] `carbon-components-react`
## Detailed description
> Describe in detail the issue you're having.
I'm using Parcel with --experimental-scope-hoisting (because the icons are clogging our package - #4847)
> Is this issue related to a specific component?
Possibly related to UIShell specifically, but it might be more general
> What did you expect to happen? What happened instead? What would you like to
> see changed?
Expected Parcel to bundle without errors.
Instead, received error: `🚨 ../node_modules/carbon-components-react/es/index.js does not export 'default'`
> What version of the Carbon Design System are you using?
7.27.0
## Steps to reproduce the issue
1. Create a project with Parcel
2. Run `parcel build --experimental-scope-hoisting` | True | carbon-components-react/es/components/UIShell/HeaderGlobalAction.js does not export 'default' - <!-- Feel free to remove sections that aren't relevant.
## Title line template: [Title]: Brief description
-->
## What package(s) are you using?
<!--
Add an x in one of the options below, for example:
- [x] package name
-->
- [ ] `carbon-components`
- [x] `carbon-components-react`
## Detailed description
> Describe in detail the issue you're having.
I'm using Parcel with --experimental-scope-hoisting (because the icons are clogging our package - #4847)
> Is this issue related to a specific component?
Possibly related to UIShell specifically, but it might be more general
> What did you expect to happen? What happened instead? What would you like to
> see changed?
Expected Parcel to bundle without errors.
Instead, received error: `🚨 ../node_modules/carbon-components-react/es/index.js does not export 'default'`
> What version of the Carbon Design System are you using?
7.27.0
## Steps to reproduce the issue
1. Create a project with Parcel
2. Run `parcel build --experimental-scope-hoisting` | main | carbon components react es components uishell headerglobalaction js does not export default feel free to remove sections that aren t relevant title line template brief description what package s are you using add an x in one of the options below for example package name carbon components carbon components react detailed description describe in detail the issue you re having i m using parcel with experimental tree hoisting because the icons are clogging our package is this issue related to a specific component possibly related to uishell specifically but it might be more general what did you expect to happen what happened instead what would you like to see changed expected parcel to bundle without errors instead received error 🚨 node modules carbon components react es index js does not export default what version of the carbon design system are you using steps to reproduce the issue create a project with parcel run parcel build experimental scope hoisting | 1 |
3,942 | 17,791,030,723 | IssuesEvent | 2021-08-31 16:11:53 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | opened | Application to join: alvinjohnsonso | Maintainer application | Hello and welcome to the contrib application process! We're happy to have you :)
## Please note these 3 requirements for new contrib projects:
- [X] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [X] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [X] If porting a Drupal 7 project, Maintain the Git history from Drupal.
## Please provide the following information:
**The name of your module, theme, or layout**
tawk.to
**(Optional) Post a link here to an issue in the drupal.org queue notifying the Drupal 7 maintainers that you are working on a Backdrop port of their project**
No need for posting. I'm one of the maintainers of tawk.to's Drupal plugin.
**Post a link to your new Backdrop project under your own GitHub account (option #1)**
The repo isn't under my account but under tawk.to's organization. https://github.com/tawk/tawk-backdrop
**OR (option #2) If you have already contributed code to Backdrop core or contrib projects, please provide 1-3 links to pull requests or commits**
**OR (option #3) If you do not intend to contribute code, but would like to update documentation, manage issue queues, etc, please tag an existing contrib group member so they can post their recommendation**
<!-- example: @jenlampton -->
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
**If you have chosen option #3 above, do you agree to undergo this same maintainer application process again, should you decide to contribute code in the future?**
YES/no
<!-- Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project. -->
<!-- Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group. -->
| True | Application to join: alvinjohnsonso - Hello and welcome to the contrib application process! We're happy to have you :)
## Please note these 3 requirements for new contrib projects:
- [X] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [X] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [X] If porting a Drupal 7 project, maintain the Git history from Drupal.
## Please provide the following information:
**The name of your module, theme, or layout**
tawk.to
**(Optional) Post a link here to an issue in the drupal.org queue notifying the Drupal 7 maintainers that you are working on a Backdrop port of their project**
No need for posting. I'm one of the maintainers of tawk.to's Drupal plugin.
**Post a link to your new Backdrop project under your own GitHub account (option #1)**
The repo isn't under my account but under tawk.to's organization. https://github.com/tawk/tawk-backdrop
**OR (option #2) If you have already contributed code to Backdrop core or contrib projects, please provide 1-3 links to pull requests or commits**
**OR (option #3) If you do not intend to contribute code, but would like to update documentation, manage issue queues, etc, please tag an existing contrib group member so they can post their recommendation**
<!-- example: @jenlampton -->
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
**If you have chosen option #3 above, do you agree to undergo this same maintainer application process again, should you decide to contribute code in the future?**
YES/no
<!-- Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project. -->
<!-- Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group. -->
| main | application to join alvinjohnsonso hello and welcome to the contrib application process we re happy to have you please note these requirements for new contrib projects include a readme md file containing license and maintainer information you can use this example include a license txt file you can use this example if porting a drupal project maintain the git history from drupal please provide the following information the name of your module theme or layout tawk to optional post a link here to an issue in the drupal org queue notifying the drupal maintainers that you are working on a backdrop port of their project no need for posting i m one of the maintainers of tawk to s drupal plugin post a link to your new backdrop project under your own github account option the repo isn t under my account but under tawk to s organization or option if you have already contributed code to backdrop core or contrib projects please provide links to pull requests or commits or option if you do not intend to contribute code but would like to update documentation manage issue queues etc please tag an existing contrib group member so they can post their recommendation if you have chosen option or above do you agree to the yes if you have chosen option above do you agree to undergo this same maintainer application process again should you decide to contribute code in the future yes no | 1 |
792 | 4,389,994,710 | IssuesEvent | 2016-08-09 00:42:21 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | unarchive failed to transfer on devel | bug_report P2 waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
unarchive
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
ansible 2.2.0 (devel 134d70e7b9) last updated 2016/07/18 12:43:12 (GMT +1000)
lib/ansible/modules/core: (detached HEAD 7de287237f) last updated 2016/07/18 12:43:16 (GMT +1000)
lib/ansible/modules/extras: (detached HEAD 68ca157f3b) last updated 2016/07/18 12:43:16 (GMT +1000)
config file = /home/linus/Documents/ansible-playbooks/ansible.cfg
configured module search path = Default w/o overrides
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
hostfile = ./inventory
remote_user = centos
host_key_checking = False
log_path = /var/log/ansible.log
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
management node: ubuntu 14.04
remote node: centos 7
##### SUMMARY
<!--- Explain the problem briefly -->
The unarchive module shows an error when src is a URL on the latest Ansible devel, but the same playbook works fine on stable Ansible 2.1.0.0.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
unarchive:
  copy: no
  src: https://someurl.tar.gz
  dest: /tmp/somefolder
  owner: user
  group: user
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
changed
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
fatal: [xx.xx.xx.xx]: FAILED! => {"changed": false, "failed": true, "msg": "Source 'https://someurl.tar.gz' failed to transfer"}
<!--- Paste verbatim command output between quotes below -->
| True | unarchive failed to transfer on devel - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
unarchive
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
ansible 2.2.0 (devel 134d70e7b9) last updated 2016/07/18 12:43:12 (GMT +1000)
lib/ansible/modules/core: (detached HEAD 7de287237f) last updated 2016/07/18 12:43:16 (GMT +1000)
lib/ansible/modules/extras: (detached HEAD 68ca157f3b) last updated 2016/07/18 12:43:16 (GMT +1000)
config file = /home/linus/Documents/ansible-playbooks/ansible.cfg
configured module search path = Default w/o overrides
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
hostfile = ./inventory
remote_user = centos
host_key_checking = False
log_path = /var/log/ansible.log
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
management node: ubuntu 14.04
remote node: centos 7
##### SUMMARY
<!--- Explain the problem briefly -->
The unarchive module shows an error when src is a URL on the latest Ansible devel, but the same playbook works fine on stable Ansible 2.1.0.0.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
unarchive:
  copy: no
  src: https://someurl.tar.gz
  dest: /tmp/somefolder
  owner: user
  group: user
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
changed
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
fatal: [xx.xx.xx.xx]: FAILED! => {"changed": false, "failed": true, "msg": "Source 'https://someurl.tar.gz' failed to transfer"}
<!--- Paste verbatim command output between quotes below -->
| main | unarchive failed to transfer on devel issue type bug report component name unarchive ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file home linus documents ansible playbooks ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables ansible managed ansible managed file modified on y m d h m s by uid on host hostfile inventory remote user centos host key checking false log path var log ansible log os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific management node ubuntu remote node centos summary unarchive module shows error when src is an url using latest ansible devel but works fine using the same playbook using ansible stable version steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used unarchive copy no src dest tmp somefolder owner user group user expected results changed actual results fatal failed changed false failed true msg source failed to transfer | 1 |
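For readers hitting the same failure: on Ansible 2.2+ the `copy: no` form quoted above was superseded by `remote_src: yes`, and downloading straight from a URL is usually written as the task below. This is a minimal sketch under that assumption — `https://someurl.tar.gz` and `/tmp/somefolder` are the report's placeholders, not real endpoints.

```yaml
- name: Fetch and unpack a tarball directly from a URL
  unarchive:
    src: https://someurl.tar.gz   # placeholder URL from the report
    dest: /tmp/somefolder
    remote_src: yes               # newer spelling of the older `copy: no`
    owner: user
    group: user
```

If the direct-URL path misbehaves on a devel build, splitting the step into `get_url` followed by `unarchive` with a local `src` is a common workaround.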
3,210 | 12,260,951,811 | IssuesEvent | 2020-05-06 19:12:06 | LightForm-group/matflow | https://api.github.com/repos/LightForm-group/matflow | opened | Break up the mega-function `get_element_idx` into more-unit-testable chunks | maintainability | In `matflow.models.construction`. | True | Break up the mega-function `get_element_idx` into more-unit-testable chunks - In `matflow.models.construction`. | main | break up the mega function get element idx into more unit testable chunks in matflow models construction | 1 |
470,934 | 13,549,865,541 | IssuesEvent | 2020-09-17 08:48:54 | kubernetes-sigs/metrics-server | https://api.github.com/repos/kubernetes-sigs/metrics-server | closed | x509: certificate signed by unknown authority (--kubelet-insecure-tls didn't help) | lifecycle/rotten priority/awaiting-more-evidence triage/support | Hello, on bare-metal Kubernetes version 1.16.2, I'm trying to install metrics-server v0.3.6 and I get these errors in the metrics-server logs
```
E0320 08:04:20.914963 1 authentication.go:65] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "front-proxy-ca"), x509: certificate signed by unknown authority]
```
--kubelet-insecure-tls parameter didn't help
<!-- DO NOT EDIT BELOW THIS LINE -->
/kind bug | 1.0 | x509: certificate signed by unknown authority (--kubelet-insecure-tls didn't help) - Hello, on bare-metal Kubernetes version 1.16.2, I'm trying to install metrics-server v0.3.6 and I get these errors in the metrics-server logs
```
E0320 08:04:20.914963 1 authentication.go:65] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "front-proxy-ca"), x509: certificate signed by unknown authority]
```
--kubelet-insecure-tls parameter didn't help
<!-- DO NOT EDIT BELOW THIS LINE -->
/kind bug | non_main | certificate signed by unknown authority kubelet insecure tls didn t help hello on bare metal kubernetes version i m trying to install metrics server and i get these errors in the metrics server logs authentication go unable to authenticate the request due to an error kubelet insecure tls parameter didn t help kind bug | 0 |
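A note on why the flag "didn't help" here: `--kubelet-insecure-tls` only relaxes TLS verification on the metrics-server → kubelet connection, whereas the logged error is metrics-server rejecting the client certificate presented by the kube-apiserver's aggregation layer (the `front-proxy-ca` chain), so the cure usually lies in consistent `--requestheader-client-ca-file` / `--proxy-client-cert-file` settings on the apiserver. For completeness, a hedged sketch of where the kubelet flags belong in the v0.3.6 Deployment — the container layout is assumed from the upstream manifest of that release, and unrelated default args are omitted:

```yaml
# metrics-server container spec (v0.3.6 layout assumed; other args omitted).
# These flags only affect metrics-server -> kubelet TLS; they cannot fix the
# front-proxy-ca authentication error quoted above.
containers:
  - name: metrics-server
    image: k8s.gcr.io/metrics-server-amd64:v0.3.6
    args:
      - --kubelet-insecure-tls
      - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS
```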
5,074 | 25,960,183,783 | IssuesEvent | 2022-12-18 19:59:46 | cran-task-views/Hydrology | https://api.github.com/repos/cran-task-views/Hydrology | closed | Package 'metScanR' has been archived on CRAN for more than 60 days | maintainer-contacted | Package [metScanR](https://CRAN.R-project.org/package=metScanR) is currently listed in CRAN Task View [Hydrology](https://CRAN.R-project.org/view=Hydrology) but the package has actually been archived for more than 60 days on CRAN. Often this indicates that the package is currently not sufficiently actively maintained and should be excluded from the task view.
Alternatively, you might also consider reaching out to the authors of the package and encourage (or even help) them to bring the package back to CRAN.
In any case, the situation should be resolved in the next four weeks. If the package does not seem to be brought back to CRAN, please exclude it from the task view. | True | Package 'metScanR' has been archived on CRAN for more than 60 days - Package [metScanR](https://CRAN.R-project.org/package=metScanR) is currently listed in CRAN Task View [Hydrology](https://CRAN.R-project.org/view=Hydrology) but the package has actually been archived for more than 60 days on CRAN. Often this indicates that the package is currently not sufficiently actively maintained and should be excluded from the task view.
Alternatively, you might also consider reaching out to the authors of the package and encourage (or even help) them to bring the package back to CRAN.
In any case, the situation should be resolved in the next four weeks. If the package does not seem to be brought back to CRAN, please exclude it from the task view. | main | package metscanr has been archived on cran for more than days package is currently listed in cran task view but the package has actually been archived for more than days on cran often this indicates that the package is currently not sufficiently actively maintained and should be excluded from the task view alternatively you might also consider reaching out to the authors of the package and encourage or even help them to bring the package back to cran in any case the situation should be resolved in the next four weeks if the package does not seem to be brought back to cran please exclude it from the task view | 1 |
647 | 4,160,081,792 | IssuesEvent | 2016-06-17 11:49:44 | Particular/NServiceBus.SqlServer | https://api.github.com/repos/Particular/NServiceBus.SqlServer | opened | Canonical representation of queue address within the transport | Tag: Maintainer Prio Type: Refactoring | The queue table address consists of a table name and a schema name. As it stands today, both components are passed around in quoted or unquoted form, depending on requirements.
The goal of this refactoring is to settle on a canonical representation of the queue table address within the transport and be explicit about formatting it properly on the edges of the transport, e.g. when creating SQL statements or passing the address to connection factory callbacks, etc.
/cc @tmasternak | True | Canonical representation of queue address within the transport - The queue table address consists of a table name and a schema name. As it stands today, both components are passed around in quoted or unquoted form, depending on requirements.
The goal of this refactoring is to settle on a canonical representation of the queue table address within the transport and be explicit about formatting it properly on the edges of the transport, e.g. when creating SQL statements or passing the address to connection factory callbacks, etc.
/cc @tmasternak | main | canonical representation of queue address within the transport the queue table address consists of a table name and a schema name as it stands today both components are passed around in quoted or unquoted form depending on requirements the goal of this refactoring is to settle on a canonical representation of the queue table address within the transport and be explicit about formatting it properly on the edges of the transport e g when creating sql statements or passing the address to connection factory callbacks etc cc tmasternak | 1 |