Column schema (from the viewer header):

| Column | Dtype | Values / Lengths |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 5 – 112 |
| repo_url | string | length 34 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 757 |
| labels | string | length 4 – 664 |
| body | string | length 3 – 261k |
| index | string | 10 classes |
| text_combine | string | length 96 – 261k |
| label | string | 2 classes |
| text | string | length 96 – 232k |
| binary_label | int64 | 0 – 1 |
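As a sanity check, the schema above can be mirrored on a small in-memory sample. The dataset's storage format and file name are not given in this preview, so the frame is built inline from the two rows shown below; columns whose sample values are truncated or garbled in the preview (`body`, `text_combine`, `text`) are omitted.

```python
import pandas as pd

# Two-row sample mirroring the schema table above. All values are taken
# from the preview rows; this is an illustrative fragment, not the full
# dataset.
df = pd.DataFrame(
    {
        "id": [2_610_091_940.0, 15_107_876_192.0],   # float64 per the schema
        "type": ["IssuesEvent", "IssuesEvent"],       # 1 class
        "created_at": ["2015-02-26 18:27:47", "2021-02-08 15:55:42"],
        "repo": ["chrsmith/dsdsdaadf", "openzfs/zfs"],
        "action": ["opened", "opened"],
        "label": ["defect", "defect"],
    }
)

# created_at is a fixed-width 19-character timestamp per the schema.
assert (df["created_at"].str.len() == 19).all()
assert df["id"].dtype == "float64"
print(df.dtypes)
```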
**Row 4,340**

- id: 2,610,091,940
- type: IssuesEvent
- created_at: 2015-02-26 18:27:47
- repo: chrsmith/dsdsdaadf
- repo_url: https://api.github.com/repos/chrsmith/dsdsdaadf
- action: opened
- title: 深圳粉刺祛除方法 ("Shenzhen acne removal method")
- labels: auto-migrated Priority-Medium Type-Defect
- body:

  ```
  深圳粉刺祛除方法【深圳韩方科颜全国热线400-869-1818,24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。
  ```

  (Chinese-language spam: an advertisement for the "Shenzhen Hanfang Keyan" acne-removal clinic chain, giving its hotline and QQ number and touting a "no-rebound" acne treatment and a "deluxe color-light" device.)

  -----
  Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:53
- index: 1.0
- text_combine: title + body (verbatim repeat of the two fields above)
- label: defect
- text: lowercased copy of text_combine with digits and most punctuation stripped
- binary_label: 1
**Row 56,473**

- id: 15,107,876,192
- type: IssuesEvent
- created_at: 2021-02-08 15:55:42
- repo: openzfs/zfs
- repo_url: https://api.github.com/repos/openzfs/zfs
- action: opened
- title: Kernel panic on list_del corruption, arc_prune
- labels: Status: Triage Needed Type: Defect
- body:
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | CentOS
Distribution Version | 7.8
Linux Kernel | 3.10.0-1127.19.1.el7.x86_64
Architecture |
ZFS Version | v0.8.5-1
SPL Version | v0.8.5-1
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
Kernel panic on zfs causes system lockup
### Describe how to reproduce the problem
Not sure but attaching full stack trace. System locked up and needed a reboot.
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
```
Feb 6 04:11:34 kernel: [1196532.369002] ------------[ cut here ]------------
Feb 6 04:11:34 kernel: [1196532.369022] WARNING: CPU: 10 PID: 53557 at lib/list_debug.c:53 __list_del_entry+0x63/0xd0
Feb 6 04:11:34 kernel: [1196532.369027] list_del corruption, ffff947ed76217d0->next is LIST_POISON1 (dead000000000100)
Feb 6 04:11:34 kernel: [1196532.369031] Modules linked in: nfsv3 nfsd nfs_acl mgc(OE) lustre(OE) lmv(OE) mdc(OE) fid(OE) osc(OE) lov(OE) fld(OE) ptlrpc(OE) obdclass(OE) crct10dif_generic ksocklnd(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache 8021q garp mrp stp llc ipt_REJECT nf_reject_ipv4 nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack libcrc32c iptable_filter sch_fq tcp_htcp zfs(POE) zunicode(POE) zlua(POE) zcommon(POE) znvpair(POE) zavl(POE) icp(POE) spl(OE) sunrpc dm_mirror dm_region_hash dm_log dm_mod sg amd64_edac_mod edac_mce_amd kvm_amd kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd ttm joydev ahci drm_kms_helper pcspkr libahci syscopyarea sysfillrect sysimgblt libata fb_sys_fops drm ccp k10temp drm_panel_orientation_quirks i2c_piix4 nfit ipmi_si ipmi_devintf libnvdimm ipmi_msghandler pinctrl_amd i2c_designware_platform i2c_designware_core acpi_cpufreq binfmt_misc ip_tables i40e igb i2c_algo_bit dca ptp pps_core bnxt_en devlink sd_mod crc_t10dif crct10dif_common
Feb 6 04:11:34 kernel: [1196532.369157] CPU: 10 PID: 53557 Comm: arc_prune Tainted: P OE ------------ 3.10.0-1127.19.1.el7.x86_64 #1
Feb 6 04:11:34 kernel: [1196532.369160] Hardware name: Supermicro AS -2023US-TR4/H11DSU-iN, BIOS 1.1c 10/04/2018
Feb 6 04:11:34 kernel: [1196532.369163] Call Trace:
Feb 6 04:11:34 kernel: [1196532.369178] [<ffffffffb1d7ffa5>] dump_stack+0x19/0x1b
Feb 6 04:11:34 kernel: ------------[ cut here ]------------
Feb 6 04:11:34 kernel: [1196532.369187] [<ffffffffb169bd18>] __warn+0xd8/0x100
Feb 6 04:11:34 kernel: [1196532.369193] [<ffffffffb169bd9f>] warn_slowpath_fmt+0x5f/0x80
Feb 6 04:11:34 kernel: [1196532.369209] [<ffffffffc06f3820>] ? spl_kmem_zalloc+0xe0/0x140 [spl]
Feb 6 04:11:34 kernel: [1196532.369214] [<ffffffffb19a4d63>] __list_del_entry+0x63/0xd0
Feb 6 04:11:34 kernel: [1196532.369222] [<ffffffffb186640f>] __dentry_kill+0x7f/0x1d0
Feb 6 04:11:34 kernel: WARNING: CPU: 10 PID: 53557 at lib/list_debug.c:53 __list_del_entry+0x63/0xd0
Feb 6 04:11:34 kernel: [1196532.369226] [<ffffffffb1866b85>] dput+0xb5/0x1a0
Feb 6 04:11:34 kernel: [1196532.369231] [<ffffffffb1866f96>] d_prune_aliases+0xb6/0xf0
Feb 6 04:11:34 kernel: [1196532.369327] [<ffffffffc11a28b3>] zfs_prune+0x253/0x2a0 [zfs]
Feb 6 04:11:34 kernel: [1196532.369407] [<ffffffffc11d02f5>] zpl_prune_sb+0x35/0x50 [zfs]
Feb 6 04:11:34 kernel: [1196532.369457] [<ffffffffc10b97b2>] arc_prune_task+0x22/0x40 [zfs]
Feb 6 04:11:34 kernel: [1196532.369471] [<ffffffffc06f8aac>] taskq_thread+0x2ac/0x4f0 [spl]
Feb 6 04:11:34 kernel: [1196532.369480] [<ffffffffb16db990>] ? wake_up_state+0x20/0x20
Feb 6 04:11:34 kernel: [1196532.369493] [<ffffffffc06f8800>] ? taskq_thread_spawn+0x60/0x60 [spl]
Feb 6 04:11:34 kernel: [1196532.369500] [<ffffffffb16c6691>] kthread+0xd1/0xe0
Feb 6 04:11:34 kernel: [1196532.369505] [<ffffffffb16c65c0>] ? insert_kthread_work+0x40/0x40
Feb 6 04:11:34 kernel: [1196532.369513] [<ffffffffb1d92d24>] ret_from_fork_nospec_begin+0xe/0x21
Feb 6 04:11:34 kernel: [1196532.369517] [<ffffffffb16c65c0>] ? insert_kthread_work+0x40/0x40
Feb 6 04:11:34 kernel: [1196532.369521] ---[ end trace 6653de515802d12d ]---
Feb 6 04:11:34 kernel: list_del corruption, ffff947ed76217d0->next is LIST_POISON1 (dead000000000100)
Feb 6 04:11:34 kernel: [1196532.369537] general protection fault: 0000 [#1] SMP
Feb 6 04:11:34 kernel: [1196532.374750] Modules linked in: nfsv3 nfsd nfs_acl mgc(OE) lustre(OE) lmv(OE) mdc(OE) fid(OE) osc(OE) lov(OE) fld(OE) ptlrpc(OE) obdclass(OE) crct10dif_generic ksocklnd(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache 8021q garp mrp stp llc ipt_REJECT nf_reject_ipv4 nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack libcrc32c iptable_filter sch_fq tcp_htcp zfs(POE) zunicode(POE) zlua(POE) zcommon(POE) znvpair(POE) zavl(POE) icp(POE) spl(OE) sunrpc dm_mirror dm_region_hash dm_log dm_mod sg amd64_edac_mod edac_mce_amd kvm_amd kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd ttm joydev ahci drm_kms_helper pcspkr libahci syscopyarea sysfillrect sysimgblt libata fb_sys_fops drm ccp k10temp drm_panel_orientation_quirks i2c_piix4 nfit ipmi_si ipmi_devintf libnvdimm ipmi_msghandler pinctrl_amd i2c_designware_platform i2c_designware_core acpi_cpufreq binfmt_misc ip_tables i40e igb i2c_algo_bit dca ptp pps_core bnxt_en devlink sd_mod crc_t10dif crct10dif_common
Feb 6 04:11:34 kernel: [1196532.473853] CPU: 10 PID: 53557 Comm: arc_prune Tainted: P W OE ------------ 3.10.0-1127.19.1.el7.x86_64 #1
Feb 6 04:11:34 kernel: [1196532.484667] Hardware name: Supermicro AS -2023US-TR4/H11DSU-iN, BIOS 1.1c 10/04/2018
Feb 6 04:11:34 kernel: [1196532.492617] task: ffff948792188000 ti: ffff948aca928000 task.ti: ffff948aca928000
Feb 6 04:11:34 kernel: [1196532.500591] RIP: 0010:[<ffffffffb1866423>] [<ffffffffb1866423>] __dentry_kill+0x93/0x1d0
Feb 6 04:11:34 kernel: [1196532.509293] RSP: 0018:ffff948aca92bc28 EFLAGS: 00010286
Feb 6 04:11:34 kernel: [1196532.514948] RAX: dead000000000100 RBX: ffff947ed7621740 RCX: 0000000000000006
Feb 6 04:11:34 kernel: [1196532.522573] RDX: 00000000000000a0 RSI: 0000000000000000 RDI: 0000000000000009
Feb 6 04:11:34 kernel: [1196532.530199] RBP: ffff948aca92bc40 R08: 000000000000000a R09: 0000000000000000
Feb 6 04:11:34 kernel: [1196532.537827] R10: 0000000000000bba R11: ffff948aca92b85e R12: 0000000000000000
Feb 6 04:11:34 kernel: [1196532.545446] R13: ffff947ed7621798 R14: 0000000000000000 R15: ffff94804f118000
Feb 6 04:11:34 kernel: [1196532.553066] FS: 00007f21b55ec880(0000) GS:ffff94844fc80000(0000) knlGS:0000000000000000
Feb 6 04:11:34 kernel: [1196532.561649] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Feb 6 04:11:34 kernel: [1196532.567741] CR2: 00007f21b51c8120 CR3: 00000016a298e000 CR4: 00000000003407e0
Feb 6 04:11:34 kernel: [1196532.575369] Call Trace:
Feb 6 04:11:34 kernel: [1196532.578163] [<ffffffffb1866b85>] dput+0xb5/0x1a0
Feb 6 04:11:34 kernel: [1196532.583208] [<ffffffffb1866f96>] d_prune_aliases+0xb6/0xf0
Feb 6 04:11:34 kernel: [1196532.589230] [<ffffffffc11a28b3>] zfs_prune+0x253/0x2a0 [zfs]
Feb 6 04:11:34 kernel: [1196532.595390] [<ffffffffc11d02f5>] zpl_prune_sb+0x35/0x50 [zfs]
Feb 6 04:11:34 kernel: [1196532.601613] [<ffffffffc10b97b2>] arc_prune_task+0x22/0x40 [zfs]
Feb 6 04:11:34 kernel: [1196532.607978] [<ffffffffc06f8aac>] taskq_thread+0x2ac/0x4f0 [spl]
Feb 6 04:11:34 kernel: [1196532.614337] [<ffffffffb16db990>] ? wake_up_state+0x20/0x20
Feb 6 04:11:34 kernel: [1196532.620261] [<ffffffffc06f8800>] ? taskq_thread_spawn+0x60/0x60 [spl]
Feb 6 04:11:34 kernel: [1196532.627136] [<ffffffffb16c6691>] kthread+0xd1/0xe0
Feb 6 04:11:34 kernel: [1196532.632354] [<ffffffffb16c65c0>] ? insert_kthread_work+0x40/0x40
Feb 6 04:11:34 kernel: [1196532.638797] [<ffffffffb1d92d24>] ret_from_fork_nospec_begin+0xe/0x21
Feb 6 04:11:34 kernel: [1196532.645590] [<ffffffffb16c65c0>] ? insert_kthread_work+0x40/0x40
Feb 6 04:11:34 kernel: [1196532.652028] Code: 10 00 48 8d bb 90 00 00 00 48 3b bb 90 00 00 00 74 26 e8 f1 e8 13 00 48 8b 83 90 00 00 00 49 8d 94 24 a0 00 00 00 48 39 d0 74 0d <f6> 80 73 ff ff ff 20 0f 85 11 01 00 00 4d 85 e4 74 0c 49 8d 7c
Feb 6 04:11:34 kernel: [1196532.673176] RIP [<ffffffffb1866423>] __dentry_kill+0x93/0x1d0
Feb 6 04:11:34 kernel: [1196532.679383] RSP <ffff948aca92bc28>
Feb 6 04:11:34 kernel: [1196532.683708] ---[ end trace 6653de515802d12e ]---
```
- index: 1.0
- text_combine: title + body (verbatim repeat of the two fields above)
|
defect
|
kernle panic on list del corruption arc prune thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name centos distribution version linux kernel architecture zfs version spl version commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing kernel panic on zfs causes system lockup describe how to reproduce the problem not sure but attaching full stack trace system locked up and needed a reboot include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with feb kernel feb kernel warning cpu pid at lib list debug c list del entry feb kernel list del corruption next is list feb kernel modules linked in nfsd nfs acl mgc oe lustre oe lmv oe mdc oe fid oe osc oe lov oe fld oe ptlrpc oe obdclass oe generic ksocklnd oe lnet oe libcfs oe rpcsec gss auth rpcgss dns resolver nfs lockd grace fscache garp mrp stp llc ipt reject nf reject nf conntrack nf defrag xt conntrack nf conntrack iptable filter sch fq tcp htcp zfs poe zunicode poe zlua poe zcommon poe znvpair poe zavl poe icp poe spl oe sunrpc dm mirror dm region hash dm log dm mod sg edac mod edac mce amd kvm amd kvm irqbypass pclmul pclmul intel ghash clmulni intel aesni intel lrw glue helper ablk helper cryptd ttm joydev ahci drm kms helper pcspkr libahci syscopyarea sysfillrect sysimgblt libata fb sys fops drm ccp drm panel orientation quirks nfit ipmi si ipmi devintf libnvdimm ipmi msghandler pinctrl amd designware platform designware core acpi cpufreq binfmt misc ip tables igb algo bit dca ptp pps 
core bnxt en devlink sd mod crc common feb kernel cpu pid comm arc prune tainted p oe feb kernel hardware name supermicro as in bios feb kernel call trace feb kernel dump stack feb kernel feb kernel warn feb kernel warn slowpath fmt feb kernel spl kmem zalloc feb kernel list del entry feb kernel dentry kill feb kernel warning cpu pid at lib list debug c list del entry feb kernel dput feb kernel d prune aliases feb kernel zfs prune feb kernel zpl prune sb feb kernel arc prune task feb kernel taskq thread feb kernel wake up state feb kernel taskq thread spawn feb kernel kthread feb kernel insert kthread work feb kernel ret from fork nospec begin feb kernel insert kthread work feb kernel feb kernel list del corruption next is list feb kernel general protection fault smp feb kernel modules linked in nfsd nfs acl mgc oe lustre oe lmv oe mdc oe fid oe osc oe lov oe fld oe ptlrpc oe obdclass oe generic ksocklnd oe lnet oe libcfs oe rpcsec gss auth rpcgss dns resolver nfs lockd grace fscache garp mrp stp llc ipt reject nf reject nf conntrack nf defrag xt conntrack nf conntrack iptable filter sch fq tcp htcp zfs poe zunicode poe zlua poe zcommon poe znvpair poe zavl poe icp poe spl oe sunrpc dm mirror dm region hash dm log dm mod sg edac mod edac mce amd kvm amd kvm irqbypass pclmul pclmul intel ghash clmulni intel aesni intel lrw glue helper ablk helper cryptd ttm joydev ahci drm kms helper pcspkr libahci syscopyarea sysfillrect sysimgblt libata fb sys fops drm ccp drm panel orientation quirks nfit ipmi si ipmi devintf libnvdimm ipmi msghandler pinctrl amd designware platform designware core acpi cpufreq binfmt misc ip tables igb algo bit dca ptp pps core bnxt en devlink sd mod crc common feb kernel cpu pid comm arc prune tainted p w oe feb kernel hardware name supermicro as in bios feb kernel task ti task ti feb kernel rip dentry kill feb kernel rsp eflags feb kernel rax rbx rcx feb kernel rdx rsi rdi feb kernel rbp feb kernel feb kernel feb kernel fs gs knlgs feb kernel 
cs ds es feb kernel feb kernel call trace feb kernel dput feb kernel d prune aliases feb kernel zfs prune feb kernel zpl prune sb feb kernel arc prune task feb kernel taskq thread feb kernel wake up state feb kernel taskq thread spawn feb kernel kthread feb kernel insert kthread work feb kernel ret from fork nospec begin feb kernel insert kthread work feb kernel code bb bb ff ff ff feb kernel rip dentry kill feb kernel rsp feb kernel feb kernel modules linked in nfsd nfs acl mgc oe lustre oe lmv oe mdc oe fid oe osc oe lov oe fld oe ptlrpc oe obdclass oe generic ksocklnd oe lnet oe libcfs oe rpcsec gss auth rpcgss dns resolver nfs lockd grace fscache garp mrp stp llc ipt reject nf reject nf conntrack nf defrag xt conntrack nf conntrack iptable filter sch fq tcp htcp zfs poe zunicode poe zlua poe zcommon poe znvpair poe zavl poe icp poe spl oe sunrpc dm mirror dm region hash dm log dm mod sg edac mod edac mce amd kvm amd kvm irqbypass pclmul pclmul intel ghash clmulni intel aesni intel lrw glue helper ablk helper cryptd ttm joydev ahci drm kms helper pcspkr libahci syscopyarea sysfillrect sysimgblt libata fb sys fops feb kernel drm ccp drm panel orientation quirks nfit ipmi si ipmi devintf libnvdimm ipmi msghandler pinctrl amd designware platform designware core acpi cpufreq binfmt misc ip tables igb algo bit dca ptp pps core bnxt en devlink sd mod crc common feb kernel cpu pid comm arc prune tainted p oe feb kernel hardware name supermicro as in bios feb kernel call trace feb kernel dump stack feb kernel warn feb kernel warn slowpath fmt feb kernel spl kmem zalloc feb kernel list del entry feb kernel dentry kill feb kernel dput feb kernel d prune aliases feb kernel zfs prune feb kernel zpl prune sb feb kernel arc prune task feb kernel taskq thread feb kernel wake up state feb kernel taskq thread spawn feb kernel kthread feb kernel insert kthread work feb kernel ret from fork nospec begin feb kernel insert kthread work feb kernel feb kernel general protection fault 
smp feb kernel modules linked in nfsd nfs acl mgc oe lustre oe lmv oe mdc oe fid oe osc oe lov oe fld oe ptlrpc oe obdclass oe generic ksocklnd oe lnet oe libcfs oe rpcsec gss auth rpcgss dns resolver nfs lockd grace fscache garp mrp stp llc ipt reject nf reject nf conntrack nf defrag xt conntrack nf conntrack iptable filter sch fq tcp htcp zfs poe zunicode poe zlua poe zcommon poe znvpair poe zavl poe icp poe spl oe sunrpc dm mirror dm region hash dm log dm mod sg edac mod edac mce amd kvm amd kvm irqbypass pclmul pclmul intel ghash clmulni intel aesni intel lrw glue helper ablk helper cryptd ttm joydev ahci drm kms helper pcspkr libahci syscopyarea sysfillrect sysimgblt libata fb sys fops feb kernel drm ccp drm panel orientation quirks nfit ipmi si ipmi devintf libnvdimm ipmi msghandler pinctrl amd designware platform designware core acpi cpufreq binfmt misc ip tables igb algo bit dca ptp pps core bnxt en devlink sd mod crc common feb kernel cpu pid comm arc prune tainted p w oe feb kernel hardware name supermicro as in bios feb kernel task ti task ti feb kernel rip dentry kill feb kernel rsp eflags feb kernel rax rbx rcx feb kernel rdx rsi rdi feb kernel rbp feb kernel feb kernel feb kernel fs gs knlgs feb kernel cs ds es feb kernel feb kernel call trace feb kernel dput feb kernel d prune aliases feb kernel zfs prune feb kernel zpl prune sb feb kernel arc prune task feb kernel taskq thread feb kernel wake up state feb kernel taskq thread spawn feb kernel kthread feb kernel insert kthread work feb kernel ret from fork nospec begin feb kernel insert kthread work feb kernel code bb bb ff ff ff feb kernel rip dentry kill feb kernel rsp feb kernel
| 1
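The record above ends with a `list_del corruption` warning firing inside `__dentry_kill`. As an illustrative, language-agnostic sketch (hypothetical Python, not the kernel's actual C implementation), the poison-pointer technique the kernel uses to catch a double unlink works like this:

```python
# Illustrative sketch of the kernel's list-del poisoning idea (hypothetical
# Python analogue, not kernel code): after an entry is unlinked, its link
# fields are set to distinctive "poison" values so a second, buggy unlink
# is detected instead of silently corrupting neighbouring entries.

LIST_POISON1 = "poison-next"   # stands in for a value like 0xdead000000000100
LIST_POISON2 = "poison-prev"   # stands in for a value like 0xdead000000000200

class Entry:
    def __init__(self, value):
        self.value = value
        self.prev = self
        self.next = self

def list_add(new, head):
    """Insert `new` right after `head` in a circular doubly-linked list."""
    new.next = head.next
    new.prev = head
    head.next.prev = new
    head.next = new

def list_del(entry):
    """Unlink `entry` and poison its pointers; detect a double unlink."""
    if entry.next is LIST_POISON1 or entry.prev is LIST_POISON2:
        raise RuntimeError("list_del corruption: entry already unlinked")
    entry.prev.next = entry.next
    entry.next.prev = entry.prev
    entry.next = LIST_POISON1
    entry.prev = LIST_POISON2

head = Entry("head")
e = Entry("dentry")
list_add(e, head)
list_del(e)            # first unlink: fine, pointers are now poisoned
try:
    list_del(e)        # second unlink: caught, like the WARN in the trace
except RuntimeError as err:
    print(err)
```

A second `list_del` on a poisoned entry is caught up front, which is roughly what the `list_del corruption. next is LIST_POISON1` message in the log is reporting.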
|
401,691
| 11,795,966,918
|
IssuesEvent
|
2020-03-18 09:56:08
|
softeng-701-group-5/softeng-701-assignment-1
|
https://api.github.com/repos/softeng-701-group-5/softeng-701-assignment-1
|
closed
|
Write a test suite for the HackerNews API
|
APPROVED :+1: HIGH PRIORITY enhancement
|
**User Story**
As a developer, I'd like to test the functionality of the HackerNews API, so that I can ensure that behaviour is as expected when making changes.
**Acceptance Criteria**
* HackerNewsFeedProviderTest.java configured with appropriate test cases
* All test cases pass
**Notes**
Test cases will include retrieving the home feed.
|
1.0
|
Write a test suite for the HackerNews API - **User Story**
As a developer, I'd like to test the functionality of the HackerNews API, so that I can ensure that behaviour is as expected when making changes.
**Acceptance Criteria**
* HackerNewsFeedProviderTest.java configured with appropriate test cases
* All test cases pass
**Notes**
Test cases will include retrieving the home feed.
|
non_defect
|
write a test suite for the hackernews api user story as a developer i d like to test the functionality of the hackernews api so that i can ensure that behaviour is as expected when making changes acceptance criteria hackernewsfeedprovidertest java configured with appropriate test cases all test cases pass notes test cases will include retrieving the home feed
| 0
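The user story above asks for a test suite around a feed provider. A minimal sketch of such a test, written in Python rather than the Java `HackerNewsFeedProviderTest.java` the story names (the class and method names below are invented for illustration, and the provider is faked so no live API is hit):

```python
# Hypothetical sketch of a feed-provider test. All names are illustrative;
# the real project tests a Java class. Faking the provider lets the test
# check behaviour (e.g. retrieving the home feed) without network access.

class FakeHackerNewsFeedProvider:
    """Stands in for the real provider; returns canned feed items."""
    def __init__(self, items):
        self._items = items

    def get_home_feed(self, limit=30):
        return self._items[:limit]

def test_home_feed_respects_limit():
    provider = FakeHackerNewsFeedProvider([{"id": i} for i in range(50)])
    feed = provider.get_home_feed(limit=10)
    assert len(feed) == 10
    assert feed[0] == {"id": 0}

test_home_feed_respects_limit()
print("ok")
```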
|
144,600
| 19,292,281,752
|
IssuesEvent
|
2021-12-12 01:24:18
|
shukia/ServletAlfrescoRepository
|
https://api.github.com/repos/shukia/ServletAlfrescoRepository
|
opened
|
CVE-2021-43797 (Medium) detected in netty-codec-http-4.1.32.Final.jar
|
security vulnerability
|
## CVE-2021-43797 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.32.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p>
<p>Path to dependency file: ServletAlfrescoRepository/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-codec-http/4.1.32.Final/netty-codec-http-4.1.32.Final.jar</p>
<p>
Dependency Hierarchy:
- camel-amqp-2.24.2.jar (Root Library)
- :x: **netty-codec-http-4.1.32.Final.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. Netty prior to version 4.1.7.1.Final skips control chars when they are present at the beginning / end of the header name. It should instead fail fast as these are not allowed by the spec and could lead to HTTP request smuggling. Failing to do the validation might cause netty to "sanitize" header names before it forward these to another remote system when used as proxy. This remote system can't see the invalid usage anymore, and therefore does not do the validation itself. Users should upgrade to version 4.1.7.1.Final to receive a patch.
<p>Publish Date: 2021-12-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43797>CVE-2021-43797</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-wx5j-54mm-rqqq">https://github.com/advisories/GHSA-wx5j-54mm-rqqq</a></p>
<p>Release Date: 2021-12-09</p>
<p>Fix Resolution: io.netty:netty-codec-http:4.1.71.Final</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-43797 (Medium) detected in netty-codec-http-4.1.32.Final.jar - ## CVE-2021-43797 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.32.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p>
<p>Path to dependency file: ServletAlfrescoRepository/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-codec-http/4.1.32.Final/netty-codec-http-4.1.32.Final.jar</p>
<p>
Dependency Hierarchy:
- camel-amqp-2.24.2.jar (Root Library)
- :x: **netty-codec-http-4.1.32.Final.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. Netty prior to version 4.1.7.1.Final skips control chars when they are present at the beginning / end of the header name. It should instead fail fast as these are not allowed by the spec and could lead to HTTP request smuggling. Failing to do the validation might cause netty to "sanitize" header names before it forward these to another remote system when used as proxy. This remote system can't see the invalid usage anymore, and therefore does not do the validation itself. Users should upgrade to version 4.1.7.1.Final to receive a patch.
<p>Publish Date: 2021-12-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43797>CVE-2021-43797</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-wx5j-54mm-rqqq">https://github.com/advisories/GHSA-wx5j-54mm-rqqq</a></p>
<p>Release Date: 2021-12-09</p>
<p>Fix Resolution: io.netty:netty-codec-http:4.1.71.Final</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in netty codec http final jar cve medium severity vulnerability vulnerable library netty codec http final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file servletalfrescorepository pom xml path to vulnerable library home wss scanner repository io netty netty codec http final netty codec http final jar dependency hierarchy camel amqp jar root library x netty codec http final jar vulnerable library found in base branch master vulnerability details netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers clients netty prior to version final skips control chars when they are present at the beginning end of the header name it should instead fail fast as these are not allowed by the spec and could lead to http request smuggling failing to do the validation might cause netty to sanitize header names before it forward these to another remote system when used as proxy this remote system can t see the invalid usage anymore and therefore does not do the validation itself users should upgrade to version final to receive a patch publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty codec http final step up your open source security game with whitesource
| 0
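CVE-2021-43797 above is about header names with control characters at the beginning or end being silently "sanitized" instead of rejected, which lets a proxy and an origin server disagree about the message. A rough sketch of the fail-fast validation the fix mandates (illustrative Python, not Netty's actual Java code):

```python
# Illustrative sketch of fail-fast HTTP header-name validation (not Netty's
# implementation): a name containing any control character is rejected
# outright rather than stripped — stripping is what allows two systems to
# parse the same bytes differently and enables request smuggling.

def validate_header_name(name: str) -> str:
    if not name:
        raise ValueError("empty header name")
    for ch in name:
        # RFC 7230 tokens exclude CTLs (0x00-0x1F, 0x7F) and whitespace.
        if ord(ch) <= 0x20 or ord(ch) == 0x7F:
            raise ValueError(f"illegal char {ord(ch):#04x} in header name")
    return name

validate_header_name("Content-Length")            # accepted
try:
    validate_header_name("\x01Content-Length")    # leading CTL: rejected
except ValueError as err:
    print(err)
```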
|
82,089
| 10,221,386,156
|
IssuesEvent
|
2019-08-16 01:25:37
|
quicwg/base-drafts
|
https://api.github.com/repos/quicwg/base-drafts
|
closed
|
Articulate principles for definition of error codes
|
-transport design has-consensus
|
We don't really have any real principles that we agree on for deciding what error codes we are describing.
Proposal:
1. if the error carries distinct semantics (like stream rejection in HTTP), then it gets a new error code
2. if the error is frequent or particularly significant, then it gets a new error code
3. otherwise, target a more generic error code that identifies the broad area of the protocol
4. finally, if there is no more specific applicable error code, use PROTOCOL_VIOLATION
|
1.0
|
Articulate principles for definition of error codes - We don't really have any real principles that we agree on for deciding what error codes we are describing.
Proposal:
1. if the error carries distinct semantics (like stream rejection in HTTP), then it gets a new error code
2. if the error is frequent or particularly significant, then it gets a new error code
3. otherwise, target a more generic error code that identifies the broad area of the protocol
4. finally, if there is no more specific applicable error code, use PROTOCOL_VIOLATION
|
non_defect
|
articulate principles for definition of error codes we don t really have any real principles that we agree on for deciding what error codes we are describing proposal if the error carries distinct semantics like stream rejection in http then it gets a new error code if the error is frequent or particularly significant then it gets a new error code otherwise target a more generic error code that identifies the broad area of the protocol finally if there is no more specific applicable error code use protocol violation
| 0
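The four principles in the record above form a preference order. A toy encoding of that decision procedure (hypothetical Python; in the spec this is editorial judgement by the working group, not code):

```python
# Toy sketch of the proposal's preference order for choosing an error code.
# Hypothetical: the real decision is made by spec authors, and the argument
# names below are invented for illustration.

PROTOCOL_VIOLATION = "PROTOCOL_VIOLATION"

def pick_error_code(distinct_semantics=None, frequent_or_significant=None,
                    generic_area_code=None):
    if distinct_semantics:          # 1. distinct semantics -> its own code
        return distinct_semantics
    if frequent_or_significant:     # 2. frequent/significant -> its own code
        return frequent_or_significant
    if generic_area_code:           # 3. otherwise a broad-area code
        return generic_area_code
    return PROTOCOL_VIOLATION       # 4. final fallback

print(pick_error_code(distinct_semantics="HTTP_STREAM_REJECTED"))
```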
|
61,934
| 17,023,813,247
|
IssuesEvent
|
2021-07-03 03:59:41
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Mapnik problem: building is missing
|
Component: mapnik Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 7.11pm, Thursday, 9th August 2012]**
Missing buildings are right here, in the center: http://www.openstreetmap.org/?lat=54.887919&lon=83.118719&zoom=18&layers=M
The data for buildings that are missing has been there for 2 years already, so it is not likely that tiles are updated. See attached Potlatch screenshot for data.
Maybe the problem is level=1 key on one of the buildings, dunno
|
1.0
|
Mapnik problem: building is missing - **[Submitted to the original trac issue database at 7.11pm, Thursday, 9th August 2012]**
Missing buildings are right here, in the center: http://www.openstreetmap.org/?lat=54.887919&lon=83.118719&zoom=18&layers=M
The data for buildings that are missing has been there for 2 years already, so it is not likely that tiles are updated. See attached Potlatch screenshot for data.
Maybe the problem is level=1 key on one of the buildings, dunno
|
defect
|
mapnik problem building is missing missing buildings are right here in the center the data for buildings that are missing has been there for years already so it is not likely that tiles are updated see attached potlach screenshot for data maybe the problem is level key on one of the buildings dunno
| 1
|
74,435
| 25,125,434,774
|
IssuesEvent
|
2022-11-09 11:20:13
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
opened
|
Component: TreeTable
|
defect
|
### Describe the bug
Strange behavior of the global filter when there are identical values in different fields **TreeTable**
### Environment
TreeTable:
filterMode="strict"
### Reproducer
https://stackblitz.com/edit/primeng-treetablefilter-demo-weo4q6?file=src/app/app.component.html
### Angular version
14.1.0
### PrimeNG version
14.1.2
### Build / Runtime
Angular CLI App
### Language
TypeScript
### Node version (for AoT issues node --version)
16.14.0
### Browser(s)
Chrome 107.0.5304.107
### Steps to reproduce the behavior
Based on an example from the official documentation, I reproduced the following behavior:
1. I changed the composition of the data so that some fields of one record have the same values:
<html>
<body>
<!--StartFragment-->
500 | test | Folder
-- | -- | --
501 | test | Folder
505 | 505 | Application
test | test | Application
505 | 505 | test
<!--EndFragment-->
</body>
</html>
2. I enter words in the global search:
2.1. "test"
2.2. "folder"
2.3. "50"
### Expected behavior
### 1. When you enter the word "**test**":
There are 3 records in the treetable, in the fields of which this word occurs (not counting the parent):
<html>
<body>
<!--StartFragment-->
500 | test | Folder
-- | -- | --
501 | test | Folder
test | test | Application
505 | 505 | test
<!--EndFragment-->
</body>
</html>
But only one is displayed in the table (not counting the parent)
<html>
<body>
<!--StartFragment-->
500 | test | Folder
-- | -- | --
test | test | Application
<!--EndFragment-->
</body>
</html>
**This is not expected behavior**
-
### 2. When you enter the word "**folder**":
There is 1 record in the treetable, in the fields of which this word occurs (not counting the parent):
<html>
<body>
<!--StartFragment-->
500 | test | Folder
-- | -- | --
501 | test | Folder
<!--EndFragment-->
</body>
</html>
I get the same result in the treetable.
**This is expected behavior**
-
### 3. When you enter the word "**50**":
There are 3 records in the treetable, in the fields of which this word occurs (not counting the parent):
<html>
<body>
<!--StartFragment-->
500 | test | Folder
-- | -- | --
501 | test | Folder
505 | 505 | Application
505 | 505 | test
<!--EndFragment-->
</body>
</html>
But the table shows only two records (not counting the parent):
<html>
<body>
<!--StartFragment-->
500 | test | Folder
-- | -- | --
505 | 505 | Application
505 | 505 | test
<!--EndFragment-->
</body>
</html>
**This is not expected behavior**
-
|
1.0
|
Component: TreeTable - ### Describe the bug
Strange behavior of the global filter when there are identical values in different fields **TreeTable**
### Environment
TreeTable:
filterMode="strict"
### Reproducer
https://stackblitz.com/edit/primeng-treetablefilter-demo-weo4q6?file=src/app/app.component.html
### Angular version
14.1.0
### PrimeNG version
14.1.2
### Build / Runtime
Angular CLI App
### Language
TypeScript
### Node version (for AoT issues node --version)
16.14.0
### Browser(s)
Chrome 107.0.5304.107
### Steps to reproduce the behavior
Based on an example from the official documentation, I reproduced the following behavior:
1. I changed the composition of the data so that some fields of one record have the same values:
<html>
<body>
<!--StartFragment-->
500 | test | Folder
-- | -- | --
501 | test | Folder
505 | 505 | Application
test | test | Application
505 | 505 | test
<!--EndFragment-->
</body>
</html>
2. I enter words in the global search:
2.1. "test"
2.2. "folder"
2.3. "50"
### Expected behavior
### 1. When you enter the word "**test**":
There are 3 records in the treetable, in the fields of which this word occurs (not counting the parent):
<html>
<body>
<!--StartFragment-->
500 | test | Folder
-- | -- | --
501 | test | Folder
test | test | Application
505 | 505 | test
<!--EndFragment-->
</body>
</html>
But only one is displayed in the table (not counting the parent)
<html>
<body>
<!--StartFragment-->
500 | test | Folder
-- | -- | --
test | test | Application
<!--EndFragment-->
</body>
</html>
**This is not expected behavior**
-
### 2. When you enter the word "**folder**":
There is 1 record in the treetable, in the fields of which this word occurs (not counting the parent):
<html>
<body>
<!--StartFragment-->
500 | test | Folder
-- | -- | --
501 | test | Folder
<!--EndFragment-->
</body>
</html>
I get the same result in the treetable.
**This is expected behavior**
-
### 3. When you enter the word "**50**":
There are 3 records in the treetable, in the fields of which this word occurs (not counting the parent):
<html>
<body>
<!--StartFragment-->
500 | test | Folder
-- | -- | --
501 | test | Folder
505 | 505 | Application
505 | 505 | test
<!--EndFragment-->
</body>
</html>
But the table shows only two records (not counting the parent):
<html>
<body>
<!--StartFragment-->
500 | test | Folder
-- | -- | --
505 | 505 | Application
505 | 505 | test
<!--EndFragment-->
</body>
</html>
**This is not expected behavior**
-
|
defect
|
component treetable describe the bug strange behavior of the global filter when there are identical values in different fields treetable environment treetable filtermode strict reproducer angular version primeng version build runtime angular cli app language typescript node version for aot issues node version browser s chrome steps to reproduce the behavior based on an example from the official documentation i reproduced the following behavior i changed the composition of the data so that some fields of one record have the same values test folder test folder application test test application test i enter words in the global search test folder expected behavior when you enter the word test there are records in the treetable in the fields of which this word occurss not counting the parent test folder test folder test test application test but only one is displayed in the table not counting the parent test folder test test application this is not expected behavior when you enter the word folder there are record in the treetable in the fields of which this word occurss not counting the parent test folder test folder i get the same result in the treetable this is expected behavior when you enter the word there are records in the treetable in the fields of which this word occurss not counting the parent test folder test folder application test but the table shows only two records not counting the parent test folder application test this is not expected behavior
| 1
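The TreeTable report above hinges on what a strict global filter should return: every node whose own fields contain the query, plus the ancestors needed to show it, and duplicate values across fields must not hide other matches. A sketch of that expected semantics (illustrative Python over the reporter's data, not PrimeNG's TypeScript implementation):

```python
# Illustrative strict-mode tree filter (hypothetical Python, not PrimeNG
# code): a node is kept if any of its own field values contains the query,
# or if one of its descendants is kept, so matches surface their ancestors.

def filter_tree(nodes, query):
    query = query.lower()
    kept = []
    for node in nodes:
        children = filter_tree(node.get("children", []), query)
        self_match = any(query in str(v).lower()
                         for v in node.get("data", {}).values())
        if self_match or children:
            kept.append(dict(node, children=children))
    return kept

# The reporter's data: some fields of one record share the same value.
tree = [{
    "data": {"size": "500", "name": "test", "type": "Folder"},
    "children": [
        {"data": {"size": "501", "name": "test", "type": "Folder"}},
        {"data": {"size": "505", "name": "505", "type": "Application"}},
        {"data": {"size": "test", "name": "test", "type": "Application"}},
        {"data": {"size": "505", "name": "505", "type": "test"}},
    ],
}]

# "test" should keep three children under the parent, not just one.
result = filter_tree(tree, "test")
print(len(result[0]["children"]))   # -> 3
```

Under this semantics "test" and "50" each yield 3 children and "folder" yields 1, which matches the expected-behavior tables in the report.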
|
25,503
| 4,350,829,552
|
IssuesEvent
|
2016-07-31 14:17:09
|
p6spy/p6spy
|
https://api.github.com/repos/p6spy/p6spy
|
closed
|
Exception in P6LogResultSetDelegate when query has no parameters
|
Defect
|
When using p6spy-2.3.1.jar from jFrog Bintray, I see an issue with queries that do not have any bound parameters. When reverting to p6spy-2.1.4.jar, the problem does not present itself. Perhaps it's related to the changes in cglib? Here is the stack trace when I get the exception:
```
java.lang.ArrayIndexOutOfBoundsException: 0
at com.p6spy.engine.logging.P6LogResultSetDelegate.invoke(P6LogResultSetDelegate.java:45)
at com.p6spy.engine.proxy.GenericInvocationHandler.invoke(GenericInvocationHandler.java:116)
at com.p6spy.engine.proxy.P6Proxy$$EnhancerByCGLIB$$96ffa73c.getResultSet(<generated>)
at coldfusion.server.j2ee.sql.JRunStatement.getResultSet(JRunStatement.java:283)
at coldfusion.sql.Executive.getRowSet(Executive.java:609)
at coldfusion.sql.Executive.executeQuery(Executive.java:1470)
at coldfusion.sql.Executive.executeQuery(Executive.java:1201)
at coldfusion.sql.Executive.executeQuery(Executive.java:1131)
at coldfusion.sql.SqlImpl.execute(SqlImpl.java:406)
at coldfusion.tagext.sql.QueryTag.executeQuery(QueryTag.java:1059)
at coldfusion.tagext.sql.QueryTag.doEndTag(QueryTag.java:688)
...
```
I've been able to reproduce the problem with a simple query like this:
```
SELECT * FROM sys.tables
```
However, this query works just fine (when the application server performs its query statement processing and binds the VARCHAR parameter as "foo"):
```
SELECT * FROM sys.tables WHERE name = 'foo'
```
This is the content of my spy.properties file:
```
driverlist=net.sourceforge.jtds.jdbc.Driver,com.microsoft.sqlserver.jdbc.SQLServerDriver
reloadproperties=true
reloadpropertiesinterval=30
append=true
appender=com.p6spy.engine.spy.appender.FileLogger
logfile=/home/tomcat/servers/myapp/logs/spy.log
```
Switching from the Microsoft JDBC (v4.0.2206.100) driver to the jTDS JDBC (1.3.1) driver shows the same problem with a similar stack trace:
```
java.lang.ArrayIndexOutOfBoundsException: 0
at com.p6spy.engine.logging.P6LogResultSetDelegate.invoke(P6LogResultSetDelegate.java:45)
at com.p6spy.engine.proxy.GenericInvocationHandler.invoke(GenericInvocationHandler.java:116)
at $java.sql.Statement$$EnhancerByCGLIB$$207673e7.getResultSet(<generated>)
at coldfusion.server.j2ee.sql.JRunStatement.getResultSet(JRunStatement.java:283)
at coldfusion.sql.Executive.getRowSet(Executive.java:609)
at coldfusion.sql.Executive.executeQuery(Executive.java:1470)
at coldfusion.sql.Executive.executeQuery(Executive.java:1201)
at coldfusion.sql.Executive.executeQuery(Executive.java:1131)
at coldfusion.sql.SqlImpl.execute(SqlImpl.java:406)
at coldfusion.tagext.sql.QueryTag.executeQuery(QueryTag.java:1059)
at coldfusion.tagext.sql.QueryTag.doEndTag(QueryTag.java:688)
...
```
|
1.0
|
Exception in P6LogResultSetDelegate when query has no parameters - When using p6spy-2.3.1.jar from jFrog Bintray, I see an issue with queries that do not have any bound parameters. When reverting to p6spy-2.1.4.jar, the problem does not present itself. Perhaps it's related to the changes in cglib? Here is the stack trace when I get the exception:
```
java.lang.ArrayIndexOutOfBoundsException: 0
at com.p6spy.engine.logging.P6LogResultSetDelegate.invoke(P6LogResultSetDelegate.java:45)
at com.p6spy.engine.proxy.GenericInvocationHandler.invoke(GenericInvocationHandler.java:116)
at com.p6spy.engine.proxy.P6Proxy$$EnhancerByCGLIB$$96ffa73c.getResultSet(<generated>)
at coldfusion.server.j2ee.sql.JRunStatement.getResultSet(JRunStatement.java:283)
at coldfusion.sql.Executive.getRowSet(Executive.java:609)
at coldfusion.sql.Executive.executeQuery(Executive.java:1470)
at coldfusion.sql.Executive.executeQuery(Executive.java:1201)
at coldfusion.sql.Executive.executeQuery(Executive.java:1131)
at coldfusion.sql.SqlImpl.execute(SqlImpl.java:406)
at coldfusion.tagext.sql.QueryTag.executeQuery(QueryTag.java:1059)
at coldfusion.tagext.sql.QueryTag.doEndTag(QueryTag.java:688)
...
```
I've been able to reproduce the problem with a simple query like this:
```
SELECT * FROM sys.tables
```
However, this query works just fine (when the application server performs its query statement processing and binds the VARCHAR parameter as "foo"):
```
SELECT * FROM sys.tables WHERE name = 'foo'
```
This is the content of my spy.properties file:
```
driverlist=net.sourceforge.jtds.jdbc.Driver,com.microsoft.sqlserver.jdbc.SQLServerDriver
reloadproperties=true
reloadpropertiesinterval=30
append=true
appender=com.p6spy.engine.spy.appender.FileLogger
logfile=/home/tomcat/servers/myapp/logs/spy.log
```
Switching from the Microsoft JDBC (v4.0.2206.100) driver to the jTDS JDBC (1.3.1) driver shows the same problem with a similar stack trace:
```
java.lang.ArrayIndexOutOfBoundsException: 0
at com.p6spy.engine.logging.P6LogResultSetDelegate.invoke(P6LogResultSetDelegate.java:45)
at com.p6spy.engine.proxy.GenericInvocationHandler.invoke(GenericInvocationHandler.java:116)
at $java.sql.Statement$$EnhancerByCGLIB$$207673e7.getResultSet(<generated>)
at coldfusion.server.j2ee.sql.JRunStatement.getResultSet(JRunStatement.java:283)
at coldfusion.sql.Executive.getRowSet(Executive.java:609)
at coldfusion.sql.Executive.executeQuery(Executive.java:1470)
at coldfusion.sql.Executive.executeQuery(Executive.java:1201)
at coldfusion.sql.Executive.executeQuery(Executive.java:1131)
at coldfusion.sql.SqlImpl.execute(SqlImpl.java:406)
at coldfusion.tagext.sql.QueryTag.executeQuery(QueryTag.java:1059)
at coldfusion.tagext.sql.QueryTag.doEndTag(QueryTag.java:688)
...
```
|
defect
|
exception in when query has no parameters when using jar from jfrog bintray i see an issue with queries that do not have any bound parameters when reverting to jar the problem does not present itself perhaps it s related to the changes in cglib here is the stack trace when i get the exception java lang arrayindexoutofboundsexception at com engine logging invoke java at com engine proxy genericinvocationhandler invoke genericinvocationhandler java at com engine proxy enhancerbycglib getresultset at coldfusion server sql jrunstatement getresultset jrunstatement java at coldfusion sql executive getrowset executive java at coldfusion sql executive executequery executive java at coldfusion sql executive executequery executive java at coldfusion sql executive executequery executive java at coldfusion sql sqlimpl execute sqlimpl java at coldfusion tagext sql querytag executequery querytag java at coldfusion tagext sql querytag doendtag querytag java i ve been able to reproduce the problem with a simple query like this select from sys tables however this query works just fine when the application server performs its query statement processing and binds the varchar parameter as foo select from sys tables where name foo this is the content of my spy properties file driverlist net sourceforge jtds jdbc driver com microsoft sqlserver jdbc sqlserverdriver reloadproperties true reloadpropertiesinterval append true appender com engine spy appender filelogger logfile home tomcat servers myapp logs spy log switching from the microsoft jdbc driver to the jtds jdbc driver shows the same problem with a similar stack trace java lang arrayindexoutofboundsexception at com engine logging invoke java at com engine proxy genericinvocationhandler invoke genericinvocationhandler java at java sql statement enhancerbycglib getresultset at coldfusion server sql jrunstatement getresultset jrunstatement java at coldfusion sql executive getrowset executive java at coldfusion sql executive 
executequery executive java at coldfusion sql executive executequery executive java at coldfusion sql executive executequery executive java at coldfusion sql sqlimpl execute sqlimpl java at coldfusion tagext sql querytag executequery querytag java at coldfusion tagext sql querytag doendtag querytag java
| 1
|
104,659
| 4,216,882,012
|
IssuesEvent
|
2016-06-30 10:54:28
|
coherence-community/oracle-bedrock
|
https://api.github.com/repos/coherence-community/oracle-bedrock
|
closed
|
Correct Existing Deferred implementation to permit broader casting of returned types
|
Module: Core Priority: Minor Type: Improvement
|
The Existing class currently only permits specifying a custom Class that is a sub-class of the specified value. While this is fine for most use-cases, it fails to allow up-casting.
For example, the following won't compile because Object is not a sub-class of String.
```java
Existing<Object> existing = new Existing("Hello", Object.class);
```
However, it's perfectly legal to perform a cast like this:
Object object = (Object)"Hello";
Resolving this issue is important as it will allow future casting between types, specially from Object to something else and vice-versa.
The change in method signature is completely backwards compatible as it simply removes a type restriction.
|
1.0
|
Correct Existing Deferred implementation to permit broader casting of returned types - The Existing class currently only permits specifying a custom Class that is a sub-class of the specified value. While this is fine for most use-cases, it fails to allow up-casting.
For example, the following won't compile because Object is not a sub-class of String.
```java
Existing<Object> existing = new Existing("Hello", Object.class);
```
However, it's perfectly legal to perform a cast like this:
Object object = (Object)"Hello";
Resolving this issue is important as it will allow future casting between types, specially from Object to something else and vice-versa.
The change in method signature is completely backwards compatible as it simply removes a type restriction.
|
non_defect
|
correct existing deferred implementation to permit broader casting of returned types the existing class currently only permits specifying a custom class that is a sub class of the specified value while this is fine for most use cases it fails to allow up casting for example the following won t compile because object is not a sub class of string java existing new existing hello object class however it s perfectly legal to perform a cast like this object object object hello resolving this issue is important as it will allow future casting between types specially from object to something else and vice versa the change in method signature is completely backwards compatible as it simply removes a type restriction
| 0
|
382,217
| 11,302,571,026
|
IssuesEvent
|
2020-01-17 17:59:24
|
mozilla/blurts-server
|
https://api.github.com/repos/mozilla/blurts-server
|
opened
|
Update breach resolution svg for passwords-secondary tout
|
Breach-Resolution priority-P2
|
Update the passwords (secondary) recommendation to use a different svg than the passwords (primary) recommendation. Screenshot of proposed change below, as well as the new svg.
<img width="881" alt="Screen Shot 2020-01-17 at 10 56 56 AM" src="https://user-images.githubusercontent.com/6164801/72634709-4f42d600-3918-11ea-8476-fc1a2711f2db.png">
[sharedpassword.svg.zip](https://github.com/mozilla/blurts-server/files/4078093/sharedpassword.svg.zip)
|
1.0
|
Update breach resolution svg for passwords-secondary tout - Update the passwords (secondary) recommendation to use a different svg than the passwords (primary) recommendation. Screenshot of proposed change below, as well as the new svg.
<img width="881" alt="Screen Shot 2020-01-17 at 10 56 56 AM" src="https://user-images.githubusercontent.com/6164801/72634709-4f42d600-3918-11ea-8476-fc1a2711f2db.png">
[sharedpassword.svg.zip](https://github.com/mozilla/blurts-server/files/4078093/sharedpassword.svg.zip)
|
non_defect
|
update breach resolution svg for passwords secondary tout update the passwords secondary recommendation to use a different svg than the passwords primary recommendation screenshot of proposed change below as well as the new svg img width alt screen shot at am src
| 0
|
64,507
| 18,720,809,951
|
IssuesEvent
|
2021-11-03 11:33:28
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
When sending an HTML message which is larger than 65K, the CSS doesn't go red when sending fails
|
T-Defect P2 S-Tolerable A-Timeline
|
Looks like triple-backtick block CSS overrides the red unsent warning
|
1.0
|
When sending an HTML message which is larger than 65K, the CSS doesn't go red when sending fails - Looks like triple-backtick block CSS overrides the red unsent warning
|
defect
|
when sending an html message which is larger than the css doesn t go red when sending fails looks like triple backtick block css overrides the red unsent warning
| 1
|
17,215
| 2,984,430,908
|
IssuesEvent
|
2015-07-18 00:49:55
|
google/omaha
|
https://api.github.com/repos/google/omaha
|
closed
|
Need control to set time to check for and download updates
|
auto-migrated Priority-Medium Type-Defect wontfix
|
```
I am on Wireless internet, which is quota based.
I need a control to set the time of the day that is off peak and Google Update
can go ahead to check for and download updates without hitting my peak hour
quota.
E.g. for Windows Updates, the downloading can be controlled using the BITS keys
in the registry.
I expect software called "Software installer and auto-updater for Windows" to
have this kind of control.
Kind regards,
```
Original issue reported on code.google.com by `daniel.c...@gmail.com` on 10 Dec 2013 at 2:45
|
1.0
|
Need control to set time to check for and download updates - ```
I am on Wireless internet, which is quota based.
I need a control to set the time of the day that is off peak and Google Update
can go ahead to check for and download updates without hitting my peak hour
quota.
E.g. for Windows Updates, the downloading can be controlled using the BITS keys
in the registry.
I expect software called "Software installer and auto-updater for Windows" to
have this kind of control.
Kind regards,
```
Original issue reported on code.google.com by `daniel.c...@gmail.com` on 10 Dec 2013 at 2:45
|
defect
|
need control to set time to check for and download updates i am on wireless internet which is quota based i need a control to set the time of the day that is off peak and google update can go ahead to check for and download updates without hitting my peak hour quota e g for windows updates the downloading can be controlled using the bits keys in the registry i expect software called software installer and auto updater for windows to have this kind of control kind regards original issue reported on code google com by daniel c gmail com on dec at
| 1
|
13,052
| 2,732,890,283
|
IssuesEvent
|
2015-04-17 10:01:15
|
tiku01/oryx-editor
|
https://api.github.com/repos/tiku01/oryx-editor
|
closed
|
ant build-all failed
|
auto-migrated Priority-Critical Type-Defect
|
```
What steps will reproduce the problem?
1. checkout source
2. ant build-all
3.
What is the expected output?
build succeeded.
What do you see instead?
[echo]
[echo] Created profile js-Files
[echo]
[java] [ERROR] 4119:41:unterminated string literal
[java] [ERROR] 4119:41:syntax error
[java] [ERROR] 4120:20:syntax error
[java] [ERROR] 4121:7:syntax error
[java] [ERROR] 4122:21:syntax error
[java] [ERROR] 4123:11:syntax error
[java] [ERROR] 4124:6:syntax error
[java] [ERROR] 4126:3:syntax error
[java] [ERROR] 4127:22:syntax error
[java] [ERROR] 4135:12:syntax error
[java] [ERROR] 4136:13:syntax error
[java] [ERROR] 4295:2:syntax error
[java] [ERROR] 4297:10:syntax error
[java] [ERROR] 4298:8:syntax error
[java] [ERROR] 4299:2:syntax error
[java] [ERROR] 4304:9:syntax error
[java] [ERROR] 4305:71:missing ; before statement
[java] [ERROR] 4306:24:missing ; before statement
[java] [ERROR] 4307:10:syntax error
[java] [ERROR] 4374:4:syntax error
[java] [ERROR] 4375:10:syntax error
[java] [ERROR] 4376:4:syntax error
[java] [ERROR] 4378:2:syntax error
[java] [ERROR] 4380:32:syntax error
[java] [ERROR] 4458:10:syntax error
[java] [ERROR] 4463:2:syntax error
[java] [ERROR] 4469:16:syntax error
[java] [ERROR] 4510:10:syntax error
[java] [ERROR] 4512:9:syntax error
[java] [ERROR] 4513:3:syntax error
[java] [ERROR] 4516:22:missing ; before statement
[java] [ERROR] 4517:9:syntax error
[java] [ERROR] 4518:3:syntax error
[java] [ERROR] 4519:3:syntax error
[java] [ERROR] 4528:25:syntax error
[java] [ERROR] 4529:90:missing ; before statement
[java] [ERROR] 4530:33:missing ; before statement
[java] [ERROR] 4535:4:syntax error
[java] [ERROR] 4537:4:syntax error
[java] [ERROR] 4538:9:syntax error
[java] [ERROR] 4539:33:missing ; before statement
[java] [ERROR] 4541:4:syntax error
[java] [ERROR] 4543:4:syntax error
[java] [ERROR] 4545:2:syntax error
[java] [ERROR] 4552:21:syntax error
[java] [ERROR] 4558:9:invalid return
[java] [ERROR] 4559:2:syntax error
[java] [ERROR] 4566:26:syntax error
[java] [ERROR] 4567:10:syntax error
[java] [ERROR] 4567:53:missing ; before statement
[java] [ERROR] 4568:56:missing ; before statement
[java] [ERROR] 4570:4:syntax error
[java] [ERROR] 4572:9:invalid return
[java] [ERROR] 4573:3:syntax error
[java] [ERROR] 4575:28:syntax error
[java] [ERROR] 4581:11:invalid return
[java] [ERROR] 4583:11:invalid return
[java] [ERROR] 4586:10:invalid return
[java] [ERROR] 4588:2:syntax error
[java] [ERROR] 4594:7:syntax error
[java] [ERROR] 4595:29:missing ; before statement
[java] [ERROR] 4596:10:missing ; before statement
[java] [ERROR] 4598:10:unlabelled break must be inside loop or switch
[java] [ERROR] 4600:8:syntax error
[java] [ERROR] 4610:10:unlabelled break must be inside loop or switch
[java] [ERROR] 4612:12:not a valid default namespace statement.
Syntax is: default xml namespace = EXPRESSION;
[java] [ERROR] 4615:3:syntax error
[java] [ERROR] 4616:3:syntax error
[java] [ERROR] 4618:16:syntax error
[java] [ERROR] 4619:28:missing ; before statement
[java] [ERROR] 4620:10:missing ; before statement
[java] [ERROR] 4622:8:syntax error
[java] [ERROR] 4623:58:missing ; before statement
[java] [ERROR] 4629:5:syntax error
[java] [ERROR] 4632:11:not a valid default namespace statement.
Syntax is: default xml namespace = EXPRESSION;
[java] [ERROR] 4635:3:syntax error
[java] [ERROR] 4636:3:syntax error
[java] [ERROR] 4638:18:syntax error
[java] [ERROR] 4639:28:missing ; before statement
[java] [ERROR] 4640:10:missing ; before statement
[java] [ERROR] 4642:8:syntax error
[java] [ERROR] 4643:58:missing ; before statement
[java] [ERROR] 4649:5:syntax error
[java] [ERROR] 4652:11:not a valid default namespace statement.
Syntax is: default xml namespace = EXPRESSION;
[java] [ERROR] 4655:3:syntax error
[java] [ERROR] 4656:3:syntax error
[java] [ERROR] 4658:9:syntax error
[java] [ERROR] 4659:28:missing ; before statement
[java] [ERROR] 4660:10:missing ; before statement
[java] [ERROR] 4662:8:syntax error
[java] [ERROR] 4663:39:missing ; before statement
[java] [ERROR] 4664:11:syntax error
[java] [ERROR] 4667:5:syntax error
[java] [ERROR] 4668:10:missing ; before statement
[java] [ERROR] 4672:49:missing ; before statement
[java] [ERROR] 4673:11:syntax error
[java] [ERROR] 4676:5:syntax error
[java] [ERROR] 4679:2:syntax error
[java] [ERROR] 4681:7:syntax error
[java] [ERROR] 4682:22:missing ; before statement
[java] [ERROR] 4683:9:syntax error
[java] [ERROR] 4685:3:syntax error
[java] [ERROR] 4686:3:syntax error
[java] [ERROR] 4688:7:syntax error
[java] [ERROR] 4689:23:missing ; before statement
[java] [ERROR] 4690:9:syntax error
[java] [ERROR] 4692:3:syntax error
[java] [ERROR] 4693:3:syntax error
[java] [ERROR] 4700:23:syntax error
[java] [ERROR] 4702:10:invalid return
[java] [ERROR] 4706:10:invalid return
[java] [ERROR] 4708:10:invalid return
[java] [ERROR] 4710:2:syntax error
[java] [ERROR] 4712:14:syntax error
[java] [ERROR] 4737:9:invalid return
[java] [ERROR] 4738:2:syntax error
[java] [ERROR] 4745:23:syntax error
[java] [ERROR] 4746:9:syntax error
[java] [ERROR] 4754:9:invalid return
[java] [ERROR] 4755:2:syntax error
[java] [ERROR] 4763:18:syntax error
[java] [ERROR] 4765:2:syntax error
[java] [ERROR] 4774:15:syntax error
[java] [ERROR] 4776:2:syntax error
[java] [ERROR] 4778:11:syntax error
[java] [ERROR] 4779:3:syntax error
[java] [ERROR] 1:0:Compilation produced 126 syntax errors.
[java] org.mozilla.javascript.EvaluatorException: Compilation
produced 126 syntax errors.
[java] at
com.yahoo.platform.yui.compressor.YUICompressor$1.runtimeError
(YUICompressor.java:135)
[java] at org.mozilla.javascript.Parser.parse(Parser.java:410)
[java] at org.mozilla.javascript.Parser.parse(Parser.java:355)
[java] at
com.yahoo.platform.yui.compressor.JavaScriptCompressor.parse
(JavaScriptCompressor.java:312)
[java] at
com.yahoo.platform.yui.compressor.JavaScriptCompressor.<init>
(JavaScriptCompressor.java:533)
[java] at com.yahoo.platform.yui.compressor.YUICompressor.main
(YUICompressor.java:112)
[java] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
[java] at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown
Source)
[java] at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown
Source)
[java] at java.lang.reflect.Method.invoke(Unknown Source)
[java] at com.yahoo.platform.yui.compressor.Bootstrap.main
(Bootstrap.java:20)
BUILD FAILED
E:\oryx\editor\build.xml:214: Java returned: 2
Please provide any additional information below.
```
Original issue reported on code.google.com by `weiwells...@sun.com` on 16 Dec 2009 at 5:17
|
1.0
|
ant build-all failed - ```
What steps will reproduce the problem?
1. checkout source
2. ant build-all
3.
What is the expected output?
build succeeded.
What do you see instead?
[echo]
[echo] Created profile js-Files
[echo]
[java] [ERROR] 4119:41:unterminated string literal
[java] [ERROR] 4119:41:syntax error
[java] [ERROR] 4120:20:syntax error
[java] [ERROR] 4121:7:syntax error
[java] [ERROR] 4122:21:syntax error
[java] [ERROR] 4123:11:syntax error
[java] [ERROR] 4124:6:syntax error
[java] [ERROR] 4126:3:syntax error
[java] [ERROR] 4127:22:syntax error
[java] [ERROR] 4135:12:syntax error
[java] [ERROR] 4136:13:syntax error
[java] [ERROR] 4295:2:syntax error
[java] [ERROR] 4297:10:syntax error
[java] [ERROR] 4298:8:syntax error
[java] [ERROR] 4299:2:syntax error
[java] [ERROR] 4304:9:syntax error
[java] [ERROR] 4305:71:missing ; before statement
[java] [ERROR] 4306:24:missing ; before statement
[java] [ERROR] 4307:10:syntax error
[java] [ERROR] 4374:4:syntax error
[java] [ERROR] 4375:10:syntax error
[java] [ERROR] 4376:4:syntax error
[java] [ERROR] 4378:2:syntax error
[java] [ERROR] 4380:32:syntax error
[java] [ERROR] 4458:10:syntax error
[java] [ERROR] 4463:2:syntax error
[java] [ERROR] 4469:16:syntax error
[java] [ERROR] 4510:10:syntax error
[java] [ERROR] 4512:9:syntax error
[java] [ERROR] 4513:3:syntax error
[java] [ERROR] 4516:22:missing ; before statement
[java] [ERROR] 4517:9:syntax error
[java] [ERROR] 4518:3:syntax error
[java] [ERROR] 4519:3:syntax error
[java] [ERROR] 4528:25:syntax error
[java] [ERROR] 4529:90:missing ; before statement
[java] [ERROR] 4530:33:missing ; before statement
[java] [ERROR] 4535:4:syntax error
[java] [ERROR] 4537:4:syntax error
[java] [ERROR] 4538:9:syntax error
[java] [ERROR] 4539:33:missing ; before statement
[java] [ERROR] 4541:4:syntax error
[java] [ERROR] 4543:4:syntax error
[java] [ERROR] 4545:2:syntax error
[java] [ERROR] 4552:21:syntax error
[java] [ERROR] 4558:9:invalid return
[java] [ERROR] 4559:2:syntax error
[java] [ERROR] 4566:26:syntax error
[java] [ERROR] 4567:10:syntax error
[java] [ERROR] 4567:53:missing ; before statement
[java] [ERROR] 4568:56:missing ; before statement
[java] [ERROR] 4570:4:syntax error
[java] [ERROR] 4572:9:invalid return
[java] [ERROR] 4573:3:syntax error
[java] [ERROR] 4575:28:syntax error
[java] [ERROR] 4581:11:invalid return
[java] [ERROR] 4583:11:invalid return
[java] [ERROR] 4586:10:invalid return
[java] [ERROR] 4588:2:syntax error
[java] [ERROR] 4594:7:syntax error
[java] [ERROR] 4595:29:missing ; before statement
[java] [ERROR] 4596:10:missing ; before statement
[java] [ERROR] 4598:10:unlabelled break must be inside loop or switch
[java] [ERROR] 4600:8:syntax error
[java] [ERROR] 4610:10:unlabelled break must be inside loop or switch
[java] [ERROR] 4612:12:not a valid default namespace statement.
Syntax is: default xml namespace = EXPRESSION;
[java] [ERROR] 4615:3:syntax error
[java] [ERROR] 4616:3:syntax error
[java] [ERROR] 4618:16:syntax error
[java] [ERROR] 4619:28:missing ; before statement
[java] [ERROR] 4620:10:missing ; before statement
[java] [ERROR] 4622:8:syntax error
[java] [ERROR] 4623:58:missing ; before statement
[java] [ERROR] 4629:5:syntax error
[java] [ERROR] 4632:11:not a valid default namespace statement.
Syntax is: default xml namespace = EXPRESSION;
[java] [ERROR] 4635:3:syntax error
[java] [ERROR] 4636:3:syntax error
[java] [ERROR] 4638:18:syntax error
[java] [ERROR] 4639:28:missing ; before statement
[java] [ERROR] 4640:10:missing ; before statement
[java] [ERROR] 4642:8:syntax error
[java] [ERROR] 4643:58:missing ; before statement
[java] [ERROR] 4649:5:syntax error
[java] [ERROR] 4652:11:not a valid default namespace statement.
Syntax is: default xml namespace = EXPRESSION;
[java] [ERROR] 4655:3:syntax error
[java] [ERROR] 4656:3:syntax error
[java] [ERROR] 4658:9:syntax error
[java] [ERROR] 4659:28:missing ; before statement
[java] [ERROR] 4660:10:missing ; before statement
[java] [ERROR] 4662:8:syntax error
[java] [ERROR] 4663:39:missing ; before statement
[java] [ERROR] 4664:11:syntax error
[java] [ERROR] 4667:5:syntax error
[java] [ERROR] 4668:10:missing ; before statement
[java] [ERROR] 4672:49:missing ; before statement
[java] [ERROR] 4673:11:syntax error
[java] [ERROR] 4676:5:syntax error
[java] [ERROR] 4679:2:syntax error
[java] [ERROR] 4681:7:syntax error
[java] [ERROR] 4682:22:missing ; before statement
[java] [ERROR] 4683:9:syntax error
[java] [ERROR] 4685:3:syntax error
[java] [ERROR] 4686:3:syntax error
[java] [ERROR] 4688:7:syntax error
[java] [ERROR] 4689:23:missing ; before statement
[java] [ERROR] 4690:9:syntax error
[java] [ERROR] 4692:3:syntax error
[java] [ERROR] 4693:3:syntax error
[java] [ERROR] 4700:23:syntax error
[java] [ERROR] 4702:10:invalid return
[java] [ERROR] 4706:10:invalid return
[java] [ERROR] 4708:10:invalid return
[java] [ERROR] 4710:2:syntax error
[java] [ERROR] 4712:14:syntax error
[java] [ERROR] 4737:9:invalid return
[java] [ERROR] 4738:2:syntax error
[java] [ERROR] 4745:23:syntax error
[java] [ERROR] 4746:9:syntax error
[java] [ERROR] 4754:9:invalid return
[java] [ERROR] 4755:2:syntax error
[java] [ERROR] 4763:18:syntax error
[java] [ERROR] 4765:2:syntax error
[java] [ERROR] 4774:15:syntax error
[java] [ERROR] 4776:2:syntax error
[java] [ERROR] 4778:11:syntax error
[java] [ERROR] 4779:3:syntax error
[java] [ERROR] 1:0:Compilation produced 126 syntax errors.
[java] org.mozilla.javascript.EvaluatorException: Compilation
produced 126 syntax errors.
[java] at
com.yahoo.platform.yui.compressor.YUICompressor$1.runtimeError
(YUICompressor.java:135)
[java] at org.mozilla.javascript.Parser.parse(Parser.java:410)
[java] at org.mozilla.javascript.Parser.parse(Parser.java:355)
[java] at
com.yahoo.platform.yui.compressor.JavaScriptCompressor.parse
(JavaScriptCompressor.java:312)
[java] at
com.yahoo.platform.yui.compressor.JavaScriptCompressor.<init>
(JavaScriptCompressor.java:533)
[java] at com.yahoo.platform.yui.compressor.YUICompressor.main
(YUICompressor.java:112)
[java] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
[java] at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown
Source)
[java] at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown
Source)
[java] at java.lang.reflect.Method.invoke(Unknown Source)
[java] at com.yahoo.platform.yui.compressor.Bootstrap.main
(Bootstrap.java:20)
BUILD FAILED
E:\oryx\editor\build.xml:214: Java returned: 2
Please provide any additional information below.
```
Original issue reported on code.google.com by `weiwells...@sun.com` on 16 Dec 2009 at 5:17
|
defect
|
ant build all failed what steps will reproduce the problem checkout source ant build all what is the expected output build succeeded what do you see instead created profile js files unterminated string literal syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error missing before statement missing before statement syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error syntax error missing before statement syntax error syntax error syntax error syntax error missing before statement missing before statement syntax error syntax error syntax error missing before statement syntax error syntax error syntax error syntax error invalid return syntax error syntax error syntax error missing before statement missing before statement syntax error invalid return syntax error syntax error invalid return invalid return invalid return syntax error syntax error missing before statement missing before statement unlabelled break must be inside loop or switch syntax error unlabelled break must be inside loop or switch not a valid default namespace statement syntax is default xml namespace expression syntax error syntax error syntax error missing before statement missing before statement syntax error missing before statement syntax error not a valid default namespace statement syntax is default xml namespace expression syntax error syntax error syntax error missing before statement missing before statement syntax error missing before statement syntax error not a valid default namespace statement syntax is default xml namespace expression syntax error syntax error syntax error missing before statement missing before statement syntax error missing before statement syntax error syntax error missing before statement missing before statement syntax error syntax error 
syntax error syntax error missing before statement syntax error syntax error syntax error syntax error missing before statement syntax error syntax error syntax error syntax error invalid return invalid return invalid return syntax error syntax error invalid return syntax error syntax error syntax error invalid return syntax error syntax error syntax error syntax error syntax error syntax error syntax error compilation produced syntax errors org mozilla javascript evaluatorexception compilation produced syntax errors at com yahoo platform yui compressor yuicompressor runtimeerror yuicompressor java at org mozilla javascript parser parse parser java at org mozilla javascript parser parse parser java at com yahoo platform yui compressor javascriptcompressor parse javascriptcompressor java at com yahoo platform yui compressor javascriptcompressor javascriptcompressor java at com yahoo platform yui compressor yuicompressor main yuicompressor java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke unknown source at java lang reflect method invoke unknown source at com yahoo platform yui compressor bootstrap main bootstrap java build failed e oryx editor build xml java returned please provide any additional information below original issue reported on code google com by weiwells sun com on dec at
| 1
|
22,381
| 3,642,212,384
|
IssuesEvent
|
2016-02-14 05:42:57
|
BOINC/boinc
|
https://api.github.com/repos/BOINC/boinc
|
closed
|
fix boinc build with current xcb-atom
|
C: Undetermined P: Undetermined T: Defect
|
**Reported by mjakubicek on 24 Mar 42106922 11:01 UTC**
See the attached patch that fixes the following error when building with libxcb 1.8:
```
screensaver_x11.cpp: In function 'int main(int, char**)':
screensaver_x11.cpp:532:72: error: 'xcb_atom_get' was not declared in this scope
screensaver_x11.cpp:534:65: error: 'WINDOW' was not declared in this scope
screensaver_x11.cpp:550:33: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
screensaver_x11.cpp:555:57: error: 'WM_COMMAND' was not declared in this scope
screensaver_x11.cpp:555:69: error: 'STRING' was not declared in this scope
screensaver_x11.cpp:572:57: error: 'WM_CLASS' was not declared in this scope
```
Migrated-From: http://boinc.berkeley.edu/trac/ticket/1174
|
1.0
|
fix boinc build with current xcb-atom - **Reported by mjakubicek on 24 Mar 42106922 11:01 UTC**
See the attached patch that fixes the following error when building with libxcb 1.8:
```
screensaver_x11.cpp: In function 'int main(int, char**)':
screensaver_x11.cpp:532:72: error: 'xcb_atom_get' was not declared in this scope
screensaver_x11.cpp:534:65: error: 'WINDOW' was not declared in this scope
screensaver_x11.cpp:550:33: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
screensaver_x11.cpp:555:57: error: 'WM_COMMAND' was not declared in this scope
screensaver_x11.cpp:555:69: error: 'STRING' was not declared in this scope
screensaver_x11.cpp:572:57: error: 'WM_CLASS' was not declared in this scope
```
Migrated-From: http://boinc.berkeley.edu/trac/ticket/1174
|
defect
|
fix boinc build with current xcb atom reported by mjakubicek on mar utc see the attached patch that fixes the following error when building with libxcb screensaver cpp in function int main int char screensaver cpp error xcb atom get was not declared in this scope screensaver cpp error window was not declared in this scope screensaver cpp warning comparison between signed and unsigned integer expressions screensaver cpp error wm command was not declared in this scope screensaver cpp error string was not declared in this scope screensaver cpp error wm class was not declared in this scope migrated from
| 1
|
42,181
| 10,864,926,513
|
IssuesEvent
|
2019-11-14 17:52:20
|
jccastillo0007/eFacturaT
|
https://api.github.com/repos/jccastillo0007/eFacturaT
|
opened
|
Condominios - CxC Enter Collection, DP, incorrect caption when recording a payment against an automatic SP
|
bug defect
|
WHEN YOU RECORD A PAYMENT AGAINST AN SP THAT WAS GENERATED AUTOMATICALLY, THE CORRECT CAPTION IS NOT SENT TO THE PDF OF THE PAYMENT DOCUMENT
IN PRU:
SP1 WAS GENERATED MANUALLY.
DP1 WAS GENERATED CORRECTLY WITH THE “Concepto del Pago” CAPTION: MAINTENANCE FEE FOR THE MONTH OF NOVEMBER
“FOR THE MONTH OF NOVEMBER” WAS THE SUPPLEMENTARY INFORMATION OF THE MANUALLY ENTERED SP.
SP2 WAS GENERATED AUTOMATICALLY.
DP2 WAS GENERATED INCORRECTLY WITH THE “Concepto del Pago” CAPTION, SINCE IT DOES NOT INCLUDE THE MONTH: MAINTENANCE FEE
|
1.0
|
Condominios - CxC Enter Collection, DP, incorrect caption when recording a payment against an automatic SP - WHEN YOU RECORD A PAYMENT AGAINST AN SP THAT WAS GENERATED AUTOMATICALLY, THE CORRECT CAPTION IS NOT SENT TO THE PDF OF THE PAYMENT DOCUMENT
IN PRU:
SP1 WAS GENERATED MANUALLY.
DP1 WAS GENERATED CORRECTLY WITH THE “Concepto del Pago” CAPTION: MAINTENANCE FEE FOR THE MONTH OF NOVEMBRE
“FOR THE MONTH OF NOVEMBER” WAS THE SUPPLEMENTARY INFORMATION OF THE MANUALLY ENTERED SP.
SP2 WAS GENERATED AUTOMATICALLY.
DP2 WAS GENERATED INCORRECTLY WITH THE “Concepto del Pago” CAPTION, SINCE IT DOES NOT INCLUDE THE MONTH: MAINTENANCE FEE
|
defect
|
condominios cxc ingresar cobro dp leyenda incorrecta cuando se registra un pago a una sp automática cuando registras un pago a una sp que se generó automáticamente no envia al pdf del documento de pago la leyenda correcta en pru la se generó manualmente el se generó correctamente en la leyenda de “concepto del pago” cuota de mantenimiento correspondiente al mes de noviembre correspondiente al mes de noviembre fue la información complementaria de la sp capturada manualmente la se generó automáticamente el se generó incorrectamente en la leyenda de “concepto del pago” ya que no incluye el mes cuota de mantenimiento
| 1
|
8,755
| 4,318,903,018
|
IssuesEvent
|
2016-07-24 10:11:53
|
opencv/opencv
|
https://api.github.com/repos/opencv/opencv
|
closed
|
issue with building from source
|
bug category: build/install
|
<!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
##### System information (version)
<!-- Example
- OpenCV => 3.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
-->
- OpenCV => master
- Operating System / Platform => ubuntu 16.04
- Compiler => g++/cmake
##### Detailed description
<!-- your description -->
make command showing error after successful completion of cmake while building from source.
following this-> https://github.com/BVLC/caffe/wiki/Ubuntu-16.04-or-15.10-OpenCV-3.1-Installation-Guide

##### Steps to reproduce
<!-- to add code example fence it with triple backticks and optional file extension
```.cpp
// C++ code example
```
or attach as .txt or .zip file
-->
|
1.0
|
issue with building from source - <!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
##### System information (version)
<!-- Example
- OpenCV => 3.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
-->
- OpenCV => master
- Operating System / Platform => ubuntu 16.04
- Compiler => g++/cmake
##### Detailed description
<!-- your description -->
make command showing error after successful completion of cmake while building from source.
following this-> https://github.com/BVLC/caffe/wiki/Ubuntu-16.04-or-15.10-OpenCV-3.1-Installation-Guide

##### Steps to reproduce
<!-- to add code example fence it with triple backticks and optional file extension
```.cpp
// C++ code example
```
or attach as .txt or .zip file
-->
|
non_defect
|
issue with building from source if you have a question rather than reporting a bug please go to where you get much faster responses if you need further assistance please read this is a template helping you to create an issue which can be processed as quickly as possible this is the bug reporting section for the opencv library system information version example opencv operating system platform windows bit compiler visual studio opencv master operating system platform ubuntu compiler g cmake detailed description make command showing error after successful completion of cmake while building from source following this steps to reproduce to add code example fence it with triple backticks and optional file extension cpp c code example or attach as txt or zip file
| 0
|
11,246
| 16,738,229,070
|
IssuesEvent
|
2021-06-11 06:25:40
|
renovatebot/renovate
|
https://api.github.com/repos/renovatebot/renovate
|
closed
|
Python dependency update doesn't respect '<'
|
priority-5-triage status:requirements type:bug
|
<!--
PLEASE DO NOT REPORT ANY SECURITY CONCERNS THIS WAY
Email renovate-disclosure@whitesourcesoftware.com instead.
-->
**How are you running Renovate?**
- [x] WhiteSource Renovate hosted app on github.com
- [ ] Self hosted
**Describe the bug**
For Python pip dependencies, the Renovate bot doesn't respect `<`.
For example, this PR: https://github.com/IBM/MAX-Base/pull/58 . The Renovate bot tried to upgrade `flask>=1.1.2,<2.0` to `flask==2.0.1`. What it should do is to upgrade it to `flask==1.1.4,<2.0`.
**Have you created a minimal reproduction repository?**
Please read the [minimal reproductions documentation](https://github.com/renovatebot/renovate/blob/main/docs/development/minimal-reproductions.md) to learn how to make a good minimal reproduction repository.
- [ ] I have provided a minimal reproduction repository
- [x] I don't have time for that, but it happens in a public repository I have linked to
- [ ] I don't have time for that, and cannot share my private repository
- [ ] The nature of this bug means it's impossible to reproduce publicly
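The constraint logic the reporter expects — pick the highest release that still satisfies `>=1.1.2,<2.0` — can be sketched without any Renovate internals (version strings taken from the linked PR; the parser below is a naive stand-in for real version-range handling):

```python
def parse(v):
    # Naive dotted-integer parser; enough for plain x.y.z release strings.
    return tuple(int(p) for p in v.split("."))

def best_in_range(candidates, lower, upper):
    # Highest candidate with lower <= v < upper, i.e. ">=lower,<upper".
    ok = [v for v in candidates if parse(lower) <= parse(v) < parse(upper)]
    return max(ok, key=parse) if ok else None

print(best_in_range(["1.1.2", "1.1.4", "2.0.1"], "1.1.2", "2.0"))  # → 1.1.4
```

With this rule, `flask==2.0.1` is correctly excluded and `1.1.4` is the right upgrade target.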
|
1.0
|
Python dependency update doesn't respect '<' - <!--
PLEASE DO NOT REPORT ANY SECURITY CONCERNS THIS WAY
Email renovate-disclosure@whitesourcesoftware.com instead.
-->
**How are you running Renovate?**
- [x] WhiteSource Renovate hosted app on github.com
- [ ] Self hosted
**Describe the bug**
For Python pip dependencies, the Renovate bot doesn't respect `<`.
For example, this PR: https://github.com/IBM/MAX-Base/pull/58 . The Renovate bot tried to upgrade `flask>=1.1.2,<2.0` to `flask==2.0.1`. What it should do is to upgrade it to `flask==1.1.4,<2.0`.
**Have you created a minimal reproduction repository?**
Please read the [minimal reproductions documentation](https://github.com/renovatebot/renovate/blob/main/docs/development/minimal-reproductions.md) to learn how to make a good minimal reproduction repository.
- [ ] I have provided a minimal reproduction repository
- [x] I don't have time for that, but it happens in a public repository I have linked to
- [ ] I don't have time for that, and cannot share my private repository
- [ ] The nature of this bug means it's impossible to reproduce publicly
|
non_defect
|
python dependency update doesn t respect please do not report any security concerns this way email renovate disclosure whitesourcesoftware com instead how are you running renovate whitesource renovate hosted app on github com self hosted describe the bug for python pip dependencies the renovate bot doesn t respect for example this pr the renovate bot tried to upgrade flask to flask what it should do is to upgrade it to flask have you created a minimal reproduction repository please read the to learn how to make a good minimal reproduction repository i have provided a minimal reproduction repository i don t have time for that but it happens in a public repository i have linked to i don t have time for that and cannot share my private repository the nature of this bug means it s impossible to reproduce publicly
| 0
|
169,557
| 6,404,126,992
|
IssuesEvent
|
2017-08-07 01:01:35
|
UGXaero/UGXrealms
|
https://api.github.com/repos/UGXaero/UGXrealms
|
closed
|
[clothing] Server crash
|
bug high priority server crash
|
```
2017-08-05 16:47:47: ERROR[Main]: ServerError: AsyncErr: ServerThread::run Lua: Runtime error from mod '??' in callback detached_inventory_OnMove(): ...servers/.minetest/games/UGXrealms/mods/clothing/init.lua:79: attempt to call method 'update_inventory' (a nil value)
2017-08-05 16:47:47: ERROR[Main]: Stack Traceback
2017-08-05 16:47:47: ERROR[Main]: ===============
2017-08-05 16:47:47: ERROR[Main]: (2) Lua function '(anonymous)' at file '/home/minetestservers/.minetest/games/UGXrealms/mods/clothing/init.lua:79' (best guess)
2017-08-05 16:47:47: ERROR[Main]: Local variables:
2017-08-05 16:47:47: ERROR[Main]: inv = userdata: 0x40eb7b28
2017-08-05 16:47:47: ERROR[Main]: from_list = string: "clothing"
2017-08-05 16:47:47: ERROR[Main]: from_index = number: 2
2017-08-05 16:47:47: ERROR[Main]: to_list = string: "clothing"
2017-08-05 16:47:47: ERROR[Main]: to_index = number: 4
2017-08-05 16:47:47: ERROR[Main]: count = number: 1
2017-08-05 16:47:47: ERROR[Main]: player = userdata: 0x41de4190
2017-08-05 16:47:47: ERROR[Main]: plaver_inv = userdata: 0x40eb7b50
2017-08-05 16:47:47: ERROR[Main]: stack = userdata: 0x40eb7b78
2017-08-05 16:47:47: ERROR[Main]: (*temporary) = nil
2017-08-05 16:47:47: ERROR[Main]: (*temporary) = table: 0x401cb858 {run_callbacks:function: 0x419ad538, set_player_clothing:function: 0x402099c0, register_on_equip:function: 0x414a7378, formspec:size[8,8.5]bgcolor[#080808BB;true]background[5,5;1,1;gui_formbg.png;true]listcolors[#000000
2017-08-05 16:47:47: ERROR[Main]: 9;#5A5A5A;#141318;#30434C;#FFF]list[current_player;main;0,4.7;8,1;]list[current_player;main;0,5.85;8,3;8]image[0,4.7;1,1;gui_hb_bg.png]image[1,4.7;1,1;gui_hb_bg.png]image[2,4.7;1,1;gui_hb_bg.png]image[3,4.7;1,1;gui_hb_bg.png]image[4,4.7;1,1;gui_hb_bg.png]
2017-08-05 16:47:47: ERROR[Main]: mage[5,4.7;1,1;gui_hb_bg.png]image[6,4.7;1,1;gui_hb_bg.png]image[7,4.7;1,1;gui_hb_bg.png], register_on_update:function: 0x419ae378, inv_mod:unified_inventory, registered_callbacks:table: 0x419ae258, register_on_unequip:function: 0x419ad518}
2017-08-05 16:47:47: ERROR[Main]: (*temporary) = userdata: 0x41de4190
2017-08-05 16:47:47: ERROR[Main]: (*temporary) = string: "Xio"
2017-08-05 16:47:47: ERROR[Main]: (*temporary) = userdata: 0x40eb8000
2017-08-05 16:47:47: ERROR[Main]: (*temporary) = string: "attempt to call method 'update_inventory' (a nil value)"
```
:large_orange_diamond:
|
1.0
|
[clothing] Server crash - ```
2017-08-05 16:47:47: ERROR[Main]: ServerError: AsyncErr: ServerThread::run Lua: Runtime error from mod '??' in callback detached_inventory_OnMove(): ...servers/.minetest/games/UGXrealms/mods/clothing/init.lua:79: attempt to call method 'update_inventory' (a nil value)
2017-08-05 16:47:47: ERROR[Main]: Stack Traceback
2017-08-05 16:47:47: ERROR[Main]: ===============
2017-08-05 16:47:47: ERROR[Main]: (2) Lua function '(anonymous)' at file '/home/minetestservers/.minetest/games/UGXrealms/mods/clothing/init.lua:79' (best guess)
2017-08-05 16:47:47: ERROR[Main]: Local variables:
2017-08-05 16:47:47: ERROR[Main]: inv = userdata: 0x40eb7b28
2017-08-05 16:47:47: ERROR[Main]: from_list = string: "clothing"
2017-08-05 16:47:47: ERROR[Main]: from_index = number: 2
2017-08-05 16:47:47: ERROR[Main]: to_list = string: "clothing"
2017-08-05 16:47:47: ERROR[Main]: to_index = number: 4
2017-08-05 16:47:47: ERROR[Main]: count = number: 1
2017-08-05 16:47:47: ERROR[Main]: player = userdata: 0x41de4190
2017-08-05 16:47:47: ERROR[Main]: plaver_inv = userdata: 0x40eb7b50
2017-08-05 16:47:47: ERROR[Main]: stack = userdata: 0x40eb7b78
2017-08-05 16:47:47: ERROR[Main]: (*temporary) = nil
2017-08-05 16:47:47: ERROR[Main]: (*temporary) = table: 0x401cb858 {run_callbacks:function: 0x419ad538, set_player_clothing:function: 0x402099c0, register_on_equip:function: 0x414a7378, formspec:size[8,8.5]bgcolor[#080808BB;true]background[5,5;1,1;gui_formbg.png;true]listcolors[#000000
2017-08-05 16:47:47: ERROR[Main]: 9;#5A5A5A;#141318;#30434C;#FFF]list[current_player;main;0,4.7;8,1;]list[current_player;main;0,5.85;8,3;8]image[0,4.7;1,1;gui_hb_bg.png]image[1,4.7;1,1;gui_hb_bg.png]image[2,4.7;1,1;gui_hb_bg.png]image[3,4.7;1,1;gui_hb_bg.png]image[4,4.7;1,1;gui_hb_bg.png]
2017-08-05 16:47:47: ERROR[Main]: mage[5,4.7;1,1;gui_hb_bg.png]image[6,4.7;1,1;gui_hb_bg.png]image[7,4.7;1,1;gui_hb_bg.png], register_on_update:function: 0x419ae378, inv_mod:unified_inventory, registered_callbacks:table: 0x419ae258, register_on_unequip:function: 0x419ad518}
2017-08-05 16:47:47: ERROR[Main]: (*temporary) = userdata: 0x41de4190
2017-08-05 16:47:47: ERROR[Main]: (*temporary) = string: "Xio"
2017-08-05 16:47:47: ERROR[Main]: (*temporary) = userdata: 0x40eb8000
2017-08-05 16:47:47: ERROR[Main]: (*temporary) = string: "attempt to call method 'update_inventory' (a nil value)"
```
:large_orange_diamond:
|
non_defect
|
server crash error servererror asyncerr serverthread run lua runtime error from mod in callback detached inventory onmove servers minetest games ugxrealms mods clothing init lua attempt to call method update inventory a nil value error stack traceback error error lua function anonymous at file home minetestservers minetest games ugxrealms mods clothing init lua best guess error local variables error inv userdata error from list string clothing error from index number error to list string clothing error to index number error count number error player userdata error plaver inv userdata error stack userdata error temporary nil error temporary table run callbacks function set player clothing function register on equip function formspec size bgcolor background listcolors error fff list list image image image image image error mage image image register on update function inv mod unified inventory registered callbacks table register on unequip function error temporary userdata error temporary string xio error temporary userdata error temporary string attempt to call method update inventory a nil value large orange diamond
| 0
|
198,850
| 22,674,164,974
|
IssuesEvent
|
2022-07-04 01:23:30
|
Thezone1975/tabliss
|
https://api.github.com/repos/Thezone1975/tabliss
|
opened
|
CVE-2022-25758 (Medium) detected in scss-tokenizer-0.2.3.tgz
|
security vulnerability
|
## CVE-2022-25758 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>scss-tokenizer-0.2.3.tgz</b></p></summary>
<p>A tokenzier for Sass' SCSS syntax</p>
<p>Library home page: <a href="https://registry.npmjs.org/scss-tokenizer/-/scss-tokenizer-0.2.3.tgz">https://registry.npmjs.org/scss-tokenizer/-/scss-tokenizer-0.2.3.tgz</a></p>
<p>Path to dependency file: /tabliss/package.json</p>
<p>Path to vulnerable library: /node_modules/scss-tokenizer/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.12.0.tgz (Root Library)
- sass-graph-2.2.4.tgz
- :x: **scss-tokenizer-0.2.3.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package scss-tokenizer are vulnerable to Regular Expression Denial of Service (ReDoS) via the loadAnnotation() function, due to the usage of insecure regex.
<p>Publish Date: 2022-07-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25758>CVE-2022-25758</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-25758">https://nvd.nist.gov/vuln/detail/CVE-2022-25758</a></p>
<p>Release Date: 2022-07-01</p>
<p>Fix Resolution: no_fix</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
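The vulnerability class named above — catastrophic backtracking (ReDoS) — does not depend on scss-tokenizer's actual pattern, which is not quoted in the advisory; the classic prone shape can be sketched as:

```python
import re

# Classic ReDoS-prone shape: nested quantifiers over the same character.
# A non-matching input with n trailing "a"s forces roughly 2^n
# backtracking steps; the inputs here are kept tiny so this finishes
# instantly, but long inputs against such a pattern hang the matcher.
evil = re.compile(r"^(a+)+$")

assert evil.match("aaaa")          # well-formed input matches quickly
assert evil.match("aaab") is None  # small reject; long ones explode
```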
|
True
|
CVE-2022-25758 (Medium) detected in scss-tokenizer-0.2.3.tgz - ## CVE-2022-25758 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>scss-tokenizer-0.2.3.tgz</b></p></summary>
<p>A tokenzier for Sass' SCSS syntax</p>
<p>Library home page: <a href="https://registry.npmjs.org/scss-tokenizer/-/scss-tokenizer-0.2.3.tgz">https://registry.npmjs.org/scss-tokenizer/-/scss-tokenizer-0.2.3.tgz</a></p>
<p>Path to dependency file: /tabliss/package.json</p>
<p>Path to vulnerable library: /node_modules/scss-tokenizer/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.12.0.tgz (Root Library)
- sass-graph-2.2.4.tgz
- :x: **scss-tokenizer-0.2.3.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package scss-tokenizer are vulnerable to Regular Expression Denial of Service (ReDoS) via the loadAnnotation() function, due to the usage of insecure regex.
<p>Publish Date: 2022-07-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25758>CVE-2022-25758</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-25758">https://nvd.nist.gov/vuln/detail/CVE-2022-25758</a></p>
<p>Release Date: 2022-07-01</p>
<p>Fix Resolution: no_fix</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in scss tokenizer tgz cve medium severity vulnerability vulnerable library scss tokenizer tgz a tokenzier for sass scss syntax library home page a href path to dependency file tabliss package json path to vulnerable library node modules scss tokenizer package json dependency hierarchy node sass tgz root library sass graph tgz x scss tokenizer tgz vulnerable library vulnerability details all versions of package scss tokenizer are vulnerable to regular expression denial of service redos via the loadannotation function due to the usage of insecure regex publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution no fix step up your open source security game with mend
| 0
|
565,951
| 16,773,443,673
|
IssuesEvent
|
2021-06-14 17:36:18
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
Explore solutions for running skaffold with minikube+containerd
|
kind/feature priority/important-longterm
|
Right now skaffold errors out if it detects that minikube is running with containerd
To solve this issue, we need to:
- Find a suitable docker-env alternative for continerd runtime
|
1.0
|
Explore solutions for running skaffold with minikube+containerd - Right now skaffold errors out if it detects that minikube is running with containerd
To solve this issue, we need to:
- Find a suitable docker-env alternative for continerd runtime
|
non_defect
|
explore solutions for running skaffold with minikube containerd right now skaffold errors out if it detects that minikube is running with containerd to solve this issue we need to find a suitable docker env alternative for continerd runtime
| 0
|
399,200
| 27,231,154,837
|
IssuesEvent
|
2023-02-21 13:20:44
|
felangel/bloc
|
https://api.github.com/repos/felangel/bloc
|
closed
|
Sharing blocs
|
documentation
|
Hi thanks for the great library.
I got a question related to sharing code when using bloc.
Let's imagine there is an app for desktop and mobile which got a feature. On mobile the feature is very basic. On desktop the same feature is more complex with additional functionality. That said what is the correct path to go from here. Either implementing 1 platform independent `Bloc` or 2 platform dependent `Bloc`s. Should `Bloc`s be platform independent or not in general ?
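One common answer — not specific to the bloc library — is a shared, platform-independent core with a thin platform-specific extension; a minimal sketch (Python stand-in for the Dart types, all names hypothetical):

```python
class BaseFeatureBloc:
    # Platform-independent core: the basic feature both apps share.
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


class DesktopFeatureBloc(BaseFeatureBloc):
    # Desktop-only extension layered on top of the shared core.
    def bulk_add(self, items):
        for it in items:
            self.add(it)
```

Under this layout the mobile app uses `BaseFeatureBloc` directly and only the desktop app instantiates the subclass, so the shared logic lives in one place.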
|
1.0
|
Sharing blocs - Hi thanks for the great library.
I got a question related to sharing code when using bloc.
Let's imagine there is an app for desktop and mobile which got a feature. On mobile the feature is very basic. On desktop the same feature is more complex with additional functionality. That said what is the correct path to go from here. Either implementing 1 platform independent `Bloc` or 2 platform dependent `Bloc`s. Should `Bloc`s be platform independent or not in general ?
|
non_defect
|
sharing blocs hi thanks for the great library i got a question related to sharing code when using bloc let s imagine there is an app for desktop and mobile which got a feature on mobile the feature is very basic on desktop the same feature is more complex with additional functionality that said what is the correct path to go from here either implementing platform independent bloc or platform dependent bloc s should bloc s be platform independent or not in general
| 0
|
65,504
| 8,816,890,296
|
IssuesEvent
|
2018-12-30 16:33:54
|
solid/solid-auth-client
|
https://api.github.com/repos/solid/solid-auth-client
|
closed
|
Explain details of fetch function
|
documentation
|
The readme states `The fetch method mimics the browser's fetch API.` Does this mean that the input and output are exactly the same as the browser's fetch API?
|
1.0
|
Explain details of fetch function - The readme states `The fetch method mimics the browser's fetch API.` Does this mean that the input and output are exactly the same as the browser's fetch API?
|
non_defect
|
explain details of fetch function the readme states the fetch method mimics the browser s fetch api does this mean that the input and output are exactly the same as the browser s fetch api
| 0
|
6,931
| 2,610,318,146
|
IssuesEvent
|
2015-02-26 19:42:41
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
closed
|
Gameplay Error
|
auto-migrated Priority-Medium Type-Defect
|
```
So, I finally reached their tech level. But discovered that they are only
buildable on Courscant. (The Clone wars GC).
Are they not supposed to be buildable on Kuat, Mon Cal, etc.
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 5 May 2011 at 11:31
|
1.0
|
Gameplay Error - ```
So, I finally reached their tech level. But discovered that they are only
buildable on Courscant. (The Clone wars GC).
Are they not supposed to be buildable on Kuat, Mon Cal, etc.
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 5 May 2011 at 11:31
|
defect
|
gameplay error so i finally reached their tech level but discovered that they are only buildable on courscant the clone wars gc are they not supposed to be buildable on kuat mon cal etc original issue reported on code google com by gmail com on may at
| 1
|
27,312
| 4,963,625,839
|
IssuesEvent
|
2016-12-03 10:16:46
|
contao/core-bundle
|
https://api.github.com/repos/contao/core-bundle
|
closed
|
Database connection refused for certain database names
|
defect
|
I have set up a new Contao 4.2.4 installation. After running the install tool it asks again for the database credentials. The input fields are already pre-filled with the values from the `parameters.yml` config file. However, Contao refuses to connect to the database as long as the database name is built according to this scheme: `foo_example.com`. Only after changing the database name to e.g. `foo_example` I could establish the database connection.
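A likely culprit is the unquoted `.` in the database name, which SQL parses as a `database.table` separator. A hedged sketch of MySQL-style identifier quoting (not Contao's actual code) shows why quoting makes such names safe:

```python
def quote_identifier(name):
    # Backtick-quote a MySQL identifier so a literal "." (or other
    # special character) inside the name is not treated as a separator;
    # embedded backticks are doubled per MySQL's escaping rule.
    return "`" + name.replace("`", "``") + "`"

print(quote_identifier("foo_example.com"))  # → `foo_example.com`
```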
|
1.0
|
Database connection refused for certain database names - I have set up a new Contao 4.2.4 installation. After running the install tool it asks again for the database credentials. The input fields are already pre-filled with the values from the `parameters.yml` config file. However, Contao refuses to connect to the database as long as the database name is built according to this scheme: `foo_example.com`. Only after changing the database name to e.g. `foo_example` I could establish the database connection.
|
defect
|
database connection refused for certain database names i have set up a new contao installation after running the install tool it asks again for the database credentials the input fields are already pre filled with the values from the parameters yml config file however contao refuses to connect to the database as long as the database name is built according to this scheme foo example com only after changing the database name to e g foo example i could establish the database connection
| 1
|
61,198
| 17,023,633,182
|
IssuesEvent
|
2021-07-03 03:01:33
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Embeddable HTML has "&" instead of "&amp;"
|
Component: admin Priority: minor Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 4.30pm, Friday, 10th September 2010]**
The embeddable HTML generated by the Export tab isn't valid (although it works) because it shows a literal "&" instead of "&amp;".
The error message shown by http://validator.w3.org refers for an explanation to http://www.htmlhelp.com/tools/validator/problems.html#amp
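The required escaping is exactly what standard HTML-encoding helpers produce; a minimal sketch of the fix (the URL below is a hypothetical example, not the actual exported markup):

```python
from html import escape

# A raw "&" in generated markup must become "&amp;" to validate.
url = "https://example.com/export?id=1&format=html"
print(escape(url, quote=False))  # → https://example.com/export?id=1&amp;format=html
```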
|
1.0
|
Embeddable HTML has "&" instead of "&amp;" - **[Submitted to the original trac issue database at 4.30pm, Friday, 10th September 2010]**
The embeddable HTML generated by the Export tab isn't valid (although it works) because it shows a literal "&" instead of "&amp;".
The error message shown by http://validator.w3.org refers for an explanation to http://www.htmlhelp.com/tools/validator/problems.html#amp
|
defect
|
embeddable html has instead of amp the embeddable html generated by the export tab isn t valid although it works because shows literal instead of amp the error message shown by refers for an explanation to
| 1
|
2,807
| 2,607,946,224
|
IssuesEvent
|
2015-02-26 00:33:23
|
chrsmithdemos/switchlist
|
https://api.github.com/repos/chrsmithdemos/switchlist
|
opened
|
SwitchList should always open the last data file.
|
auto-migrated Priority-Medium Type-Defect
|
```
Most people using SwitchList will only have one layout file. Currently, when
launching SwitchList, it always opens a new empty SwitchList. Instead,
SwitchList should open the last layout file.
Workaround for now: double-click on the layout file instead of the program.
```
-----
Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 3 Jan 2013 at 11:44
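The requested behavior is plain "remember the last document" state; a minimal in-memory sketch (class and method names hypothetical):

```python
class AppState:
    # Remember the most recently opened layout file and prefer it
    # over a new empty document at the next launch.
    def __init__(self):
        self._last = None

    def opened(self, path):
        self._last = path

    def file_to_open_on_launch(self, default="<new empty layout>"):
        return self._last or default
```

In a real app the remembered path would be persisted (e.g. in user defaults) between launches rather than held in memory.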
|
1.0
|
SwitchList should always open the last data file. - ```
Most people using SwitchList will only have one layout file. Currently, when
launching SwitchList, it always opens a new empty SwitchList. Instead,
SwitchList should open the last layout file.
Workaround for now: double-click on the layout file instead of the program.
```
-----
Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 3 Jan 2013 at 11:44
|
defect
|
switchlist should always open the last data file most people using switchlist will only have one layout file currently when launching switchlist it always opens a new empty switchlist instead switchlist should open the last layout file workaround for now double click on the layout file instead of the program original issue reported on code google com by rwbowdi gmail com on jan at
| 1
|
481,856
| 13,893,109,291
|
IssuesEvent
|
2020-10-19 13:07:24
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
Build: Eco Staging build failing to create new default world
|
Category: DevOps Priority: High Status: Fixed
|
It seems like sometimes the Eco Staging build fails to save the world when it's generating a new one to replace DefaultWorld
|
1.0
|
Build: Eco Staging build failing to create new default world - It seems like sometimes the Eco Staging build fails to save the world when it's generating a new one to replace DefaultWorld
|
non_defect
|
build eco staging build failing to create new default world it seems like sometimes the eco staging build fails to save the world when it s generating a new one to replace defaultworld
| 0
|
8,786
| 2,612,069,310
|
IssuesEvent
|
2015-02-27 11:28:45
|
rbei-etas/busmaster
|
https://api.github.com/repos/rbei-etas/busmaster
|
closed
|
LIN frame type(Master, Slave) radio buttons are getting disabled whenever we switch between Hex and Dec numeric format
|
1.3 patch (defect) 3.3 low priority (EC3)
|
LIN frame type(Master, Slave) radio buttons in Tx window are getting disabled whenever we switch between Hex and Dec numeric format.
1. Open LIN Tx window. Master and Slave radio buttons will be enabled.
2. Change the numeric format(Hex or Dec)
Now the radio buttons gets disabled.
v2.5
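The defect is a UI-state coupling: the display-format toggle should be orthogonal to the frame-type controls. A hedged sketch of the expected (decoupled) behavior, not the actual BUSMASTER code:

```python
class TxWindow:
    # Expected behavior: switching Hex/Dec only changes how numbers are
    # rendered; it must not touch the Master/Slave radio-button state.
    def __init__(self):
        self.numeric_format = "hex"
        self.frame_type_enabled = True

    def set_numeric_format(self, fmt):
        self.numeric_format = fmt  # no side effect on other controls
```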
|
1.0
|
LIN frame type(Master, Slave) radio buttons are getting disabled whenever we switch between Hex and Dec numeric format - LIN frame type(Master, Slave) radio buttons in Tx window are getting disabled whenever we switch between Hex and Dec numeric format.
1. Open LIN Tx window. Master and Slave radio buttons will be enabled.
2. Change the numeric format(Hex or Dec)
Now the radio buttons gets disabled.
v2.5
|
defect
|
lin frame type master slave radio buttons are getting disabled whenever we switch between hex and dec numeric format lin frame type master slave radio buttons in tx window are getting disabled whenever we switch between hex and dec numeric format open lin tx window master and slave radio buttons will be enabled change the numeric format hex or dec now the radio buttons gets disabled
| 1
|
75,316
| 25,767,515,962
|
IssuesEvent
|
2022-12-09 04:00:13
|
vector-im/element-call
|
https://api.github.com/repos/vector-im/element-call
|
opened
|
Visual Bug on Element client in call
|
T-Defect
|
### Steps to reproduce
1. Start a call.
2. You can drag the window around a bit and it glitches out.
### Outcome
It glitches out.
https://user-images.githubusercontent.com/56714680/206621151-8d10fa80-32e7-466b-82c3-ea00e8a9c773.mp4
### Operating system
Windows 11 Pro
### Browser information
Element Desktop Nightly
### URL for webapp
_No response_
### Will you send logs?
No
|
1.0
|
Visual Bug on Element client in call - ### Steps to reproduce
1. Start a call.
2. You can drag the window around a bit and it glitches out.
### Outcome
It glitches out.
https://user-images.githubusercontent.com/56714680/206621151-8d10fa80-32e7-466b-82c3-ea00e8a9c773.mp4
### Operating system
Windows 11 Pro
### Browser information
Element Desktop Nightly
### URL for webapp
_No response_
### Will you send logs?
No
|
defect
|
visual bug on element client in call steps to reproduce start a call you can drag the window around a bit and it glitches out outcome it glitches out operating system windows pro browser information element desktop nightly url for webapp no response will you send logs no
| 1
|
60,151
| 14,711,089,767
|
IssuesEvent
|
2021-01-05 06:45:18
|
towavephone/GatsbyBlog
|
https://api.github.com/repos/towavephone/GatsbyBlog
|
opened
|
基于arcgis地图组件的搭建部署
|
/arcgis-map-component-build-deploy/ Gitalk
|
/arcgis-map-component-build-deploy/需求背景 基于公司的要求,需要对地图组件做出选型,以支持在地图上展示线路轨迹 技术选型 选型 优点 缺点 百度地图 大厂支持、UI比较美观、API文档较为清楚 内网搭建访问较为困难 高德地图 大厂支持、UI比较美观、API文档较为清楚 内网搭建访问较为困难 echarts…
|
1.0
|
基于arcgis地图组件的搭建部署 - /arcgis-map-component-build-deploy/需求背景 基于公司的要求,需要对地图组件做出选型,以支持在地图上展示线路轨迹 技术选型 选型 优点 缺点 百度地图 大厂支持、UI比较美观、API文档较为清楚 内网搭建访问较为困难 高德地图 大厂支持、UI比较美观、API文档较为清楚 内网搭建访问较为困难 echarts…
|
non_defect
|
基于arcgis地图组件的搭建部署 arcgis map component build deploy 需求背景 基于公司的要求,需要对地图组件做出选型,以支持在地图上展示线路轨迹 技术选型 选型 优点 缺点 百度地图 大厂支持、ui比较美观、api文档较为清楚 内网搭建访问较为困难 高德地图 大厂支持、ui比较美观、api文档较为清楚 内网搭建访问较为困难 echarts…
| 0
|
276,553
| 20,988,807,928
|
IssuesEvent
|
2022-03-29 07:21:56
|
dgbowl/dgpost
|
https://api.github.com/repos/dgbowl/dgpost
|
closed
|
Front page documentation
|
documentation
|
Create a front page for the documentation, linking to the following items:
- [ ] `recipe` schema description
- [ ] features in `recipes`: `load`, `extract`, `transform`, `save`
- [ ] user documentation for the `transform` module
- [ ] developer documentation for the `transform` module
|
1.0
|
Front page documentation - Create a front page for the documentation, linking to the following items:
- [ ] `recipe` schema description
- [ ] features in `recipes`: `load`, `extract`, `transform`, `save`
- [ ] user documentation for the `transform` module
- [ ] developer documentation for the `transform` module
|
non_defect
|
front page documentation create a front page for the documentation linking to the following items recipe schema description features in recipes load extract transform save user documentation for the transform module developer documentation for the transform module
| 0
|
30,090
| 6,019,777,784
|
IssuesEvent
|
2017-06-07 15:09:02
|
Altium-Designer-addons/scripts-libraries
|
https://api.github.com/repos/Altium-Designer-addons/scripts-libraries
|
closed
|
Multiple PCBs in Single Project problem
|
auto-migrated Priority-Medium Type-Defect
|
```
I tried the MultiPCBProject V2.0 scripts on Altium V13.1.2,
but the "SCH_UpdateAllPCBDocuments" function does not work.
The compile mask is not placed over the respective blankets, resulting in all
components compiled onto all PCBs.
I watched the videos, and I think I did things correctly, but maybe there's an
Altium Preference setting that needs to be enabled.
```
Original issue reported on code.google.com by `rcd...@gmail.com` on 29 Oct 2013 at 3:52
|
1.0
|
Multiple PCBs in Single Project problem - ```
I tried the MultiPCBProject V2.0 scripts on Altium V13.1.2,
but the "SCH_UpdateAllPCBDocuments" function does not work.
The compile mask is not placed over the respective blankets, resulting in all
components compiled onto all PCBs.
I watched the videos, and I think I did things correctly, but maybe there's an
Altium Preference setting that needs to be enabled.
```
Original issue reported on code.google.com by `rcd...@gmail.com` on 29 Oct 2013 at 3:52
|
defect
|
multiple pcbs in single project problem i tried the multipcbproject scripts on altium but the sch updateallpcbdocuments function does not work the compile mask is not placed over the respective blankets resulting in all components compiled onto all pcbs i watched the videos and i think i did things correctly but maybe there s an altium preference setting that needs to be enabled original issue reported on code google com by rcd gmail com on oct at
| 1
|
28,223
| 5,221,387,976
|
IssuesEvent
|
2017-01-27 01:18:28
|
elTiempoVuela/https-finder
|
https://api.github.com/repos/elTiempoVuela/https-finder
|
closed
|
FTBFS: librdf error - property element 'localized' has multiple object node elements
|
auto-migrated Priority-Medium Type-Defect
|
```
The error was thrown during building a Debian package using 0.86 sources.
Please find the attached patch to fix the issue.
Thank you.
```
Original issue reported on code.google.com by `only...@gmail.com` on 19 Nov 2012 at 5:10
Attachments:
- [fix_localized.patch](https://storage.googleapis.com/google-code-attachments/https-finder/issue-64/comment-0/fix_localized.patch)
|
1.0
|
FTBFS: librdf error - property element 'localized' has multiple object node elements - ```
The error was thrown during building a Debian package using 0.86 sources.
Please find the attached patch to fix the issue.
Thank you.
```
Original issue reported on code.google.com by `only...@gmail.com` on 19 Nov 2012 at 5:10
Attachments:
- [fix_localized.patch](https://storage.googleapis.com/google-code-attachments/https-finder/issue-64/comment-0/fix_localized.patch)
|
defect
|
ftbfs librdf error property element localized has multiple object node elements the error was thrown during building a debian package using sources please find the attached patch to fix the issue thank you original issue reported on code google com by only gmail com on nov at attachments
| 1
|
43,187
| 11,544,208,255
|
IssuesEvent
|
2020-02-18 11:00:41
|
Security-Onion-Solutions/security-onion
|
https://api.github.com/repos/Security-Onion-Solutions/security-onion
|
closed
|
Web based configuration app - enhancement request
|
Priority-Medium Type-Defect auto-migrated
|
```
Proposed Features Include:
Rules
============================
1) Display and edit custom rules from /etc/nsm/rules/local.rules.
2) Ability to fire a rule by creating a packet that triggers a rule.
3) Ability to submit arbitrary data into the network to verify a rule works as
intended.
4) Display and edit bpf editor and syntax checker for excluding unwanted
traffic.
5) Automatically update message mapping file when snort restarts by default.
System configuration
============================
1) Add the ability to configure various max disk usage and retention of files.
If possible consolidate these parameters across the security onion suite.
2) Add the ability to configure syslog in the /etc/nsm/<sensor>/barnyard2.conf
file. e.g.
output alert_syslog_full: sensor_name CQ-IDS01-eth1, server 192.168.1.14,
protocol udp, port 514, operation_mode default
3) Ability to turn off and on all of the security onion services taking into
account dependencies. e.g. snort, barnyard, squert, etc.
4) Include pre-defined service selections that produce commonly used
configurations. e.g. Simple IDS: snort & barnyard only. IDS with web
interface: snort, barnyard, snorby.
5) Tweaking features, like cron frequency of various crons.
```
Original issue reported on code.google.com by `vortext...@gmail.com` on 5 Jun 2013 at 2:11
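Point 2 above (configuring syslog output in `barnyard2.conf`) amounts to emitting plain syslog datagrams over UDP toward a collector. A minimal Python sketch of that transport, with the server address taken from the example config and the facility/severity values as illustrative assumptions, not what barnyard2 actually uses:

```python
import socket

def send_syslog(message, server="192.168.1.14", port=514,
                facility=1, severity=4):
    """Send one RFC 3164-style syslog datagram over UDP.

    facility=1 (user-level) and severity=4 (warning) are assumed
    defaults for illustration; alert_syslog_full picks its own values.
    """
    pri = facility * 8 + severity          # <PRI> = facility*8 + severity
    datagram = "<%d>%s" % (pri, message)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(datagram.encode("utf-8"), (server, port))
    finally:
        sock.close()
    return datagram

# Example: the kind of line the config above would emit
# send_syslog("CQ-IDS01-eth1 snort alert: ...", server="192.168.1.14")
```

The `server 192.168.1.14, protocol udp, port 514` parameters in the sample config line select exactly this destination socket on the listening collector.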
|
1.0
|
Web based configuration app - enhancement request - ```
Proposed Features Include:
Rules
============================
1) Display and edit custom rules from /etc/nsm/rules/local.rules.
2) Ability to fire a rule by creating a packet that triggers a rule.
3) Ability to submit arbitrary data into the network to verify a rule works as
intended.
4) Display and edit bpf editor and syntax checker for excluding unwanted
traffic.
5) Automatically update message mapping file when snort restarts by default.
System configuration
============================
1) Add the ability to configure various max disk usage and retention of files.
If possible consolidate these parameters across the security onion suite.
2) Add the ability to configure syslog in the /etc/nsm/<sensor>/barnyard2.conf
file. e.g.
output alert_syslog_full: sensor_name CQ-IDS01-eth1, server 192.168.1.14,
protocol udp, port 514, operation_mode default
3) Ability to turn off and on all of the security onion services taking into
account dependencies. e.g. snort, barnyard, squert, etc.
4) Include pre-defined service selections that produce commonly used
configurations. e.g. Simple IDS: snort & barnyard only. IDS with web
interface: snort, barnyard, snorby.
5) Tweaking features, like cron frequency of various crons.
```
Original issue reported on code.google.com by `vortext...@gmail.com` on 5 Jun 2013 at 2:11
|
defect
|
web based configuration app enhancement request proposed features include rules display and edit custom rules from etc nsm rules local rules ability to fire a rule by creating a packet that triggers a rule ability to submit arbitrary data into the network to verify a rule works as intended display and edit bpf editor and syntax checker for excluding unwanted traffic automatically update message mapping file when snort restarts by default system configuration add the ability to configure various max disk usage and retention of files if possible consolidate these parameters across the security onion suite add the ability to configure syslog in the etc nsm conf file e g output alert syslog full sensor name cq server protocol udp port operation mode default ability to turn off and on all of the security onion services taking into account dependencies e g snort barnyard squert etc include pre defined service selections that produce commonly used configurations e g simple ids snort barnyard only ids with web interface snort barnyard snorby tweaking features like cron frequency of various crons original issue reported on code google com by vortext gmail com on jun at
| 1
|
46,354
| 19,097,949,189
|
IssuesEvent
|
2021-11-29 18:47:32
|
microsoft/botbuilder-dotnet
|
https://api.github.com/repos/microsoft/botbuilder-dotnet
|
closed
|
Not able to parse DateTimeSpec value "the hour" while reading response from luis in bot app
|
bug blocked customer-reported Bot Services customer-replied-to
|
Ask
=======
May I know why the text "the hour" could not be converted to the DateTimeSpec type? Do I need to tweak something on my C# model side to allow it?
Package
==========
```<PackageReference Include="Microsoft.Bot.Builder.AI.Luis" Version="4.12.0" />```
Luis response JSON
==========================
```
{
"useralias": [
"abcd"
],
"number": [
2
],
"datetime": [
{
"timex": [
"(T14, T15,PT1H)"
],
"type": "timerange"
},
"the hour"
],
"personName": [
"Tony"
],
"$instance": {
"useralias": [
{
"type": "useralias",
"text": "abcd",
"startIndex": 0,
"endIndex": 10,
"modelType": "Entity Extractor",
"recognitionSources": [
"model"
]
}
],
"number": [
{
"type": "builtin.number",
"text": "2",
"startIndex": 31,
"endIndex": 32,
"modelType": "Prebuilt Entity Extractor",
"recognitionSources": [
"model"
]
}
],
"datetime": [
{
"type": "builtin.datetimeV2.timerange",
"text": "2 to 3pm",
"startIndex": 31,
"endIndex": 39,
"modelType": "Prebuilt Entity Extractor",
"recognitionSources": [
"model"
]
},
{
"type": "builtin.datetimeV2.duration",
"text": "the hour",
"startIndex": 99,
"endIndex": 107,
"modelType": "Prebuilt Entity Extractor",
"recognitionSources": [
"model"
]
}
],
"personName": [
{
"type": "builtin.personName",
"text": "Tony",
"startIndex": 59,
"endIndex": 65,
"modelType": "Prebuilt Entity Extractor",
"recognitionSources": [
"model"
]
}
]
}
}
```
Error
==========
```
{"Error converting value \"the hour\" to type 'Microsoft.Bot.Builder.AI.Luis.DateTimeSpec'. Path 'entities.datetime[1]', line 1, position 362."}
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.EnsureType(JsonReader reader, Object value, CultureInfo culture, JsonContract contract, Type targetType)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateValueInternal(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateList(IList list, JsonReader reader, JsonArrayContract contract, JsonProperty containerProperty, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateList(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, Object existingValue, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateValueInternal(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateValueInternal(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateValueInternal(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
at Newtonsoft.Json.JsonSerializer.Deserialize(JsonReader reader, Type objectType)
at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value, JsonSerializerSettings settings)
at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value)
at Luis.GeneralLuis.Convert(Object result)
at Microsoft.Bot.Builder.AI.Luis.LuisRecognizer.<RecognizeAsync>d__26`1.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at OOFAssistant.Controllers.NotifyController.<PullAliasRecord>d__17.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at OOFAssistant.Controllers.NotifyController.<BotCallback>d__15.MoveNext()
```
Inner exception
====================
```
Could not cast or convert from System.String to Microsoft.Bot.Builder.AI.Luis.DateTimeSpec.
at Newtonsoft.Json.Utilities.ConvertUtils.EnsureTypeAssignable(Object value, Type initialType, Type targetType)
at Newtonsoft.Json.Utilities.ConvertUtils.ConvertOrCast(Object initialValue, CultureInfo culture, Type targetType)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.EnsureType(JsonReader reader, Object value, CultureInfo culture, JsonContract contract, Type targetType)
```
Convert logic
===========================
```
public void Convert(dynamic result)
{
    // Round-trip the dynamic recognizer result through JSON into the typed model.
    var app = JsonConvert.DeserializeObject<GeneralLuis>(
        JsonConvert.SerializeObject(
            result,
            new JsonSerializerSettings { NullValueHandling = NullValueHandling.Ignore, Error = OnError }
        )
    );
}
```
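The underlying problem is that the LUIS `datetime` array mixes `DateTimeSpec`-shaped objects with bare strings such as "the hour", so a straight typed deserialization of that array fails. Here is a language-agnostic sketch of the tolerant-parsing idea in Python, not the actual `Microsoft.Bot.Builder.AI.Luis` code; the normalized output field names are assumptions based on the JSON above:

```python
import json

def parse_datetime_entity(value):
    """Accept either a DateTimeSpec-shaped object ({'timex': [...], 'type': ...})
    or a bare string such as 'the hour', normalising both into one shape.
    The output field names here are illustrative, not the SDK's model."""
    if isinstance(value, str):
        # Bare string: keep the raw text; no timex expressions are known.
        return {"type": "unknown", "timex": [], "text": value}
    return {"type": value.get("type"),
            "timex": value.get("timex", []),
            "text": None}

def parse_entities(payload):
    """Parse the 'datetime' entity array from a LUIS-style JSON payload."""
    doc = json.loads(payload)
    return [parse_datetime_entity(v) for v in doc.get("datetime", [])]

# Trimmed-down version of the response above: one object, one bare string.
sample = json.dumps({
    "datetime": [
        {"timex": ["(T14, T15,PT1H)"], "type": "timerange"},
        "the hour",
    ]
})
```

In C#, the equivalent fix is a custom Newtonsoft `JsonConverter` on the model property whose `ReadJson` checks the token type (string vs. object) before binding, rather than changing the LUIS response itself.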
|
1.0
|
Not able to parse DateTimeSpec value "the hour" while reading response from luis in bot app - Ask
=======
May I know why the text "the hour" could not be converted to the DateTimeSpec type? Do I need to tweak something on my C# model side to allow it?
Package
==========
```<PackageReference Include="Microsoft.Bot.Builder.AI.Luis" Version="4.12.0" />```
Luis response JSON
==========================
```
{
"useralias": [
"abcd"
],
"number": [
2
],
"datetime": [
{
"timex": [
"(T14, T15,PT1H)"
],
"type": "timerange"
},
"the hour"
],
"personName": [
"Tony"
],
"$instance": {
"useralias": [
{
"type": "useralias",
"text": "abcd",
"startIndex": 0,
"endIndex": 10,
"modelType": "Entity Extractor",
"recognitionSources": [
"model"
]
}
],
"number": [
{
"type": "builtin.number",
"text": "2",
"startIndex": 31,
"endIndex": 32,
"modelType": "Prebuilt Entity Extractor",
"recognitionSources": [
"model"
]
}
],
"datetime": [
{
"type": "builtin.datetimeV2.timerange",
"text": "2 to 3pm",
"startIndex": 31,
"endIndex": 39,
"modelType": "Prebuilt Entity Extractor",
"recognitionSources": [
"model"
]
},
{
"type": "builtin.datetimeV2.duration",
"text": "the hour",
"startIndex": 99,
"endIndex": 107,
"modelType": "Prebuilt Entity Extractor",
"recognitionSources": [
"model"
]
}
],
"personName": [
{
"type": "builtin.personName",
"text": "Tony",
"startIndex": 59,
"endIndex": 65,
"modelType": "Prebuilt Entity Extractor",
"recognitionSources": [
"model"
]
}
]
}
}
```
Error
==========
```
{"Error converting value \"the hour\" to type 'Microsoft.Bot.Builder.AI.Luis.DateTimeSpec'. Path 'entities.datetime[1]', line 1, position 362."}
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.EnsureType(JsonReader reader, Object value, CultureInfo culture, JsonContract contract, Type targetType)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateValueInternal(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateList(IList list, JsonReader reader, JsonArrayContract contract, JsonProperty containerProperty, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateList(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, Object existingValue, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateValueInternal(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateValueInternal(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateValueInternal(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
at Newtonsoft.Json.JsonSerializer.Deserialize(JsonReader reader, Type objectType)
at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value, JsonSerializerSettings settings)
at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value)
at Luis.GeneralLuis.Convert(Object result)
at Microsoft.Bot.Builder.AI.Luis.LuisRecognizer.<RecognizeAsync>d__26`1.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at OOFAssistant.Controllers.NotifyController.<PullAliasRecord>d__17.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at OOFAssistant.Controllers.NotifyController.<BotCallback>d__15.MoveNext()
```
Inner exception
====================
```
Could not cast or convert from System.String to Microsoft.Bot.Builder.AI.Luis.DateTimeSpec.
at Newtonsoft.Json.Utilities.ConvertUtils.EnsureTypeAssignable(Object value, Type initialType, Type targetType)
at Newtonsoft.Json.Utilities.ConvertUtils.ConvertOrCast(Object initialValue, CultureInfo culture, Type targetType)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.EnsureType(JsonReader reader, Object value, CultureInfo culture, JsonContract contract, Type targetType)
```
Convert logic
===========================
```
public void Convert(dynamic result)
{
    // Round-trip the dynamic recognizer result through JSON into the typed model.
    var app = JsonConvert.DeserializeObject<GeneralLuis>(
        JsonConvert.SerializeObject(
            result,
            new JsonSerializerSettings { NullValueHandling = NullValueHandling.Ignore, Error = OnError }
        )
    );
}
```
|
non_defect
|
not able to parse datetimespec value the hour while reading response from luis in bot app ask may i know why the hour text could not be converted in datetimespec type do i need to tweak something from my c model side to allow it package luis response json useralias abcd number datetime timex type timerange the hour personname tony instance useralias type useralias text abcd startindex endindex modeltype entity extractor recognitionsources model number type builtin number text startindex endindex modeltype prebuilt entity extractor recognitionsources model datetime type builtin timerange text to startindex endindex modeltype prebuilt entity extractor recognitionsources model type builtin duration text the hour startindex endindex modeltype prebuilt entity extractor recognitionsources model personname type builtin personname text tony startindex endindex modeltype prebuilt entity extractor recognitionsources model error error converting value the hour to type microsoft bot builder ai luis datetimespec path entities datetime line position at newtonsoft json serialization jsonserializerinternalreader ensuretype jsonreader reader object value cultureinfo culture jsoncontract contract type targettype at newtonsoft json serialization jsonserializerinternalreader createvalueinternal jsonreader reader type objecttype jsoncontract contract jsonproperty member jsoncontainercontract containercontract jsonproperty containermember object existingvalue at newtonsoft json serialization jsonserializerinternalreader populatelist ilist list jsonreader reader jsonarraycontract contract jsonproperty containerproperty string id at newtonsoft json serialization jsonserializerinternalreader createlist jsonreader reader type objecttype jsoncontract contract jsonproperty member object existingvalue string id at newtonsoft json serialization jsonserializerinternalreader createvalueinternal jsonreader reader type objecttype jsoncontract contract jsonproperty member jsoncontainercontract 
containercontract jsonproperty containermember object existingvalue at newtonsoft json serialization jsonserializerinternalreader setpropertyvalue jsonproperty property jsonconverter propertyconverter jsoncontainercontract containercontract jsonproperty containerproperty jsonreader reader object target at newtonsoft json serialization jsonserializerinternalreader populateobject object newobject jsonreader reader jsonobjectcontract contract jsonproperty member string id at newtonsoft json serialization jsonserializerinternalreader createobject jsonreader reader type objecttype jsoncontract contract jsonproperty member jsoncontainercontract containercontract jsonproperty containermember object existingvalue at newtonsoft json serialization jsonserializerinternalreader createvalueinternal jsonreader reader type objecttype jsoncontract contract jsonproperty member jsoncontainercontract containercontract jsonproperty containermember object existingvalue at newtonsoft json serialization jsonserializerinternalreader setpropertyvalue jsonproperty property jsonconverter propertyconverter jsoncontainercontract containercontract jsonproperty containerproperty jsonreader reader object target at newtonsoft json serialization jsonserializerinternalreader populateobject object newobject jsonreader reader jsonobjectcontract contract jsonproperty member string id at newtonsoft json serialization jsonserializerinternalreader createobject jsonreader reader type objecttype jsoncontract contract jsonproperty member jsoncontainercontract containercontract jsonproperty containermember object existingvalue at newtonsoft json serialization jsonserializerinternalreader createvalueinternal jsonreader reader type objecttype jsoncontract contract jsonproperty member jsoncontainercontract containercontract jsonproperty containermember object existingvalue at newtonsoft json serialization jsonserializerinternalreader deserialize jsonreader reader type objecttype boolean checkadditionalcontent at 
newtonsoft json jsonserializer deserializeinternal jsonreader reader type objecttype at newtonsoft json jsonserializer deserialize jsonreader reader type objecttype at newtonsoft json jsonconvert deserializeobject string value type type jsonserializersettings settings at newtonsoft json jsonconvert deserializeobject string value jsonserializersettings settings at newtonsoft json jsonconvert deserializeobject string value at luis generalluis convert object result at microsoft bot builder ai luis luisrecognizer d movenext at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices taskawaiter getresult at oofassistant controllers notifycontroller d movenext at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices taskawaiter getresult at oofassistant controllers notifycontroller d movenext inner exception could not cast or convert from system string to microsoft bot builder ai luis datetimespec at newtonsoft json utilities convertutils ensuretypeassignable object value type initialtype type targettype at newtonsoft json utilities convertutils convertorcast object initialvalue cultureinfo culture type targettype at newtonsoft json serialization jsonserializerinternalreader ensuretype jsonreader reader object value cultureinfo culture jsoncontract contract type targettype convert logic public void convert dynamic result var app jsonconvert deserializeobject jsonconvert serializeobject result new jsonserializersettings nullvaluehandling nullvaluehandling ignore error onerror
| 0
|
125,857
| 16,848,173,772
|
IssuesEvent
|
2021-06-20 00:00:05
|
microsoft/fluentui
|
https://api.github.com/repos/microsoft/fluentui
|
closed
|
Label for SwatchColorPicker
|
Component: SwatchColorPicker Needs: Design Resolution: Soft Close Status: Not on Roadmap Type: Feature
|
#### Describe the feature that you would like added
Descriptive ``Label`` property for ``SwatchColorPicker``, just like [ChoiceGroup has a descriptive label](https://developer.microsoft.com/en-us/fabric#/controls/web/choicegroup#IChoiceGroupProps).
#### What component or utility would this be added to
[SwatchColorPicker](https://developer.microsoft.com/en-us/fabric#/controls/web/swatchcolorpicker)
|
1.0
|
Label for SwatchColorPicker - #### Describe the feature that you would like added
Descriptive ``Label`` property for ``SwatchColorPicker``, just like [ChoiceGroup has a descriptive label](https://developer.microsoft.com/en-us/fabric#/controls/web/choicegroup#IChoiceGroupProps).
#### What component or utility would this be added to
[SwatchColorPicker](https://developer.microsoft.com/en-us/fabric#/controls/web/swatchcolorpicker)
|
non_defect
|
label for swatchcolorpicker describe the feature that you would like added descriptive label property for swatchcolorpicker just like what component or utility would this be added to
| 0
|
10,332
| 12,309,457,067
|
IssuesEvent
|
2020-05-12 08:59:31
|
ahmedkaludi/schema-and-structured-data-for-wp
|
https://api.github.com/repos/ahmedkaludi/schema-and-structured-data-for-wp
|
closed
|
Compatibility with the EventON plugin.
|
3rd party compatibility [P: HIGH] enhancement
|
Need to make compatibility with the EventON plugin(https://www.myeventon.com/).
Ref: https://secure.helpscout.net/conversation/1070131850/108860?folderId=3257665
|
True
|
Compatibility with the EventON plugin. - Need to make compatibility with the EventON plugin(https://www.myeventon.com/).
Ref: https://secure.helpscout.net/conversation/1070131850/108860?folderId=3257665
|
non_defect
|
compatibility with the eventon plugin need to make compatibility with the eventon plugin ref
| 0
|
479,521
| 13,798,265,842
|
IssuesEvent
|
2020-10-10 00:42:47
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
[docdb] Investigate raft add server fault tolerance
|
area/docdb priority/high
|
We've noticed a potential fault-tolerance issue while expanding raft groups. We should try to set up a repro and validate this scenario:
- a working 3 node RF3 quorum gets issued an ADD_SERVER ChangeConfig
- the 4th node is first added in as a PRE_VOTER while it does an RBS
- eventually the RBS finishes and we need to execute another ChangeConfig to bump it from PRE_VOTER to VOTER
In order to execute the initial ChangeConfig, we only need 2/3 nodes to be active, so one failed node would still allow the operation to go through. In order to execute the second ChangeConfig though, we need 3/4 nodes to be active, but the 4th node is not in VOTER state yet, so it's really 3/3 required votes.
So at this point, if any of the initial 3 peers is not responding (dead), then the ChangeConfig operation will not go through.
The scenario we've observed was that one of the first 3 nodes goes down in the time between the 2 ChangeConfig operations and is down for long enough that the node is deemed unavailable (default 15m). At that point, the leader logs would have been GCed so it would also try to execute a ChangeConfig to remove the failed follower. This would fail, as there's already one in progress...
At this point, the quorum will require a manual RBS of the failed follower, otherwise it cannot get the required 3 with only 2 live voters.
We should validate the above. It should be rather straightforward to set up a repro with a longer sleep after RBS and a server stop.
cc @mbautin @ttyusupov @spolitov
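The vote arithmetic described above can be checked with a small sketch. This is a simplified model, assuming plain majority quorums and that the promoting ChangeConfig must be acked by a majority of the new 4-voter config while the 4th peer cannot yet vote:

```python
def majority(voters):
    """Smallest strict majority of a raft config with `voters` voters."""
    return voters // 2 + 1

# Step 1: ADD_SERVER ChangeConfig on the original 3-voter config.
step1_needed = majority(3)            # 2 of 3: tolerates one dead peer

# Step 2: promote PRE_VOTER -> VOTER; the new config has 4 voters,
# but only the 3 original peers can actually vote yet.
step2_needed = majority(4)            # 3 of 4
votable_peers = 3                     # the 4th peer is still PRE_VOTER
fault_tolerance = votable_peers - step2_needed   # 0: no original peer may be down
```

With zero slack at step 2, any of the original three peers dying between the two ChangeConfig operations stalls the promotion, which matches the observed scenario.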
|
1.0
|
[docdb] Investigate raft add server fault tolerance - We've noticed a potential fault-tolerance issue while expanding raft groups. We should try to set up a repro and validate this scenario:
- a working 3 node RF3 quorum gets issued an ADD_SERVER ChangeConfig
- the 4th node is first added in as a PRE_VOTER while it does an RBS
- eventually the RBS finishes and we need to execute another ChangeConfig to bump it from PRE_VOTER to VOTER
In order to execute the initial ChangeConfig, we only need 2/3 nodes to be active, so one failed node would still allow the operation to go through. In order to execute the second ChangeConfig though, we need 3/4 nodes to be active, but the 4th node is not in VOTER state yet, so it's really 3/3 required votes.
So at this point, if any of the initial 3 peers is not responding (dead), then the ChangeConfig operation will not go through.
The scenario we've observed was that one of the first 3 nodes goes down in the time between the 2 ChangeConfig operations and is down for long enough that the node is deemed unavailable (default 15m). At that point, the leader logs would have been GCed so it would also try to execute a ChangeConfig to remove the failed follower. This would fail, as there's already one in progress...
At this point, the quorum will require a manual RBS of the failed follower, otherwise it cannot get the required 3 with only 2 live voters.
We should validate the above. It should be rather straightforward to setup a repro with a longer sleep after RBS and a server stop.
cc @mbautin @ttyusupov @spolitov
|
non_defect
|
investigate raft add server fault tolerance we ve noticed a potential fault tolerance issue during expanding of raft groups we should try to setup a repro and validate this scenario a working node quorum gets issued an add server changeconfig the node is first added in as a pre voter while it does an rbs eventually the rbs finishes and we need to execute another changeconfig to bump it from pre voter to voter in order to execute the initial changeconfig we only need nodes to be active so one failed node would still allow the operation to go through in order to execute the second changeconfig though we need nodes to be active but the node is not in voter state yet so it s really required votes so at this point if any of the initial peers is not responding dead then the changeconfig operation will not go through the scenario we ve observed was that one of the first nodes goes down in the time between the changeconfig operations and is down for long enough that the node is deemed unavailable default at that point the leader logs would have been gced so it would also try to execute a changeconfig to remove the failed follower this would fail as there s already one in progress at this point the quorum will require a manual rbs of the failed follower otherwise it cannot get the required with only live voters we should validate the above it should be rather straightforward to setup a repro with a longer sleep after rbs and a server stop cc mbautin ttyusupov spolitov
| 0
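The quorum arithmetic described in this record can be checked with a small sketch (plain Python; the `majority` helper name is ours, not YugabyteDB's):

```python
def majority(voters: int) -> int:
    """Smallest number of votes that forms a Raft majority among `voters` peers."""
    return voters // 2 + 1

# First ChangeConfig: 3 voters, so 2 of 3 must be live.
assert majority(3) == 2

# Second ChangeConfig: 4 peers, but the 4th is still PRE_VOTER, so the
# 3 required votes must all come from the original 3 voters -- losing
# any one of them blocks the config change.
assert majority(4) == 3
```

This is why a single failed node is tolerable before the first ChangeConfig but fatal between the two.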
|
34,237
| 7,431,748,766
|
IssuesEvent
|
2018-03-25 17:43:45
|
Yahkal/replicaisland
|
https://api.github.com/repos/Yahkal/replicaisland
|
closed
|
Game Freezes
|
Priority-Medium Type-Defect auto-migrated
|
```
What steps will reproduce the problem?
1. Hi,
Im getting android application not responding dialogs. Right after you click
"ADULT" and the female character walks to the names of the game creators it
says please wait and locks up and have to continual force close.
2.
3.
What is the expected output? What do you see instead?
Normal game play
What version of the product are you using? On what operating system?
Android 2.2
Please provide any additional information below.
```
Original issue reported on code.google.com by `jgrnetso...@gmail.com` on 15 Feb 2011 at 5:21
|
1.0
|
Game Freezes - ```
What steps will reproduce the problem?
1. Hi,
Im getting android application not responding dialogs. Right after you click
"ADULT" and the female character walks to the names of the game creators it
says please wait and locks up and have to continual force close.
2.
3.
What is the expected output? What do you see instead?
Normal game play
What version of the product are you using? On what operating system?
Android 2.2
Please provide any additional information below.
```
Original issue reported on code.google.com by `jgrnetso...@gmail.com` on 15 Feb 2011 at 5:21
|
defect
|
game freezes what steps will reproduce the problem hi im getting android application not responding dialogs right after you click adult and the female character walks to the names of the game creators it says please wait and locks up and have to continual force close what is the expected output what do you see instead normal game play what version of the product are you using on what operating system android please provide any additional information below original issue reported on code google com by jgrnetso gmail com on feb at
| 1
|
50,881
| 10,566,854,337
|
IssuesEvent
|
2019-10-05 21:59:53
|
comictagger/comictagger
|
https://api.github.com/repos/comictagger/comictagger
|
closed
|
Use unrar-cffi
|
code enhancement
|
Including the unrar source in the repo initially seemed a good idea but it introduced a lot of complexity both for the build and the user experience.
I started a new project [unrar-cffi](https://github.com/davide-romanini/unrar-cffi) with the aim to include it just as any other pip dependency without any other complications.
So this issue on comictagger will need to:
* include unrar-cffi dependency
* remove all the unrar related code (except for the `rar_exe` for writing that's still needed btw)
|
1.0
|
Use unrar-cffi - Including the unrar source in the repo initially seemed a good idea but it introduced a lot of complexity both for the build and the user experience.
I started a new project [unrar-cffi](https://github.com/davide-romanini/unrar-cffi) with the aim to include it just as any other pip dependency without any other complications.
So this issue on comictagger will need to:
* include unrar-cffi dependency
* remove all the unrar related code (except for the `rar_exe` for writing that's still needed btw)
|
non_defect
|
use unrar cffi including the unrar source in the repo initially seemed a good idea but it introduced a lot of complexity both for the build and the user experience i started a new project with the aim to include it just as any other pip dependency without any other complications so this issue on comictagger will need to include unrar cffi dependency remove all the unrar related code except for the rar exe for writing that s still needed btw
| 0
|
94,669
| 19,573,487,514
|
IssuesEvent
|
2022-01-04 12:53:32
|
Onelinerhub/onelinerhub
|
https://api.github.com/repos/Onelinerhub/onelinerhub
|
closed
|
Short solution needed: "Get key TTL in Redis" (python-redis)
|
help wanted good first issue code python-redis
|
Please help us write most modern and shortest code solution for this issue:
**Get key TTL in Redis** (technology: [python-redis](https://onelinerhub.com/python-redis))
### Fast way
Just write the code solution in the comments.
### Preferred way
1. Create pull request with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox).
2. Don't forget to use comments to make solution explained.
3. Link to this issue in comments of pull request.
|
1.0
|
Short solution needed: "Get key TTL in Redis" (python-redis) - Please help us write most modern and shortest code solution for this issue:
**Get key TTL in Redis** (technology: [python-redis](https://onelinerhub.com/python-redis))
### Fast way
Just write the code solution in the comments.
### Preferred way
1. Create pull request with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox).
2. Don't forget to use comments to make solution explained.
3. Link to this issue in comments of pull request.
|
non_defect
|
short solution needed get key ttl in redis python redis please help us write most modern and shortest code solution for this issue get key ttl in redis technology fast way just write the code solution in the comments prefered way create pull request with a new code file inside don t forget to use comments to make solution explained link to this issue in comments of pull request
| 0
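A minimal sketch of the solution this record requests, assuming a client object exposing redis-py's `ttl` method (the `key_ttl` helper name is ours):

```python
def key_ttl(client, key):
    """Remaining time-to-live of `key` in seconds.

    Mirrors the Redis TTL command as exposed by redis-py's Redis.ttl:
    -1 means the key exists but never expires, -2 means it is missing.
    """
    return client.ttl(key)

# Usage with redis-py (assumes a server on localhost:6379):
#   import redis
#   r = redis.Redis()
#   r.set("session", "abc", ex=60)
#   key_ttl(r, "session")
```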
|
281,932
| 30,889,011,385
|
IssuesEvent
|
2023-08-04 02:07:12
|
hshivhare67/kernel_v4.1.15_CVE-2019-10220
|
https://api.github.com/repos/hshivhare67/kernel_v4.1.15_CVE-2019-10220
|
reopened
|
CVE-2021-0707 (High) detected in linuxlinux-4.4.302
|
Mend: dependency security vulnerability
|
## CVE-2021-0707 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.4.302</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/dma-buf/dma-buf.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/dma-buf/dma-buf.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In dma_buf_release of dma-buf.c, there is a possible memory corruption due to a use after free. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android kernelAndroid ID: A-155756045References: Upstream kernel
<p>Publish Date: 2022-04-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-0707>CVE-2021-0707</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-0707">https://nvd.nist.gov/vuln/detail/CVE-2021-0707</a></p>
<p>Release Date: 2022-04-12</p>
<p>Fix Resolution: linux-libc-headers - 5.8;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-0707 (High) detected in linuxlinux-4.4.302 - ## CVE-2021-0707 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.4.302</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/dma-buf/dma-buf.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/dma-buf/dma-buf.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In dma_buf_release of dma-buf.c, there is a possible memory corruption due to a use after free. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android kernelAndroid ID: A-155756045References: Upstream kernel
<p>Publish Date: 2022-04-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-0707>CVE-2021-0707</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-0707">https://nvd.nist.gov/vuln/detail/CVE-2021-0707</a></p>
<p>Release Date: 2022-04-12</p>
<p>Fix Resolution: linux-libc-headers - 5.8;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files drivers dma buf dma buf c drivers dma buf dma buf c vulnerability details in dma buf release of dma buf c there is a possible memory corruption due to a use after free this could lead to local escalation of privilege with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android kernelandroid id a upstream kernel publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux libc headers linux yocto gitautoinc gitautoinc step up your open source security game with mend
| 0
|
45,807
| 13,055,751,253
|
IssuesEvent
|
2020-07-30 02:37:41
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
Bad error message when reading in non .i3 file (Trac #78)
|
Incomplete Migration Migrated from Trac dataio defect
|
Migrated from https://code.icecube.wisc.edu/ticket/78
```json
{
"status": "closed",
"changetime": "2007-09-07T17:13:20",
"description": "I was using python's glob() in order to get a list of input files together. Unfortunately, a text file leaked into the list. I got this error message:\n/local/proth/work/jeb/V00-00-05-src/dataio/private/dataio/FrameIO.cxx:253: FATAL: Frame in file is version 1684107084, this software can read only up to 3\n\nFortunately, someone recognized that error and saved me lots of headaches. Maybe there could be a warning if a file you're reading doesn't end in \".i3\" or \".i3.gz\". Or maybe dataio could recognize a bad file format and give a meaningful error. I know for sure this issue exists in the latest release of offline-software, so forgive me if it's solved on the trunk already.",
"reporter": "proth",
"cc": "",
"resolution": "fixed",
"_ts": "1189185200000000",
"component": "dataio",
"summary": "Bad error message when reading in non .i3 file",
"priority": "major",
"keywords": "",
"time": "2007-07-18T16:35:31",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
|
1.0
|
Bad error message when reading in non .i3 file (Trac #78) - Migrated from https://code.icecube.wisc.edu/ticket/78
```json
{
"status": "closed",
"changetime": "2007-09-07T17:13:20",
"description": "I was using python's glob() in order to get a list of input files together. Unfortunately, a text file leaked into the list. I got this error message:\n/local/proth/work/jeb/V00-00-05-src/dataio/private/dataio/FrameIO.cxx:253: FATAL: Frame in file is version 1684107084, this software can read only up to 3\n\nFortunately, someone recognized that error and saved me lots of headaches. Maybe there could be a warning if a file you're reading doesn't end in \".i3\" or \".i3.gz\". Or maybe dataio could recognize a bad file format and give a meaningful error. I know for sure this issue exists in the latest release of offline-software, so forgive me if it's solved on the trunk already.",
"reporter": "proth",
"cc": "",
"resolution": "fixed",
"_ts": "1189185200000000",
"component": "dataio",
"summary": "Bad error message when reading in non .i3 file",
"priority": "major",
"keywords": "",
"time": "2007-07-18T16:35:31",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
|
defect
|
bad error message when reading in non file trac migrated from json status closed changetime description i was using python s glob in order to get a list of input files together unfortunately a text file leaked into the list i got this error message n local proth work jeb src dataio private dataio frameio cxx fatal frame in file is version this software can read only up to n nfortunately someone recognized that error and saved me lots of headaches maybe there could be a warning if a file you re reading doesn t end in or gz or maybe dataio could recognize a bad file format and give a meaningful error i know for sure this issue exists in the latest release of offline software so forgive me if it s solved on the trunk already reporter proth cc resolution fixed ts component dataio summary bad error message when reading in non file priority major keywords time milestone owner troy type defect
| 1
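The warning suggested in this ticket amounts to a simple filename check before handing files to dataio; a sketch (the `looks_like_i3` helper name is ours):

```python
def looks_like_i3(path: str) -> bool:
    """True if the filename carries a recognized dataio suffix."""
    return path.endswith((".i3", ".i3.gz"))

# glob() can sweep in stray files; filtering first avoids the cryptic
# "Frame in file is version ..." fatal from FrameIO.cxx.
candidates = ["run1.i3", "notes.txt", "run2.i3.gz"]
files = [f for f in candidates if looks_like_i3(f)]
```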
|
148,390
| 23,346,336,402
|
IssuesEvent
|
2022-08-09 18:21:19
|
TheSuperHackers/GeneralsGamePatch
|
https://api.github.com/repos/TheSuperHackers/GeneralsGamePatch
|
reopened
|
Emergency Repair Generals Power is very rarely used
|
Design Controversial Minor
|
Emergency Repair Generals Power is very rarely used across all 3 factions. It repairs vehicles in a selected area. Check if there can be anything done to make it more attractive to use.
## Stats
```
SpecialPower SuperweaponEmergencyRepair
ReloadTime = 240000
Object RepairVehiclesInArea_InvisibleMarker_Level1
HealingAmount = 100
Radius = 100
KindOf = VEHICLE
Object RepairVehiclesInArea_InvisibleMarker_Level2
HealingAmount = 200
Radius = 100
KindOf = VEHICLE
Object RepairVehiclesInArea_InvisibleMarker_Level3
HealingAmount = 300
Radius = 100
KindOf = VEHICLE
```
### Proposal 1
Decrease reload time.
### Proposal 2
Increase healing amount.
### Proposal 3
Increase radius.
### Proposal 4
Repair more kinds of things, such as STRUCTURE or INFANTRY.
### Proposal 5
Increase the radius. Make the repair effect linger for 10 seconds (with visible decal). Repair the units over time instead of in one burst. Upgrading the ability increases how long the effect lasts (20, 30 seconds?).
Example implementation: https://github.com/commy2/zerohour/commit/ea284287498fd7733bcaeefca2f099713bb65e4d
|
1.0
|
Emergency Repair Generals Power is very rarely used - Emergency Repair Generals Power is very rarely used across all 3 factions. It repairs vehicles in a selected area. Check if there can be anything done to make it more attractive to use.
## Stats
```
SpecialPower SuperweaponEmergencyRepair
ReloadTime = 240000
Object RepairVehiclesInArea_InvisibleMarker_Level1
HealingAmount = 100
Radius = 100
KindOf = VEHICLE
Object RepairVehiclesInArea_InvisibleMarker_Level2
HealingAmount = 200
Radius = 100
KindOf = VEHICLE
Object RepairVehiclesInArea_InvisibleMarker_Level3
HealingAmount = 300
Radius = 100
KindOf = VEHICLE
```
### Proposal 1
Decrease reload time.
### Proposal 2
Increase healing amount.
### Proposal 3
Increase radius.
### Proposal 4
Repair more kinds of things, such as STRUCTURE or INFANTRY.
### Proposal 5
Increase the radius. Make the repair effect linger for 10 seconds (with visible decal). Repair the units over time instead of in one burst. Upgrading the ability increases how long the effect lasts (20, 30 seconds?).
Example implementation: https://github.com/commy2/zerohour/commit/ea284287498fd7733bcaeefca2f099713bb65e4d
|
non_defect
|
emergency repair generals power is very rarely used emergency repair generals power is very rarely used across all factions it repairs vehicles in a selected area check if there can be anything done to make it more attractive to use stats specialpower superweaponemergencyrepair reloadtime object repairvehiclesinarea invisiblemarker healingamount radius kindof vehicle object repairvehiclesinarea invisiblemarker healingamount radius kindof vehicle object repairvehiclesinarea invisiblemarker healingamount radius kindof vehicle proposal decrease reload time proposal increase healing amount proposal increase radius proposal repair more kinds of things such as structure or infantry proposal increase the radius make the repair effect linger for seconds with visible decal repair the units over time instead of in one burst upgrading the ability increases how long the effect lasts seconds example implementation
| 0
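Proposal 5's lingering repair can be prototyped by spreading HealingAmount over the effect's duration; a sketch of the arithmetic only (the game itself is configured via INI, so this is purely illustrative):

```python
def repair_ticks(healing_amount: float, duration_s: float, tick_s: float = 1.0):
    """Split a one-burst heal into equal per-tick amounts over a duration."""
    ticks = int(duration_s / tick_s)
    return [healing_amount / ticks] * ticks

# Level 3 power (HealingAmount = 300) lingering for 10 seconds:
# 30 HP applied each second instead of 300 at once.
per_tick = repair_ticks(300, 10)
```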
|
254,368
| 19,211,362,086
|
IssuesEvent
|
2021-12-07 02:34:07
|
Jonathon22/TheSatoshiEra
|
https://api.github.com/repos/Jonathon22/TheSatoshiEra
|
opened
|
Create Milestones
|
documentation
|
## Feature Summary
Create Distinct milestones to follow
## Acceptance Criteria
make sure all tickets created are connected to a milestone
## Technical Requirements
- [ ]
- [ ]
- [ ]
|
1.0
|
Create Milestones - ## Feature Summary
Create Distinct milestones to follow
## Acceptance Criteria
make sure all tickets created are connected to a milestone
## Technical Requirements
- [ ]
- [ ]
- [ ]
|
non_defect
|
create milestones feature summary create distinct milestones to follow acceptance criteria make sure all tickets created are connected to a milestone technical requirements
| 0
|
476,735
| 13,749,263,379
|
IssuesEvent
|
2020-10-06 10:13:35
|
mypyc/mypyc
|
https://api.github.com/repos/mypyc/mypyc
|
closed
|
Optimized tagged integer bitwise ops
|
priority-0-high speed
|
Bitwise ops such as `&` and `<<` on tagged integers are slow. Implement primitives for these operations:
* [x] `&`
* [x] `|`
* [x] `^`
* [x] `~`
* [x] `<<`
* [x] `>>`
* [ ] `bit_length()` (not essential but would be nice)
Once we have these, we can modify the dataflow analysis in mypyc to use bit sets represented as ints to speed things up.
#644 is related.
|
1.0
|
Optimized tagged integer bitwise ops - Bitwise ops such as `&` and `<<` on tagged integers are slow. Implement primitives for these operations:
* [x] `&`
* [x] `|`
* [x] `^`
* [x] `~`
* [x] `<<`
* [x] `>>`
* [ ] `bit_length()` (not essential but would be nice)
Once we have these, we can modify the dataflow analysis in mypyc to use bit sets represented as ints to speed things up.
#644 is related.
|
non_defect
|
optimized tagged integer bitwise ops bitwise ops such as and on tagged integers are slow implement primitives for these operations bit length not essential but would be nice once we have these we can modify the dataflow analysis in mypyc to use bit sets represented as ints to speed things up is related
| 0
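The bit-set trick this issue alludes to -- viable once `&`, `|`, and `<<` are fast on tagged ints -- looks like this in plain Python (helper names are illustrative):

```python
def bit_add(s: int, i: int) -> int:
    """Add element i to the set encoded in int s."""
    return s | (1 << i)

def bit_contains(s: int, i: int) -> bool:
    """Membership test via a single mask-and-test."""
    return bool(s & (1 << i))

def bit_union(a: int, b: int) -> int:
    """Set union is one bitwise OR -- the core of fast dataflow joins."""
    return a | b

s = bit_add(bit_add(0, 2), 5)   # {2, 5} encoded as 0b100100
```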
|
223,895
| 24,755,516,790
|
IssuesEvent
|
2022-10-21 17:19:20
|
opensearch-project/opensearch-build
|
https://api.github.com/repos/opensearch-project/opensearch-build
|
opened
|
CVE-2022-43403 (Medium) detected in script-security-1175.v4b_d517d6db_f0.jar
|
security vulnerability
|
## CVE-2022-43403 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>script-security-1175.v4b_d517d6db_f0.jar</b></p></summary>
<p>Allows Jenkins administrators to control what in-process scripts can be run by less-privileged users.</p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /ches/modules-2/files-2.1/org.jenkins-ci.plugins/script-security/1175.v4b_d517d6db_f0/8e7dcd1b5907e01427065eb986e9ea3c83579e52/script-security-1175.v4b_d517d6db_f0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.jenkins-ci.plugins/script-security/1175.v4b_d517d6db_f0/8e7dcd1b5907e01427065eb986e9ea3c83579e52/script-security-1175.v4b_d517d6db_f0.jar</p>
<p>
Dependency Hierarchy:
- :x: **script-security-1175.v4b_d517d6db_f0.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A sandbox bypass vulnerability involving casting an array-like value to an array type in Jenkins Script Security Plugin 1183.v774b_0b_0a_a_451 and earlier allows attackers with permission to define and run sandboxed scripts, including Pipelines, to bypass the sandbox protection and execute arbitrary code in the context of the Jenkins controller JVM.
<p>Publish Date: 2022-10-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-43403>CVE-2022-43403</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.jenkins.io/security/advisory/2022-10-19/#SECURITY-2824%20(1)">https://www.jenkins.io/security/advisory/2022-10-19/#SECURITY-2824%20(1)</a></p>
<p>Release Date: 2022-10-19</p>
<p>Fix Resolution: org.jenkins-ci.plugins:script-security:1184.v85d16b_d851b_3</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
True
|
CVE-2022-43403 (Medium) detected in script-security-1175.v4b_d517d6db_f0.jar - ## CVE-2022-43403 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>script-security-1175.v4b_d517d6db_f0.jar</b></p></summary>
<p>Allows Jenkins administrators to control what in-process scripts can be run by less-privileged users.</p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /ches/modules-2/files-2.1/org.jenkins-ci.plugins/script-security/1175.v4b_d517d6db_f0/8e7dcd1b5907e01427065eb986e9ea3c83579e52/script-security-1175.v4b_d517d6db_f0.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.jenkins-ci.plugins/script-security/1175.v4b_d517d6db_f0/8e7dcd1b5907e01427065eb986e9ea3c83579e52/script-security-1175.v4b_d517d6db_f0.jar</p>
<p>
Dependency Hierarchy:
- :x: **script-security-1175.v4b_d517d6db_f0.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A sandbox bypass vulnerability involving casting an array-like value to an array type in Jenkins Script Security Plugin 1183.v774b_0b_0a_a_451 and earlier allows attackers with permission to define and run sandboxed scripts, including Pipelines, to bypass the sandbox protection and execute arbitrary code in the context of the Jenkins controller JVM.
<p>Publish Date: 2022-10-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-43403>CVE-2022-43403</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.jenkins.io/security/advisory/2022-10-19/#SECURITY-2824%20(1)">https://www.jenkins.io/security/advisory/2022-10-19/#SECURITY-2824%20(1)</a></p>
<p>Release Date: 2022-10-19</p>
<p>Fix Resolution: org.jenkins-ci.plugins:script-security:1184.v85d16b_d851b_3</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
non_defect
|
cve medium detected in script security jar cve medium severity vulnerability vulnerable library script security jar allows jenkins administrators to control what in process scripts can be run by less privileged users path to dependency file build gradle path to vulnerable library ches modules files org jenkins ci plugins script security script security jar home wss scanner gradle caches modules files org jenkins ci plugins script security script security jar dependency hierarchy x script security jar vulnerable library found in base branch main vulnerability details a sandbox bypass vulnerability involving casting an array like value to an array type in jenkins script security plugin a and earlier allows attackers with permission to define and run sandboxed scripts including pipelines to bypass the sandbox protection and execute arbitrary code in the context of the jenkins controller jvm publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org jenkins ci plugins script security check this box to open an automated fix pr
| 0
|
19,697
| 3,248,211,530
|
IssuesEvent
|
2015-10-17 03:52:32
|
jimradford/superputty
|
https://api.github.com/repos/jimradford/superputty
|
closed
|
Cant use on a computer with user join domain.
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
Superputyy cant open with the tab. putty open with new window
What version of the product are you using? On what operating system?
it's stable version in download page. i use window 7 32 bit with join domain
user
Please provide any additional information below.
with my private laptop it's Ok. but with my company password. i'm working with join domain user. i run superputyy and ssh to the server but it open a new window of putty not a tab of supperputty.
Please guide me to fix that.
```
Original issue reported on code.google.com by `kd02.min...@gmail.com` on 12 Jan 2015 at 2:28
Attachments:
* [issue with superputty.jpg](https://storage.googleapis.com/google-code-attachments/superputty/issue-492/comment-0/issue with superputty.jpg)
|
1.0
|
Cant use on a computer with user join domain. - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
Superputyy cant open with the tab. putty open with new window
What version of the product are you using? On what operating system?
it's stable version in download page. i use window 7 32 bit with join domain
user
Please provide any additional information below.
with my private laptop it's Ok. but with my company password. i'm working with join domain user. i run superputyy and ssh to the server but it open a new window of putty not a tab of supperputty.
Please guide me to fix that.
```
Original issue reported on code.google.com by `kd02.min...@gmail.com` on 12 Jan 2015 at 2:28
Attachments:
* [issue with superputty.jpg](https://storage.googleapis.com/google-code-attachments/superputty/issue-492/comment-0/issue with superputty.jpg)
|
defect
|
cant use on a computer with user join domain what steps will reproduce the problem what is the expected output what do you see instead superputyy cant open with the tab putty open with new window what version of the product are you using on what operating system it s stable version in download page i use window bit with join domain user please provide any additional information below with my private laptop it s ok but with my company password i m working with join domain user i run superputyy and ssh to the server but it open a new window of putty not a tab of supperputty please guide me to fix that original issue reported on code google com by min gmail com on jan at attachments with superputty jpg
| 1
|
85,732
| 7,987,849,827
|
IssuesEvent
|
2018-07-19 09:08:27
|
ampize/ampize
|
https://api.github.com/repos/ampize/ampize
|
closed
|
Page title
|
to test
|
Faire en sorte que les pages du cloud aient un titre, a minima 'AMPize.me' pour chacune.
Ajouter également une favicon (je m'occupe de produire l'image).
|
1.0
|
Page title - Faire en sorte que les pages du cloud aient un titre, a minima 'AMPize.me' pour chacune.
Ajouter également une favicon (je m'occupe de produire l'image).
|
non_defect
|
page title faire en sorte que les pages du cloud aient un titre a minima ampize me pour chacune ajouter également une favicon je m occupe de produire l image
| 0
|
763,162
| 26,746,573,294
|
IssuesEvent
|
2023-01-30 16:22:37
|
googleapis/python-bigquery-sqlalchemy
|
https://api.github.com/repos/googleapis/python-bigquery-sqlalchemy
|
closed
|
tests.sqlalchemy_dialect_compliance.test_dialect_compliance.ExpandingBoundInTest_bigquery+bigquery: many tests failed
|
type: bug priority: p1 flakybot: issue api: bigquery
|
Many tests failed at the same time in this package.
* I will close this issue when there are no more failures in this package _and_
there is at least one pass.
* No new issues will be filed for this package until this issue is closed.
* If there are already issues for individual test cases, I will close them when
the corresponding test passes. You can close them earlier, if you prefer, and
I won't reopen them while this issue is still open.
Here are the tests that failed:
* test_bound_in_heterogeneous_two_tuple_bindparam
* test_bound_in_heterogeneous_two_tuple_direct
* test_bound_in_heterogeneous_two_tuple_text_bindparam
* test_bound_in_heterogeneous_two_tuple_text_bindparam_non_tuple
* test_bound_in_heterogeneous_two_tuple_typed_bindparam_non_tuple
* test_bound_in_scalar_bindparam
* test_bound_in_scalar_direct
* test_bound_in_two_tuple_bindparam
* test_bound_in_two_tuple_direct
* test_empty_heterogeneous_tuples_bindparam
* test_empty_heterogeneous_tuples_direct
* test_empty_homogeneous_tuples_bindparam
* test_empty_homogeneous_tuples_direct
* test_empty_in_plus_notempty_notin
* test_empty_set_against_integer_bindparam
* test_empty_set_against_integer_direct
* test_empty_set_against_integer_negation_bindparam
* test_empty_set_against_integer_negation_direct
* test_empty_set_against_string_bindparam
* test_empty_set_against_string_direct
* test_empty_set_against_string_negation_bindparam
* test_empty_set_against_string_negation_direct
* test_multiple_empty_sets_bindparam
* test_multiple_empty_sets_direct
* test_nonempty_in_plus_empty_notin
* test_null_in_empty_set_is_false_bindparam
* test_null_in_empty_set_is_false_direct
* test_typed_str_in
* test_untyped_str_in
-----
commit: 7aa669600de33b64daaa18ccf46b161211fbf461
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/d81f4614-b38a-4440-98a8-30e3ece615e5), [Sponge](http://sponge2/d81f4614-b38a-4440-98a8-30e3ece615e5)
status: failed
|
1.0
|
tests.sqlalchemy_dialect_compliance.test_dialect_compliance.ExpandingBoundInTest_bigquery+bigquery: many tests failed - Many tests failed at the same time in this package.
* I will close this issue when there are no more failures in this package _and_
there is at least one pass.
* No new issues will be filed for this package until this issue is closed.
* If there are already issues for individual test cases, I will close them when
the corresponding test passes. You can close them earlier, if you prefer, and
I won't reopen them while this issue is still open.
Here are the tests that failed:
* test_bound_in_heterogeneous_two_tuple_bindparam
* test_bound_in_heterogeneous_two_tuple_direct
* test_bound_in_heterogeneous_two_tuple_text_bindparam
* test_bound_in_heterogeneous_two_tuple_text_bindparam_non_tuple
* test_bound_in_heterogeneous_two_tuple_typed_bindparam_non_tuple
* test_bound_in_scalar_bindparam
* test_bound_in_scalar_direct
* test_bound_in_two_tuple_bindparam
* test_bound_in_two_tuple_direct
* test_empty_heterogeneous_tuples_bindparam
* test_empty_heterogeneous_tuples_direct
* test_empty_homogeneous_tuples_bindparam
* test_empty_homogeneous_tuples_direct
* test_empty_in_plus_notempty_notin
* test_empty_set_against_integer_bindparam
* test_empty_set_against_integer_direct
* test_empty_set_against_integer_negation_bindparam
* test_empty_set_against_integer_negation_direct
* test_empty_set_against_string_bindparam
* test_empty_set_against_string_direct
* test_empty_set_against_string_negation_bindparam
* test_empty_set_against_string_negation_direct
* test_multiple_empty_sets_bindparam
* test_multiple_empty_sets_direct
* test_nonempty_in_plus_empty_notin
* test_null_in_empty_set_is_false_bindparam
* test_null_in_empty_set_is_false_direct
* test_typed_str_in
* test_untyped_str_in
-----
commit: 7aa669600de33b64daaa18ccf46b161211fbf461
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/d81f4614-b38a-4440-98a8-30e3ece615e5), [Sponge](http://sponge2/d81f4614-b38a-4440-98a8-30e3ece615e5)
status: failed
|
non_defect
|
tests sqlalchemy dialect compliance test dialect compliance expandingboundintest bigquery bigquery many tests failed many tests failed at the same time in this package i will close this issue when there are no more failures in this package and there is at least one pass no new issues will be filed for this package until this issue is closed if there are already issues for individual test cases i will close them when the corresponding test passes you can close them earlier if you prefer and i won t reopen them while this issue is still open here are the tests that failed test bound in heterogeneous two tuple bindparam test bound in heterogeneous two tuple direct test bound in heterogeneous two tuple text bindparam test bound in heterogeneous two tuple text bindparam non tuple test bound in heterogeneous two tuple typed bindparam non tuple test bound in scalar bindparam test bound in scalar direct test bound in two tuple bindparam test bound in two tuple direct test empty heterogeneous tuples bindparam test empty heterogeneous tuples direct test empty homogeneous tuples bindparam test empty homogeneous tuples direct test empty in plus notempty notin test empty set against integer bindparam test empty set against integer direct test empty set against integer negation bindparam test empty set against integer negation direct test empty set against string bindparam test empty set against string direct test empty set against string negation bindparam test empty set against string negation direct test multiple empty sets bindparam test multiple empty sets direct test nonempty in plus empty notin test null in empty set is false bindparam test null in empty set is false direct test typed str in test untyped str in commit buildurl status failed
| 0
|
118,928
| 4,757,711,014
|
IssuesEvent
|
2016-10-24 17:23:11
|
tfuqua/whitehouserolls
|
https://api.github.com/repos/tfuqua/whitehouserolls
|
opened
|
Scrolling animation for rolls
|
Desktop Function Low Priority Phone Tablet
|
### Location
Flying roll/ first section after the hero
### Description
I'd like to animate the flying roll so that it slides in to the right when you scroll down and slides backwards when you scroll up.
### Screenshot
|
1.0
|
Scrolling animation for rolls - ### Location
Flying roll/ first section after the hero
### Description
I'd like to animate the flying roll so that it slides in to the right when you scroll down and slides backwards when you scroll up.
### Screenshot
|
non_defect
|
scrolling animation for rolls location flying roll first section after the hero description i d like to animate the flying roll so that it slides in to the right when you scroll down and slides backwards when you scroll up screenshot
| 0
|
126,244
| 17,872,936,139
|
IssuesEvent
|
2021-09-06 19:13:06
|
Virinas-code/Indecrypt-2
|
https://api.github.com/repos/Virinas-code/Indecrypt-2
|
closed
|
CVE-2020-28282 (High) detected in getobject-0.1.0.tgz - autoclosed
|
security vulnerability
|
## CVE-2020-28282 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>getobject-0.1.0.tgz</b></p></summary>
<p>get.and.set.deep.objects.easily = true</p>
<p>Library home page: <a href="https://registry.npmjs.org/getobject/-/getobject-0.1.0.tgz">https://registry.npmjs.org/getobject/-/getobject-0.1.0.tgz</a></p>
<p>Path to dependency file: Indecrypt-2/static/jquery-ui-1.12.1.custom/package.json</p>
<p>Path to vulnerable library: Indecrypt-2/static/jquery-ui-1.12.1.custom/node_modules/getobject/package.json</p>
<p>
Dependency Hierarchy:
- grunt-0.4.5.tgz (Root Library)
- :x: **getobject-0.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Virinas-code/Indecrypt-2/commit/be5e35bc27ca92f0532d889bc304ace229cc56cc">be5e35bc27ca92f0532d889bc304ace229cc56cc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in 'getobject' version 0.1.0 allows an attacker to cause a denial of service and may lead to remote code execution.
<p>Publish Date: 2020-12-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28282>CVE-2020-28282</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/package/getobject">https://www.npmjs.com/package/getobject</a></p>
<p>Release Date: 2020-12-29</p>
<p>Fix Resolution: getobject - 1.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28282 (High) detected in getobject-0.1.0.tgz - autoclosed - ## CVE-2020-28282 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>getobject-0.1.0.tgz</b></p></summary>
<p>get.and.set.deep.objects.easily = true</p>
<p>Library home page: <a href="https://registry.npmjs.org/getobject/-/getobject-0.1.0.tgz">https://registry.npmjs.org/getobject/-/getobject-0.1.0.tgz</a></p>
<p>Path to dependency file: Indecrypt-2/static/jquery-ui-1.12.1.custom/package.json</p>
<p>Path to vulnerable library: Indecrypt-2/static/jquery-ui-1.12.1.custom/node_modules/getobject/package.json</p>
<p>
Dependency Hierarchy:
- grunt-0.4.5.tgz (Root Library)
- :x: **getobject-0.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Virinas-code/Indecrypt-2/commit/be5e35bc27ca92f0532d889bc304ace229cc56cc">be5e35bc27ca92f0532d889bc304ace229cc56cc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in 'getobject' version 0.1.0 allows an attacker to cause a denial of service and may lead to remote code execution.
<p>Publish Date: 2020-12-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28282>CVE-2020-28282</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/package/getobject">https://www.npmjs.com/package/getobject</a></p>
<p>Release Date: 2020-12-29</p>
<p>Fix Resolution: getobject - 1.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in getobject tgz autoclosed cve high severity vulnerability vulnerable library getobject tgz get and set deep objects easily true library home page a href path to dependency file indecrypt static jquery ui custom package json path to vulnerable library indecrypt static jquery ui custom node modules getobject package json dependency hierarchy grunt tgz root library x getobject tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution vulnerability in getobject version allows an attacker to cause a denial of service and may lead to remote code execution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution getobject step up your open source security game with whitesource
| 0
|
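(Illustrative aside on the record above.) CVE-2020-28282 is a prototype-pollution flaw in `getobject`'s deep get/set helpers. The sketch below is a minimal Python analog of the deep-set pattern such libraries implement — it is not getobject's actual code, and all names are hypothetical. In JavaScript the vulnerability arises because path segments like `__proto__` are not rejected, so an attacker-controlled path can write onto `Object.prototype`:

```python
def deep_set(obj, path, value):
    """Set a nested dict value from a dotted path, e.g. 'a.b.c'."""
    parts = path.split(".")
    for part in parts[:-1]:
        # Create intermediate dicts as needed while walking the path.
        obj = obj.setdefault(part, {})
    obj[parts[-1]] = value

d = {}
deep_set(d, "a.b.c", 1)
print(d)  # → {'a': {'b': {'c': 1}}}
```

A defensive version of this pattern would refuse dangerous segments (in JS, `__proto__`, `constructor`, `prototype`) before traversal — which is the general shape of the fix the record points to in getobject 1.0.0.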
6,341
| 14,268,585,441
|
IssuesEvent
|
2020-11-20 22:48:39
|
octue/octue-sdk-python
|
https://api.github.com/repos/octue/octue-sdk-python
|
closed
|
CLI raises TypeError when running an app from IDE
|
architecture decision needed dependencies devops experience (UX)
|
While running octue 0.1.3 app from IDE:
```
python app.py run
```
Traceback (most recent call last):
File "app.py", line 56, in <module>
octue_cli(args)
File "/home/batman/Software/anaconda3/envs/foam_2d_twine/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/batman/Software/anaconda3/envs/foam_2d_twine/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/batman/Software/anaconda3/envs/foam_2d_twine/lib/python3.8/site-packages/click/core.py", line 1256, in invoke
Command.invoke(self, ctx)
File "/home/batman/Software/anaconda3/envs/foam_2d_twine/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/batman/Software/anaconda3/envs/foam_2d_twine/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/batman/Software/anaconda3/envs/foam_2d_twine/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
TypeError: octue_cli() missing 3 required positional arguments: 'data_dir', 'input_dir', and 'tmp_dir'
|
1.0
|
CLI raises TypeError when running an app from IDE - While running octue 0.1.3 app from IDE:
```
python app.py run
```
Traceback (most recent call last):
File "app.py", line 56, in <module>
octue_cli(args)
File "/home/batman/Software/anaconda3/envs/foam_2d_twine/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/batman/Software/anaconda3/envs/foam_2d_twine/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/batman/Software/anaconda3/envs/foam_2d_twine/lib/python3.8/site-packages/click/core.py", line 1256, in invoke
Command.invoke(self, ctx)
File "/home/batman/Software/anaconda3/envs/foam_2d_twine/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/batman/Software/anaconda3/envs/foam_2d_twine/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/batman/Software/anaconda3/envs/foam_2d_twine/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
TypeError: octue_cli() missing 3 required positional arguments: 'data_dir', 'input_dir', and 'tmp_dir'
|
non_defect
|
cli raises typeerror when running an app from ide while running octue app from ide python app py run traceback most recent call last file app py line in octue cli args file home batman software envs foam twine lib site packages click core py line in call return self main args kwargs file home batman software envs foam twine lib site packages click core py line in main rv self invoke ctx file home batman software envs foam twine lib site packages click core py line in invoke command invoke self ctx file home batman software envs foam twine lib site packages click core py line in invoke return ctx invoke self callback ctx params file home batman software envs foam twine lib site packages click core py line in invoke return callback args kwargs file home batman software envs foam twine lib site packages click decorators py line in new func return f get current context args kwargs typeerror octue cli missing required positional arguments data dir input dir and tmp dir
| 0
|
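(Illustrative aside on the record above.) The traceback ends in a plain Python error: a `click` callback declared with three required parameters was invoked through a code path that supplied none of them. The failure mode reproduces without click at all — the function and parameter names below simply mirror the traceback and are not octue's code:

```python
def octue_cli(data_dir, input_dir, tmp_dir):
    """Stand-in for the click-decorated callback from the traceback."""
    return data_dir, input_dir, tmp_dir

try:
    octue_cli()  # mirrors click invoking the callback with no arguments
except TypeError as exc:
    print(exc)
# → octue_cli() missing 3 required positional arguments: 'data_dir', 'input_dir', and 'tmp_dir'
```

With click, the usual remedy is to declare such values as options or arguments (with defaults or `required=True`) so click parses them from the command line, rather than expecting them to arrive as Python positionals.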
21,926
| 3,587,215,055
|
IssuesEvent
|
2016-01-30 05:06:04
|
mash99/crypto-js
|
https://api.github.com/repos/mash99/crypto-js
|
closed
|
[IE8]Object doesn't support property or method
|
auto-migrated Priority-Medium Type-Defect
|
```
Hello,
when I try to generate sha256 in this way:
var sms = "123";
CryptoJS.SHA256(sms);
I`ve error in developer tools console:
Object doesn't support property or method
This problem is only in IE 8.I can`t find any solution in internet, Can anyone
help me?
```
Original issue reported on code.google.com by `k.lol...@gmail.com` on 29 Dec 2014 at 12:21
|
1.0
|
[IE8]Object doesn't support property or method - ```
Hello,
when I try to generate sha256 in this way:
var sms = "123";
CryptoJS.SHA256(sms);
I`ve error in developer tools console:
Object doesn't support property or method
This problem is only in IE 8.I can`t find any solution in internet, Can anyone
help me?
```
Original issue reported on code.google.com by `k.lol...@gmail.com` on 29 Dec 2014 at 12:21
|
defect
|
object doesn t support property or method hello when i try to generate in this way var sms cryptojs sms i ve error in developer tools console object doesn t support property or method this problem is only in ie i can t find any solution in internet can anyone help me original issue reported on code google com by k lol gmail com on dec at
| 1
|
45,025
| 12,521,516,577
|
IssuesEvent
|
2020-06-03 17:31:22
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
opened
|
Need to Compile LibMesh with LIBMESH_HAVE_XDR for KP-Bison Test(s)
|
T: defect
|
## Bug Description
<!--A clear and concise description of the problem (Note: A missing feature is not a bug).-->
Russell pointed out that a few tests were failing on CentOS in Docker with output similar to the following.
```
test:radial_power_factor.initBurnupRestart1: ERROR: Functionality is not available.
test:radial_power_factor.initBurnupRestart1: Make sure LIBMESH_HAVE_XDR is defined at build time
test:radial_power_factor.initBurnupRestart1: The XDR interface is not available in this installation
```
## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
I'm not sure if the tests in question are in BISON or not, but using a CentOS MOOSE image with BISON added on should result in messages about `LIBMESH_HAVE_XDR` if affected tests are in there.
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
The only impact is some of our tests are failing when they shouldn't be. The solution I'll be putting forth in a PR is to have the following set before building LibMesh.
```
libmesh_CPPFLAGS="-D LIBMESH_HAVE_XDR"
```
|
1.0
|
Need to Compile LibMesh with LIBMESH_HAVE_XDR for KP-Bison Test(s) - ## Bug Description
<!--A clear and concise description of the problem (Note: A missing feature is not a bug).-->
Russell pointed out that a few tests were failing on CentOS in Docker with output similar to the following.
```
test:radial_power_factor.initBurnupRestart1: ERROR: Functionality is not available.
test:radial_power_factor.initBurnupRestart1: Make sure LIBMESH_HAVE_XDR is defined at build time
test:radial_power_factor.initBurnupRestart1: The XDR interface is not available in this installation
```
## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
I'm not sure if the tests in question are in BISON or not, but using a CentOS MOOSE image with BISON added on should result in messages about `LIBMESH_HAVE_XDR` if affected tests are in there.
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
The only impact is some of our tests are failing when they shouldn't be. The solution I'll be putting forth in a PR is to have the following set before building LibMesh.
```
libmesh_CPPFLAGS="-D LIBMESH_HAVE_XDR"
```
|
defect
|
need to compile libmesh with libmesh have xdr for kp bison test s bug description russell pointed out that a few tests were failing on centos in docker with output similar to the following test radial power factor error functionality is not available test radial power factor make sure libmesh have xdr is defined at build time test radial power factor the xdr interface is not available in this installation steps to reproduce i m not sure if the tests in question are in bison or not but using a centos moose image with bison added on should result in messages about libmesh have xdr if affected tests are in there impact the only impact is some of our tests are failing when they shouldn t be the solution i ll be putting forth in a pr is to have the following set before building libmesh libmesh cppflags d libmesh have xdr
| 1
|
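(Illustrative aside on the record above.) The proposed fix is a single environment setting consumed by libmesh's configure step. A minimal sketch of exporting it before launching the build — the build command itself is hypothetical and depends on the image, so it is left commented out:

```python
import os

# Copy the current environment and add the flag the record proposes.
env = dict(os.environ)
env["libmesh_CPPFLAGS"] = "-D LIBMESH_HAVE_XDR"

# Illustrative only; the real libmesh build invocation varies per setup:
# subprocess.run(["./configure"], env=env, check=True)

print(env["libmesh_CPPFLAGS"])  # → -D LIBMESH_HAVE_XDR
```

Passing a modified copy of `os.environ` (rather than mutating it in place) keeps the flag scoped to the one build invocation.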
36,639
| 8,042,240,987
|
IssuesEvent
|
2018-07-31 07:27:23
|
OpenMS/OpenMS
|
https://api.github.com/repos/OpenMS/OpenMS
|
closed
|
OpenMS build fails
|
critical defect
|
Hi
I am trying to install openms on our server. When I try to buid XERCESC in the contrib folder, at one point it stops with a archive name prompt (Input archive name or "." to quit pax). If I exit the seem to continue with an end message "-- Configure and build OpenMS at your discretion!" (see below):
`
[jflucier@ip32 contrib-build]$ cmake -DBUILD_TYPE=XERCESC ../contrib
-- ADDRESSMODEL IS: 64 bit
-- BUILD_TYPE: XERCESC (one of: ALL;SEQAN;LIBSVM;XERCESC;BOOST;COINOR;BZIP2;ZLIB;GLPK;EIGEN;WILDMAGIC;SQLITE;KISSFFT)
-- FORCE_REBUILD: OFF
-- NUMBER_OF_JOBS: 4 (maximal number of concurrent compile jobs)
-- Downloading XERCES ..
-- Downloading XERCES .. skipped (already downloaded)
-- Validating archive for XERCES ..
-- Validating archive for XERCES .. done
-- Extracting XERCES ..
-- Extracting XERCES .. skipped (already exists)
-- Configuring XERCES-C library (./configure --prefix /home/xroucou_group/apps/OpenMS-2.3.0/contrib-build --disable-network --disable-transcoder-icu --disable-shared --with-pic --enable-transcoder-gnuiconv --enable-transcoder-iconv CXX=/usr/bin/c++ CC=/usr/bin/cc) ..
ATTENTION! pax archive volume change required.
Ready for archive volume: 1
Input archive name or "." to quit pax.
Archive name > .
Quitting pax!
-- Configuring XERCES-C library (./configure --prefix /home/xroucou_group/apps/OpenMS-2.3.0/contrib-build --disable-network --disable-transcoder-icu --disable-shared --with-pic --enable-transcoder-gnuiconv --enable-transcoder-iconv CXX=/usr/bin/c++ CC=/usr/bin/cc) .. done
-- Building XERCES-C library (make)..
-- Building XERCES-C library (make) .. done
-- Installing XERCES-C library (make install) ..
-- Installing XERCES-C library (make install) .. done
--
--
-- XERCESC has been built! Some parts of the contrib might still need (re)building.
-- Configure and build OpenMS at your discretion!
--
--
--
-- Configuring done
-- Generating done
-- Build files have been written to: /home/xroucou_group/apps/OpenMS-2.3.0/contrib-build
`
Afterwards when I try to build OpenMS it fails:
`
[jflucier@ip32 OpenMS-build]$ cmake -DOPENMS_CONTRIB_LIBS="../contrib-build" -DBOOST_USE_STATIC=OFF -WITH_GUI=Off -HAS_XSERVER=Off ../
-- Building OpenMS 2.3.0
-- - Repository revision 38ae115
-- - Repository branch HEAD
-- - Repository last change date 2017-10-12 21:17:57 +0200
-- [CMake is tracking Git commits and branching. To disable use '-D GIT_TRACKING=OFF'.]
-- Intel TBB: OFF
-- OpenMP: ON
-- Architecture: 64 bit
-- Compiler checks for conversion: OFF
-- Collection of usage statistics and update notifications enabled.
-- If you don't want this information to be transmitted to our update sever, you can:
-- - Switch the build variable ENABLE_UPDATE_CHECK to OFF to remove the functionality at build time.
-- - Set the environment variable OPENMS_DISABLE_UPDATE_CHECK to disable the functionality at runtime.
-- Boost version: 1.61.0
-- Adding library OpenSwathAlgo
-- Adding library OpenSwathAlgo - SUCCESS
CMake Error at /cvmfs/opt.usherbrooke.ca/CentOS6/cmake/3.6.1/share/cmake-3.6/Modules/FindPackageHandleStandardArgs.cmake:148 (message):
Could NOT find XercesC (missing: XercesC_LIBRARIES XercesC_INCLUDE_DIRS)
Call Stack (most recent call first):
/cvmfs/opt.usherbrooke.ca/CentOS6/cmake/3.6.1/share/cmake-3.6/Modules/FindPackageHandleStandardArgs.cmake:388 (_FPHSA_FAILURE_MESSAGE)
cmake/modules/FindXercesC.cmake:80 (find_package_handle_standard_args)
src/openms/cmake_findExternalLibs.cmake:56 (find_package)
src/openms/CMakeLists.txt:63 (include)
-- Configuring incomplete, errors occurred!
See also "/home/xroucou_group/apps/OpenMS-2.3.0/OpenMS-build/CMakeFiles/CMakeOutput.log".
`
I have tried to set environament variable XercesC_LIBRARIES and XercesC_INCLUDE_DIRS to point to the contrib-build folder with no success:
`
[jflucier@ip32 OpenMS-build]$ export XercesC_LIBRARIES=../contrib-build/lib/
[jflucier@ip32 OpenMS-build]$ export XercesC_INCLUDE_DIRS=../contrib-build/include/xercesc/
`
I have looke dover the internet to solve the pax prompt and have not found solution yet. Thank in advance for your help.
JF
|
1.0
|
OpenMS build fails - Hi
I am trying to install openms on our server. When I try to buid XERCESC in the contrib folder, at one point it stops with a archive name prompt (Input archive name or "." to quit pax). If I exit the seem to continue with an end message "-- Configure and build OpenMS at your discretion!" (see below):
`
[jflucier@ip32 contrib-build]$ cmake -DBUILD_TYPE=XERCESC ../contrib
-- ADDRESSMODEL IS: 64 bit
-- BUILD_TYPE: XERCESC (one of: ALL;SEQAN;LIBSVM;XERCESC;BOOST;COINOR;BZIP2;ZLIB;GLPK;EIGEN;WILDMAGIC;SQLITE;KISSFFT)
-- FORCE_REBUILD: OFF
-- NUMBER_OF_JOBS: 4 (maximal number of concurrent compile jobs)
-- Downloading XERCES ..
-- Downloading XERCES .. skipped (already downloaded)
-- Validating archive for XERCES ..
-- Validating archive for XERCES .. done
-- Extracting XERCES ..
-- Extracting XERCES .. skipped (already exists)
-- Configuring XERCES-C library (./configure --prefix /home/xroucou_group/apps/OpenMS-2.3.0/contrib-build --disable-network --disable-transcoder-icu --disable-shared --with-pic --enable-transcoder-gnuiconv --enable-transcoder-iconv CXX=/usr/bin/c++ CC=/usr/bin/cc) ..
ATTENTION! pax archive volume change required.
Ready for archive volume: 1
Input archive name or "." to quit pax.
Archive name > .
Quitting pax!
-- Configuring XERCES-C library (./configure --prefix /home/xroucou_group/apps/OpenMS-2.3.0/contrib-build --disable-network --disable-transcoder-icu --disable-shared --with-pic --enable-transcoder-gnuiconv --enable-transcoder-iconv CXX=/usr/bin/c++ CC=/usr/bin/cc) .. done
-- Building XERCES-C library (make)..
-- Building XERCES-C library (make) .. done
-- Installing XERCES-C library (make install) ..
-- Installing XERCES-C library (make install) .. done
--
--
-- XERCESC has been built! Some parts of the contrib might still need (re)building.
-- Configure and build OpenMS at your discretion!
--
--
--
-- Configuring done
-- Generating done
-- Build files have been written to: /home/xroucou_group/apps/OpenMS-2.3.0/contrib-build
`
Afterwards when I try to build OpenMS it fails:
`
[jflucier@ip32 OpenMS-build]$ cmake -DOPENMS_CONTRIB_LIBS="../contrib-build" -DBOOST_USE_STATIC=OFF -WITH_GUI=Off -HAS_XSERVER=Off ../
-- Building OpenMS 2.3.0
-- - Repository revision 38ae115
-- - Repository branch HEAD
-- - Repository last change date 2017-10-12 21:17:57 +0200
-- [CMake is tracking Git commits and branching. To disable use '-D GIT_TRACKING=OFF'.]
-- Intel TBB: OFF
-- OpenMP: ON
-- Architecture: 64 bit
-- Compiler checks for conversion: OFF
-- Collection of usage statistics and update notifications enabled.
-- If you don't want this information to be transmitted to our update sever, you can:
-- - Switch the build variable ENABLE_UPDATE_CHECK to OFF to remove the functionality at build time.
-- - Set the environment variable OPENMS_DISABLE_UPDATE_CHECK to disable the functionality at runtime.
-- Boost version: 1.61.0
-- Adding library OpenSwathAlgo
-- Adding library OpenSwathAlgo - SUCCESS
CMake Error at /cvmfs/opt.usherbrooke.ca/CentOS6/cmake/3.6.1/share/cmake-3.6/Modules/FindPackageHandleStandardArgs.cmake:148 (message):
Could NOT find XercesC (missing: XercesC_LIBRARIES XercesC_INCLUDE_DIRS)
Call Stack (most recent call first):
/cvmfs/opt.usherbrooke.ca/CentOS6/cmake/3.6.1/share/cmake-3.6/Modules/FindPackageHandleStandardArgs.cmake:388 (_FPHSA_FAILURE_MESSAGE)
cmake/modules/FindXercesC.cmake:80 (find_package_handle_standard_args)
src/openms/cmake_findExternalLibs.cmake:56 (find_package)
src/openms/CMakeLists.txt:63 (include)
-- Configuring incomplete, errors occurred!
See also "/home/xroucou_group/apps/OpenMS-2.3.0/OpenMS-build/CMakeFiles/CMakeOutput.log".
`
I have tried to set the environment variables XercesC_LIBRARIES and XercesC_INCLUDE_DIRS to point to the contrib-build folder with no success:
`
[jflucier@ip32 OpenMS-build]$ export XercesC_LIBRARIES=../contrib-build/lib/
[jflucier@ip32 OpenMS-build]$ export XercesC_INCLUDE_DIRS=../contrib-build/include/xercesc/
`
I have looked over the internet to solve the pax prompt and have not found a solution yet. Thanks in advance for your help.
JF
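For the record above, one likely route past the FindXercesC failure is to pass CMake cache variables on the command line rather than shell environment variables, since `find_package()` does not consult `XercesC_LIBRARIES` from the environment. A minimal sketch, assuming the hint names used by CMake's standard FindXercesC module (`XercesC_INCLUDE_DIR`, `XercesC_LIBRARY`); OpenMS ships its own `FindXercesC.cmake`, so the exact variable names may differ:

```python
# Sketch: build a cmake invocation that hands XercesC hints as -D cache
# variables instead of environment variables. Paths are illustrative
# assumptions based on the contrib-build layout in the report.
def cmake_args(contrib):
    """Return the cmake argv for configuring OpenMS against a contrib dir."""
    return [
        "cmake",
        "-DCMAKE_PREFIX_PATH=" + contrib,
        "-DXercesC_INCLUDE_DIR=" + contrib + "/include",
        "-DXercesC_LIBRARY=" + contrib + "/lib/libxerces-c.a",
        "..",
    ]

args = cmake_args("../contrib-build")
```

The list could then be run with `subprocess.run(args, check=True)` from the OpenMS-build directory.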
|
defect
|
openms build fails hi i am trying to install openms on our server when i try to buid xercesc in the contrib folder at one point it stops with a archive name prompt input archive name or to quit pax if i exit the seem to continue with an end message configure and build openms at your discretion see below cmake dbuild type xercesc contrib addressmodel is bit build type xercesc one of all seqan libsvm xercesc boost coinor zlib glpk eigen wildmagic sqlite kissfft force rebuild off number of jobs maximal number of concurrent compile jobs downloading xerces downloading xerces skipped already downloaded validating archive for xerces validating archive for xerces done extracting xerces extracting xerces skipped already exists configuring xerces c library configure prefix home xroucou group apps openms contrib build disable network disable transcoder icu disable shared with pic enable transcoder gnuiconv enable transcoder iconv cxx usr bin c cc usr bin cc attention pax archive volume change required ready for archive volume input archive name or to quit pax archive name quitting pax configuring xerces c library configure prefix home xroucou group apps openms contrib build disable network disable transcoder icu disable shared with pic enable transcoder gnuiconv enable transcoder iconv cxx usr bin c cc usr bin cc done building xerces c library make building xerces c library make done installing xerces c library make install installing xerces c library make install done xercesc has been built some parts of the contrib might still need re building configure and build openms at your discretion configuring done generating done build files have been written to home xroucou group apps openms contrib build afterwards when i try to build openms it fails cmake dopenms contrib libs contrib build dboost use static off with gui off has xserver off building openms repository revision repository branch head repository last change date intel tbb off openmp on architecture bit compiler checks for conversion off collection of usage statistics and update notifications enabled if you don t want this information to be transmitted to our update sever you can switch the build variable enable update check to off to remove the functionality at build time set the environment variable openms disable update check to disable the functionality at runtime boost version adding library openswathalgo adding library openswathalgo success cmake error at cvmfs opt usherbrooke ca cmake share cmake modules findpackagehandlestandardargs cmake message could not find xercesc missing xercesc libraries xercesc include dirs call stack most recent call first cvmfs opt usherbrooke ca cmake share cmake modules findpackagehandlestandardargs cmake fphsa failure message cmake modules findxercesc cmake find package handle standard args src openms cmake findexternallibs cmake find package src openms cmakelists txt include configuring incomplete errors occurred see also home xroucou group apps openms openms build cmakefiles cmakeoutput log i have tried to set environament variable xercesc libraries and xercesc include dirs to point to the contrib build folder with no success export xercesc libraries contrib build lib export xercesc include dirs contrib build include xercesc i have looke dover the internet to solve the pax prompt and have not found solution yet thank in advance for your help jf
| 1
|
207,217
| 7,126,030,434
|
IssuesEvent
|
2018-01-20 04:36:13
|
spring-projects/spring-boot
|
https://api.github.com/repos/spring-projects/spring-boot
|
closed
|
Provide StaticResourceRequest for configuring WebFlux-based Security
|
priority: normal theme: security type: enhancement
|
As with #11022 we also need a version of `StaticResourceRequest` for WebFlux.
|
1.0
|
Provide StaticResourceRequest for configuring WebFlux-based Security - As with #11022 we also need a version of `StaticResourceRequest` for WebFlux.
|
non_defect
|
provide staticresourcerequest for configuring webflux based security as with we also need a version of staticresourcerequest for webflux
| 0
|
75,063
| 25,509,969,976
|
IssuesEvent
|
2022-11-28 12:22:48
|
BOINC/boinc
|
https://api.github.com/repos/BOINC/boinc
|
closed
|
Make BOINC urls case-insensitive
|
C: Client - Daemon P: Trivial R: invalid T: Defect Newbie
|
**Describe the problem**
When a user is given a URL to attach to a BOINC project, it will not work if the case is not correct. For example: boincproject.com/boinc would work but boincproject.com/BOINC would not. Domain names are case-insensitive, and I see no reason for case-sensitive URL handling. There is a chance that a user without the correct case-sensitive URL may try to attach to a project and therefore have issues doing so. Making URL handling case-insensitive would eliminate this source of error and confusion.
This is a further extension of https://github.com/BOINC/boinc/issues/4801
I realize this is a small issue and there are more important ones, just wanted to raise it as an idea.
**Describe the solution you'd like**
Make BOINC server URL handling case-insensitive
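The requested behaviour can be sketched as a normalization step before comparing project URLs. Per RFC 3986 only the scheme and host are case-insensitive; folding the path as well, as the issue asks, is an assumption of this sketch, not BOINC's actual code:

```python
# Sketch of case-insensitive project-URL matching (assumed approach):
# lower-case the scheme and host (safe per RFC 3986), and also fold the
# path, which is what this issue requests for BOINC attach URLs.
from urllib.parse import urlsplit

def normalize(url):
    parts = urlsplit(url.strip())
    return "{}://{}{}".format(
        parts.scheme.lower(),           # scheme is case-insensitive
        parts.netloc.lower(),           # host is case-insensitive
        parts.path.rstrip("/").lower()  # path folded per the request
    )

def same_project(a, b):
    """True when two attach URLs refer to the same project."""
    return normalize(a) == normalize(b)
```

With this, `boincproject.com/BOINC` and `boincproject.com/boinc` would attach to the same project.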
|
1.0
|
Make BOINC urls case-insensitive - **Describe the problem**
When a user is given a URL to attach to a BOINC project, it will not work if the case is not correct. For example: boincproject.com/boinc would work but boincproject.com/BOINC would not. Domain names are case-insensitive, and I see no reason for case-sensitive URL handling. There is a chance that a user without the correct case-sensitive URL may try to attach to a project and therefore have issues doing so. Making URL handling case-insensitive would eliminate this source of error and confusion.
This is a further extension of https://github.com/BOINC/boinc/issues/4801
I realize this is a small issue and there are more important ones, just wanted to raise it as an idea.
**Describe the solution you'd like**
Make BOINC server URL handling case-insensitive
|
defect
|
make boinc urls case insensitive describe the problem when a user is given a url to attach to a boinc project it will not work if the case is not correct for example boincproject com boinc would work but boincproject com boinc would not domain names are case insensitive and i see no reason for case sensitive url handling there is a chance that a user without the correct case sensitive url may try to attach to a project and therefore have issues doing so making url handling case insensitive would eliminate this source of error and confusion this is a further extension of i realize this is a small issue and there are more important ones just wanted to raise it as an idea describe the solution you d like make boinc server url handling case insensitive
| 1
|
32,543
| 6,822,230,621
|
IssuesEvent
|
2017-11-07 19:22:12
|
tpfinal-pp1/tp-final
|
https://api.github.com/repos/tpfinal-pp1/tp-final
|
closed
|
Midnight appointments bug
|
bug Defecto crítico Defecto medio
|
If the appointment starts between 23:00 and 24:00, since only the end time is changed and not the date, the end date becomes invalid (it is earlier than the start date). ERROR UPDATE:
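The rollover logic implied by the report can be sketched as follows (a hypothetical fix, not the project's actual code): when only the times are edited and the resulting end is not after the start, the appointment crosses midnight and the end date must advance by one day.

```python
# Sketch: advance the end date across midnight when only times were edited.
from datetime import datetime, timedelta

def fix_end(start, end):
    """Return a valid end datetime, rolling the date over midnight if needed."""
    if end <= start:
        end += timedelta(days=1)
    return end

start = datetime(2017, 11, 7, 23, 30)
end = fix_end(start, datetime(2017, 11, 7, 0, 15))  # same date, earlier time
```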
|
2.0
|
Midnight appointments bug - If the appointment starts between 23:00 and 24:00, since only the end time is changed and not the date, the end date becomes invalid (it is earlier than the start date). ERROR UPDATE:
|
defect
|
midnight appointments bug if the appointment starts between and since only the end time is changed and not the date the end date becomes invalid it is earlier than the start date error update
| 1
|
74,134
| 24,963,239,555
|
IssuesEvent
|
2022-11-01 17:10:47
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
DataTable: Not compatible with MyFaces' component tree manipulation
|
:lady_beetle: defect
|
### Describe the bug
Our custom library utilizes the PF datatable in a very dynamic manner. Columns are populated dynamically based on runtime configurations. Everything worked fine with Mojarra, but because of other bugs, I tried MyFaces as the JSF implementation.
MyFaces performs way better and everything works as expected, but therefore I got a new bug: The contents of the dynamic columns is duplicated on view restore (e.g. pagination).
After digging around, the root cause is probably not MyFaces but PF itself.
During `DefaultFaceletsStateManagementStrategy#saveView` (MyFaces class) all dynamically created components are stored to the view state for restore at a later time. In `#saveStateOnMapVisitTree` a tree visit is started which tries to collect those dynamic components and if one is found, the state is serialized (including the children) and `VisitResult.REJECT` is returned, so the tree-visit does not visit the children again (see [here](https://github.com/apache/myfaces/blob/2.2.x/impl/src/main/java/org/apache/myfaces/view/facelets/DefaultFaceletsStateManagementStrategy.java#L1066) for the MyFaces source code).
But PF does not respect that `VisitResult.REJECT` in UIData. The problematic lines of code are https://github.com/primefaces/primefaces/blob/2b1384a3c6464c995e029757eeebd8cb6144f4d9/primefaces/src/main/java/org/primefaces/component/api/UIData.java#L771 and https://github.com/primefaces/primefaces/blob/2b1384a3c6464c995e029757eeebd8cb6144f4d9/primefaces/src/main/java/org/primefaces/component/api/UIData.java#L897.
The first code stops the processing of the children, as only the UIColumn itself should be visited at that place. But the result 'REJECT' is never stored. During the second code, the column content is visited which clearly is not expected, as the save-state-logic of MyFaces previously rejected the processing of the children of the column.
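The visit contract at stake can be modelled in a short language-agnostic sketch (a Python stand-in for the JSF `VisitCallback` API, not PrimeFaces code): once a callback answers REJECT for a node, none of that node's children may be visited.

```python
# Sketch of the tree-visit contract: REJECT must prune the subtree.
ACCEPT, REJECT, COMPLETE = range(3)

def visit(node, callback):
    result = callback(node)
    if result == COMPLETE:
        return COMPLETE
    if result != REJECT:  # the reported bug: UIData skipped this check
        for child in node.get("children", []):
            if visit(child, callback) == COMPLETE:
                return COMPLETE
    return ACCEPT

tree = {"id": "column", "children": [{"id": "cell"}]}
visited = []

def cb(n):
    visited.append(n["id"])
    return REJECT if n["id"] == "column" else ACCEPT

visit(tree, cb)  # "cell" must not be visited
```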
### Reproducer
https://github.com/fanste/primefaces-test/tree/test-9171
Just use the paginator and have a look at the second column.
### Expected behavior
_No response_
### PrimeFaces edition
Community
### PrimeFaces version
11.0.0
### Theme
_No response_
### JSF implementation
MyFaces
### JSF version
2.2.15
### Java version
8
### Browser(s)
_No response_
|
1.0
|
DataTable: Not compatible with MyFaces' component tree manipulation - ### Describe the bug
Our custom library utilizes the PF datatable in a very dynamic manner. Columns are populated dynamically based on runtime configurations. Everything worked fine with Mojarra, but because of other bugs, I tried MyFaces as the JSF implementation.
MyFaces performs way better and everything works as expected, but therefore I got a new bug: The contents of the dynamic columns is duplicated on view restore (e.g. pagination).
After digging around, the root cause is probably not MyFaces but PF itself.
During `DefaultFaceletsStateManagementStrategy#saveView` (MyFaces class) all dynamically created components are stored to the view state for restore at a later time. In `#saveStateOnMapVisitTree` a tree visit is started which tries to collect those dynamic components and if one is found, the state is serialized (including the children) and `VisitResult.REJECT` is returned, so the tree-visit does not visit the children again (see [here](https://github.com/apache/myfaces/blob/2.2.x/impl/src/main/java/org/apache/myfaces/view/facelets/DefaultFaceletsStateManagementStrategy.java#L1066) for the MyFaces source code).
But PF does not respect that `VisitResult.REJECT` in UIData. The problematic lines of code are https://github.com/primefaces/primefaces/blob/2b1384a3c6464c995e029757eeebd8cb6144f4d9/primefaces/src/main/java/org/primefaces/component/api/UIData.java#L771 and https://github.com/primefaces/primefaces/blob/2b1384a3c6464c995e029757eeebd8cb6144f4d9/primefaces/src/main/java/org/primefaces/component/api/UIData.java#L897.
The first code stops the processing of the children, as only the UIColumn itself should be visited at that place. But the result 'REJECT' is never stored. During the second code, the column content is visited which clearly is not expected, as the save-state-logic of MyFaces previously rejected the processing of the children of the column.
### Reproducer
https://github.com/fanste/primefaces-test/tree/test-9171
Just use the paginator and have a look at the second column.
### Expected behavior
_No response_
### PrimeFaces edition
Community
### PrimeFaces version
11.0.0
### Theme
_No response_
### JSF implementation
MyFaces
### JSF version
2.2.15
### Java version
8
### Browser(s)
_No response_
|
defect
|
datatable not compatible with myfaces component tree manipulation describe the bug our custom library utilizes the pf datatable in a very dynamic manner columns are populated dynamically based on runtime configurations everything worked fine with mojarra but because of other bugs i tried myfaces as the jsf implementation myfaces performs way better and everything works as expected but therefore i got a new bug the contents of the dynamic columns is duplicated on view restore e g pagination after digging around the root cause is probably not myfaces but pf itself during defaultfaceletsstatemanagementstrategy saveview myfaces class all dynamically created components are stored to the view state for restore at a later time in savestateonmapvisittree a tree visit is started which tries to collect those dynamic components and if one is found the state is serialized including the children and visitresult reject is returned so the tree visit does not visit the children again see for the myfaces source code but pf does not respect that visitresult reject in uidata the problematic lines of code are and the first code stops the processing of the children as only the uicolumn itself should be visited at that place but the result reject is never stored during the second code the column content is visited which clearly is not expected as the save state logic of myfaces previously rejected the processing of the children of the column reproducer just use the paginator and have a look at the second column expected behavior no response primefaces edition community primefaces version theme no response jsf implementation myfaces jsf version java version browser s no response
| 1
|
51,592
| 13,207,532,037
|
IssuesEvent
|
2020-08-14 23:28:37
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
Make cmake framework detection compatible with Xcode versions >= 4.3 (Trac #668)
|
Incomplete Migration Migrated from Trac defect tools/ports
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/668">https://code.icecube.wisc.edu/projects/icecube/ticket/668</a>, reported by claudio.kopperand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-10-31T19:01:17",
"_ts": "1351710077000000",
"description": "Since Xcode 4.3, frameworks are no longer stored in /Developer, but insider the Xcode application bundle. Cmake has support for this, but unfortunately, the detection code is missing a \"platform\" part in its detection code. These two patch files \n\n a. enhance the detection code to look in the correct place for newer Xcode versions, and\n a. uses the detected location when looking for frameworks \n\nThis will allow detection of the JavaVM (JNI) developer framework containing the correct header files. (The system framework is detected correctly, but it does not contain headers.)\n",
"reporter": "claudio.kopper",
"cc": "",
"resolution": "fixed",
"time": "2012-02-19T19:53:49",
"component": "tools/ports",
"summary": "Make cmake framework detection compatible with Xcode versions >= 4.3",
"priority": "normal",
"keywords": "cmake xcode mac os frameworks jni java javavm",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
Make cmake framework detection compatible with Xcode versions >= 4.3 (Trac #668) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/668">https://code.icecube.wisc.edu/projects/icecube/ticket/668</a>, reported by claudio.kopperand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-10-31T19:01:17",
"_ts": "1351710077000000",
"description": "Since Xcode 4.3, frameworks are no longer stored in /Developer, but insider the Xcode application bundle. Cmake has support for this, but unfortunately, the detection code is missing a \"platform\" part in its detection code. These two patch files \n\n a. enhance the detection code to look in the correct place for newer Xcode versions, and\n a. uses the detected location when looking for frameworks \n\nThis will allow detection of the JavaVM (JNI) developer framework containing the correct header files. (The system framework is detected correctly, but it does not contain headers.)\n",
"reporter": "claudio.kopper",
"cc": "",
"resolution": "fixed",
"time": "2012-02-19T19:53:49",
"component": "tools/ports",
"summary": "Make cmake framework detection compatible with Xcode versions >= 4.3",
"priority": "normal",
"keywords": "cmake xcode mac os frameworks jni java javavm",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
defect
|
make cmake framework detection compatible with xcode versions trac migrated from json status closed changetime ts description since xcode frameworks are no longer stored in developer but insider the xcode application bundle cmake has support for this but unfortunately the detection code is missing a platform part in its detection code these two patch files n n a enhance the detection code to look in the correct place for newer xcode versions and n a uses the detected location when looking for frameworks n nthis will allow detection of the javavm jni developer framework containing the correct header files the system framework is detected correctly but it does not contain headers n reporter claudio kopper cc resolution fixed time component tools ports summary make cmake framework detection compatible with xcode versions priority normal keywords cmake xcode mac os frameworks jni java javavm milestone owner nega type defect
| 1
|
35,755
| 5,006,085,548
|
IssuesEvent
|
2016-12-12 13:00:59
|
drbenvincent/delay-discounting-analysis
|
https://api.github.com/repos/drbenvincent/delay-discounting-analysis
|
opened
|
add tests to ensure HT_BayesFactor.m is working correctly
|
tests
|
Compare output of HT_BayesFactor with some results calculated analytically.
|
1.0
|
add tests to ensure HT_BayesFactor.m is working correctly - Compare output of HT_BayesFactor with some results calculated analytically.
|
non_defect
|
add tests to ensure ht bayesfactor m is working correctly compare output of ht bayesfactor with some results calculated analytically
| 0
|
40,200
| 9,905,935,199
|
IssuesEvent
|
2019-06-27 12:49:40
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
Oracle: Fix DSL#extract() for ISO_DAY_OF_WEEK
|
C: DB: Oracle E: Enterprise Edition E: Professional Edition P: Medium T: Defect
|
The current implementation of `EXTRACT(ISO_DAY_OF_WEEK)` for Oracle seems to assume that `TO_CHAR(<f>, 'D')` always returns Sunday. According to the documentation (https://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements004.htm#i34924) this is relative to an initialization parameter:
> The datetime format element D returns the number of the day of the week (1-7). The day of the week that is numbered 1 is specified implicitly by the initialization parameter NLS_TERRITORY.
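A locale-independent ISO day of week can be computed by anchoring on a known Monday instead of relying on `TO_CHAR(d, 'D')`, whose result shifts with NLS_TERRITORY; the equivalent Oracle-side expression would be something like `TRUNC(d) - TRUNC(d, 'IW') + 1` (an assumed workaround, not necessarily the SQL jOOQ will emit). A sketch of the arithmetic:

```python
# Locale-independent ISO day of week (1 = Monday .. 7 = Sunday), computed
# as the offset from a fixed reference Monday, modulo 7.
from datetime import date

REFERENCE_MONDAY = date(2019, 1, 7)  # any known Monday works

def iso_day_of_week(d):
    return (d - REFERENCE_MONDAY).days % 7 + 1

dow = iso_day_of_week(date(2019, 6, 27))  # a Thursday
```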
|
1.0
|
Oracle: Fix DSL#extract() for ISO_DAY_OF_WEEK - The current implementation of `EXTRACT(ISO_DAY_OF_WEEK)` for Oracle seems to assume that `TO_CHAR(<f>, 'D')` always returns Sunday. According to the documentation (https://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements004.htm#i34924) this is relative to an initialization parameter:
> The datetime format element D returns the number of the day of the week (1-7). The day of the week that is numbered 1 is specified implicitly by the initialization parameter NLS_TERRITORY.
|
defect
|
oracle fix dsl extract for iso day of week the current implementation of extract iso day of week for oracle seems to assume that to char d always returns sunday according to the documentation this is relative to an initialization parameter the datetime format element d returns the number of the day of the week the day of the week that is numbered is specified implicitly by the initialization parameter nls territory
| 1
|
35,704
| 7,795,938,975
|
IssuesEvent
|
2018-06-08 09:47:39
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
gabls3_night; nightly plots and manual run data look different (Trac #205)
|
Migrated from Trac clubb_src defect ldgrant@uwm.edu
|
I have no idea what might cause this, but the gabls3_night data looks different for the nightly tests than it does when I run it myself and plot the data. The data from a manual run looks better than the data from the nightly tests.
I am attaching four images, two each from a manual run and the nightly tests. The fields thlm, rtm, wpthlp, and rtp look better for the manual run (as can be seen by comparing the first two attachments), and when comparing the second two sets of attachments, um and vm are much closer to the LES data at the domain top for the manual run.
Attachments:
http://carson.math.uwm.edu/trac/clubb/attachment/ticket/205/gabls3_night_manual-run.1.png
http://carson.math.uwm.edu/trac/clubb/attachment/ticket/205/gabls3_night_nightly-test.1.png
http://carson.math.uwm.edu/trac/clubb/attachment/ticket/205/gabls3_night_manual-run.2.png
http://carson.math.uwm.edu/trac/clubb/attachment/ticket/205/gabls3_night_nightly-test.2.png
http://carson.math.uwm.edu/trac/clubb/attachment/ticket/205/gabls3_night_comparison_of_stats_tout.png
http://carson.math.uwm.edu/trac/clubb/attachment/ticket/205/gabls3_night_comparison_of_stats_t_out.2.png
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/205
```json
{
"status": "closed",
"changetime": "2010-04-11T22:17:23",
"description": "I have no idea what might cause this, but the gabls3_night data looks different for the nightly tests than it does when I run it myself and plot the data. The data from a manual run looks better than the data from the nightly tests. \n\nI am attaching four images, two each from a manual run and the nightly tests. The fields thlm, rtm, wpthlp, and rtp look better for the manual run (as can be seen by comparing the first two attachments), and when comparing the second two sets of attachments, um and vm are much closer to the LES data at the domain top for the manual run.",
"reporter": "ldgrant@uwm.edu",
"cc": "senkbeil@uwm.edu, vlarson@uwm.edu",
"resolution": "Verified by V. Larson",
"_ts": "1271024243000000",
"component": "clubb_src",
"summary": "gabls3_night; nightly plots and manual run data look different",
"priority": "major",
"keywords": "",
"time": "2009-08-26T17:28:32",
"milestone": "",
"owner": "ldgrant@uwm.edu",
"type": "defect"
}
```
|
1.0
|
gabls3_night; nightly plots and manual run data look different (Trac #205) - I have no idea what might cause this, but the gabls3_night data looks different for the nightly tests than it does when I run it myself and plot the data. The data from a manual run looks better than the data from the nightly tests.
I am attaching four images, two each from a manual run and the nightly tests. The fields thlm, rtm, wpthlp, and rtp look better for the manual run (as can be seen by comparing the first two attachments), and when comparing the second two sets of attachments, um and vm are much closer to the LES data at the domain top for the manual run.
Attachments:
http://carson.math.uwm.edu/trac/clubb/attachment/ticket/205/gabls3_night_manual-run.1.png
http://carson.math.uwm.edu/trac/clubb/attachment/ticket/205/gabls3_night_nightly-test.1.png
http://carson.math.uwm.edu/trac/clubb/attachment/ticket/205/gabls3_night_manual-run.2.png
http://carson.math.uwm.edu/trac/clubb/attachment/ticket/205/gabls3_night_nightly-test.2.png
http://carson.math.uwm.edu/trac/clubb/attachment/ticket/205/gabls3_night_comparison_of_stats_tout.png
http://carson.math.uwm.edu/trac/clubb/attachment/ticket/205/gabls3_night_comparison_of_stats_t_out.2.png
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/205
```json
{
"status": "closed",
"changetime": "2010-04-11T22:17:23",
"description": "I have no idea what might cause this, but the gabls3_night data looks different for the nightly tests than it does when I run it myself and plot the data. The data from a manual run looks better than the data from the nightly tests. \n\nI am attaching four images, two each from a manual run and the nightly tests. The fields thlm, rtm, wpthlp, and rtp look better for the manual run (as can be seen by comparing the first two attachments), and when comparing the second two sets of attachments, um and vm are much closer to the LES data at the domain top for the manual run.",
"reporter": "ldgrant@uwm.edu",
"cc": "senkbeil@uwm.edu, vlarson@uwm.edu",
"resolution": "Verified by V. Larson",
"_ts": "1271024243000000",
"component": "clubb_src",
"summary": "gabls3_night; nightly plots and manual run data look different",
"priority": "major",
"keywords": "",
"time": "2009-08-26T17:28:32",
"milestone": "",
"owner": "ldgrant@uwm.edu",
"type": "defect"
}
```
|
defect
|
night nightly plots and manual run data look different trac i have no idea what might cause this but the night data looks different for the nightly tests than it does when i run it myself and plot the data the data from a manual run looks better than the data from the nightly tests i am attaching four images two each from a manual run and the nightly tests the fields thlm rtm wpthlp and rtp look better for the manual run as can be seen by comparing the first two attachments and when comparing the second two sets of attachments um and vm are much closer to the les data at the domain top for the manual run attachments migrated from json status closed changetime description i have no idea what might cause this but the night data looks different for the nightly tests than it does when i run it myself and plot the data the data from a manual run looks better than the data from the nightly tests n ni am attaching four images two each from a manual run and the nightly tests the fields thlm rtm wpthlp and rtp look better for the manual run as can be seen by comparing the first two attachments and when comparing the second two sets of attachments um and vm are much closer to the les data at the domain top for the manual run reporter ldgrant uwm edu cc senkbeil uwm edu vlarson uwm edu resolution verified by v larson ts component clubb src summary night nightly plots and manual run data look different priority major keywords time milestone owner ldgrant uwm edu type defect
| 1
|
142,862
| 11,497,580,644
|
IssuesEvent
|
2020-02-12 10:18:14
|
LIBCAS/ARCLib
|
https://api.github.com/repos/LIBCAS/ARCLib
|
closed
|
errors when editing AIP XML
|
bug to test
|
Reported at the meeting. We confirm possible error scenarios; further investigation is required.
|
1.0
|
errors when editing AIP XML - Reported at the meeting. We confirm possible error scenarios; further investigation is required.
|
non_defect
|
errors when editing aip xml reported at the meeting we confirm possible error scenarios further investigation is required
| 0
|
162,813
| 6,176,305,103
|
IssuesEvent
|
2017-07-01 12:21:46
|
eustasy/midori-browser.org
|
https://api.github.com/repos/eustasy/midori-browser.org
|
closed
|
Consolidate the elementary and official logos.
|
awaiting contribution Priority: Low Status: Confirmed Type: Image
|
Simply put, regardless of on-going desires for refreshing the logo, I'm not completely happy with the elementary-style Midori logo and this needs to be dealt with properly rather than using different logos in different places and confusing everyone in the process.
The design needs to be revised until the elementary logo and the official logo are one and the same.
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/357727-consolidate-the-elementary-and-official-logos?utm_campaign=plugin&utm_content=tracker%2F86907&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F86907&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
1.0
|
Consolidate the elementary and official logos. - Simply put, regardless of on-going desires for refreshing the logo, I'm not completely happy with the elementary-style Midori logo and this needs to be dealt with properly rather than using different logos in different places and confusing everyone in the process.
The design needs to be revised until the elementary logo and the official logo are one and the same.
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/357727-consolidate-the-elementary-and-official-logos?utm_campaign=plugin&utm_content=tracker%2F86907&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F86907&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
non_defect
|
consolidate the elementary and official logos simply put regardless of on going desires for refreshing the logo i m not completely happy with the elementary style midori logo and this needs to be dealt with properly rather than using different logos in different places and confusing everyone in the process the design needs to be revised until the elementary logo and the official logo are one and the same want to back this issue we accept bounties via
| 0
|
266,998
| 28,485,836,337
|
IssuesEvent
|
2023-04-18 07:48:44
|
Satheesh575555/openSSL_1.0.1g
|
https://api.github.com/repos/Satheesh575555/openSSL_1.0.1g
|
opened
|
CVE-2015-0292 (High) detected in opensslOpenSSL_1_0_1g
|
Mend: dependency security vulnerability
|
## CVE-2015-0292 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opensslOpenSSL_1_0_1g</b></p></summary>
<p>
<p>TLS/SSL and crypto library</p>
<p>Library home page: <a href=https://github.com/openssl/openssl.git>https://github.com/openssl/openssl.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Satheesh575555/openSSL_1.0.1g/commit/7a1521d6faa1c1b2bda3237d82c41b77511b2861">7a1521d6faa1c1b2bda3237d82c41b77511b2861</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/crypto/evp/encode.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/crypto/evp/encode.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Integer underflow in the EVP_DecodeUpdate function in crypto/evp/encode.c in the base64-decoding implementation in OpenSSL before 0.9.8za, 1.0.0 before 1.0.0m, and 1.0.1 before 1.0.1h allows remote attackers to cause a denial of service (memory corruption) or possibly have unspecified other impact via crafted base64 data that triggers a buffer overflow.
<p>Publish Date: 2015-03-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-0292>CVE-2015-0292</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-0292">https://nvd.nist.gov/vuln/detail/CVE-2015-0292</a></p>
<p>Release Date: 2015-03-19</p>
<p>Fix Resolution: 0.9.8za,1.0.0m,1.0.1h</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-0292 (High) detected in opensslOpenSSL_1_0_1g - ## CVE-2015-0292 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opensslOpenSSL_1_0_1g</b></p></summary>
<p>
<p>TLS/SSL and crypto library</p>
<p>Library home page: <a href=https://github.com/openssl/openssl.git>https://github.com/openssl/openssl.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Satheesh575555/openSSL_1.0.1g/commit/7a1521d6faa1c1b2bda3237d82c41b77511b2861">7a1521d6faa1c1b2bda3237d82c41b77511b2861</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/crypto/evp/encode.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/crypto/evp/encode.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Integer underflow in the EVP_DecodeUpdate function in crypto/evp/encode.c in the base64-decoding implementation in OpenSSL before 0.9.8za, 1.0.0 before 1.0.0m, and 1.0.1 before 1.0.1h allows remote attackers to cause a denial of service (memory corruption) or possibly have unspecified other impact via crafted base64 data that triggers a buffer overflow.
<p>Publish Date: 2015-03-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-0292>CVE-2015-0292</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-0292">https://nvd.nist.gov/vuln/detail/CVE-2015-0292</a></p>
<p>Release Date: 2015-03-19</p>
<p>Fix Resolution: 0.9.8za,1.0.0m,1.0.1h</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in opensslopenssl cve high severity vulnerability vulnerable library opensslopenssl tls ssl and crypto library library home page a href found in head commit a href found in base branch main vulnerable source files crypto evp encode c crypto evp encode c vulnerability details integer underflow in the evp decodeupdate function in crypto evp encode c in the decoding implementation in openssl before before and before allows remote attackers to cause a denial of service memory corruption or possibly have unspecified other impact via crafted data that triggers a buffer overflow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
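The CVE-2015-0292 record above explains the failure mode: an integer underflow in OpenSSL's EVP_DecodeUpdate let crafted base64 input corrupt memory through a buffer overflow. As a hedged illustration of the general defensive pattern (this is not OpenSSL's actual patch, and `safe_b64decode`/`max_out` are invented names), a decoder can bound the worst-case output size with arithmetic that cannot go negative before decoding anything:

```python
import base64
import binascii

def safe_b64decode(data: bytes, max_out: int) -> bytes:
    """Decode base64, refusing output larger than the caller's buffer.

    CVE-2015-0292-class bugs arise when decoded-length bookkeeping
    underflows and a fixed-size buffer is then overrun; bounding the
    estimate up front avoids that class of error. Illustrative sketch only.
    """
    # Every 4 input characters decode to at most 3 bytes, so this is a
    # safe upper bound on the decoded size.
    upper_bound = (len(data) + 3) // 4 * 3
    if upper_bound > max_out:
        raise ValueError("decoded output could exceed the destination buffer")
    try:
        return base64.b64decode(data, validate=True)
    except binascii.Error as exc:
        raise ValueError("malformed base64 input") from exc
```

The actual fixes shipped in 0.9.8za, 1.0.0m and 1.0.1h per the record above; the sketch only shows the length-bounding idea.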
|
12,645
| 2,712,177,273
|
IssuesEvent
|
2015-04-09 12:09:04
|
xgenvn/android-vnc-server
|
https://api.github.com/repos/xgenvn/android-vnc-server
|
closed
|
[PATCH] Running vncserver on beagleboard/0xdroid
|
auto-migrated Priority-Medium Type-Defect
|
```
Hi,
Attached patch fixed running vncserver on beagleboard/0xdroid and possibly
any device without a touch screen. Because faketouch screen always report
zero when query device information, coordinates transformation is not
needed.
```
Original issue reported on code.google.com by `ckanru` on 15 Jan 2010 at 9:40
Attachments:
* [beagleboard_faketouchscreen.diff](https://storage.googleapis.com/google-code-attachments/android-vnc-server/issue-8/comment-0/beagleboard_faketouchscreen.diff)
|
1.0
|
[PATCH] Running vncserver on beagleboard/0xdroid - ```
Hi,
Attached patch fixed running vncserver on beagleboard/0xdroid and possibly
any device without a touch screen. Because the faketouch screen always reports
zero when queried for device information, coordinate transformation is not
needed.
```
Original issue reported on code.google.com by `ckanru` on 15 Jan 2010 at 9:40
Attachments:
* [beagleboard_faketouchscreen.diff](https://storage.googleapis.com/google-code-attachments/android-vnc-server/issue-8/comment-0/beagleboard_faketouchscreen.diff)
|
defect
|
running vncserver on beagleboard hi attached patch fixed running vncserver on beagleboard and possibly any device without a touch screen because the faketouch screen always reports zero when queried for device information coordinate transformation is not needed original issue reported on code google com by ckanru on jan at attachments
| 1
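The patch record above hinges on one observation: a fake touch screen reports zero for its maximum coordinates, so the usual coordinate scaling must be skipped rather than attempted. A minimal sketch of that guard (illustrative only; `scale_touch` and its parameters are not names from the vnc server source):

```python
def scale_touch(x: int, y: int, screen_w: int, screen_h: int,
                dev_max_x: int, dev_max_y: int) -> tuple[int, int]:
    # A faketouch device (as on the beagleboard/0xdroid build above) reports
    # zero when queried for its maximum coordinates; pass input through
    # unchanged instead of dividing by zero in the usual scaling step.
    if dev_max_x == 0 or dev_max_y == 0:
        return x, y
    return x * screen_w // dev_max_x, y * screen_h // dev_max_y
```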
|
684,628
| 23,425,192,407
|
IssuesEvent
|
2022-08-14 09:29:33
|
pvs-hd-tea/MuesliNext
|
https://api.github.com/repos/pvs-hd-tea/MuesliNext
|
closed
|
feat: implement rich text widget
|
Feature Request Low Priority
|
the rich text widget should be able to display formatted static text. The supported syntax should be html, markdown or similar
|
1.0
|
feat: implement rich text widget - the rich text widget should be able to display formatted static text. The supported syntax should be html, markdown or similar
|
non_defect
|
feat implement rich text widget the rich text widget should be able to display formatted static text the supported syntax should be html markdown or similar
| 0
|
65,512
| 19,558,077,221
|
IssuesEvent
|
2022-01-03 12:36:01
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
SnapshotPhase2Operation exception on 5.0.1-SNAPSHOT
|
Type: Defect Source: Internal Team: SQL Module: Jet
|
Using Hazelcast Platform 5.0.1-SNAPSHOT (20211022 - aafdf51)
I get
```
16:31:08.499 WARN hz.naughty_cartwright.cached.thread-6 c.h.jet.impl.MasterSnapshotContext - [10.172.3.5]:5701 [grid] [5.0.1-SNAPSHOT] SnapshotPhase2Operation for snapshot 28 in job 'SlackSQLJob', execution 06fe-73a2-78c8-0001 failed on member: MemberInfo{address=[10.172.0.5]:5701, uuid=54de9756-8659-402c-b745-b8e628b2535d, liteMember=false, memberListJoinVersion=2}=com.hazelcast.jet.impl.exception.ExecutionNotFoundException: job 06fe-73a2-78c4-0001, execution 06fe-73a2-78c8-0001 not found for coordinator [10.172.3.5]:5701 for 'SnapshotPhase2Operation'
com.hazelcast.jet.impl.exception.ExecutionNotFoundException: job 06fe-73a2-78c4-0001, execution 06fe-73a2-78c8-0001 not found for coordinator [10.172.3.5]:5701 for 'SnapshotPhase2Operation'
at com.hazelcast.jet.impl.JobExecutionService.assertExecutionContext(JobExecutionService.java:441)
at com.hazelcast.jet.impl.operation.SnapshotPhase2Operation.doRun(SnapshotPhase2Operation.java:54)
at com.hazelcast.jet.impl.operation.AsyncOperation.run(AsyncOperation.java:54)
at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:189)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:272)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:248)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:469)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:197)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:137)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
```
I have a (huge!) reproducer project. DM me for details
|
1.0
|
SnapshotPhase2Operation exception on 5.0.1-SNAPSHOT - Using Hazelcast Platform 5.0.1-SNAPSHOT (20211022 - aafdf51)
I get
```
16:31:08.499 WARN hz.naughty_cartwright.cached.thread-6 c.h.jet.impl.MasterSnapshotContext - [10.172.3.5]:5701 [grid] [5.0.1-SNAPSHOT] SnapshotPhase2Operation for snapshot 28 in job 'SlackSQLJob', execution 06fe-73a2-78c8-0001 failed on member: MemberInfo{address=[10.172.0.5]:5701, uuid=54de9756-8659-402c-b745-b8e628b2535d, liteMember=false, memberListJoinVersion=2}=com.hazelcast.jet.impl.exception.ExecutionNotFoundException: job 06fe-73a2-78c4-0001, execution 06fe-73a2-78c8-0001 not found for coordinator [10.172.3.5]:5701 for 'SnapshotPhase2Operation'
com.hazelcast.jet.impl.exception.ExecutionNotFoundException: job 06fe-73a2-78c4-0001, execution 06fe-73a2-78c8-0001 not found for coordinator [10.172.3.5]:5701 for 'SnapshotPhase2Operation'
at com.hazelcast.jet.impl.JobExecutionService.assertExecutionContext(JobExecutionService.java:441)
at com.hazelcast.jet.impl.operation.SnapshotPhase2Operation.doRun(SnapshotPhase2Operation.java:54)
at com.hazelcast.jet.impl.operation.AsyncOperation.run(AsyncOperation.java:54)
at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:189)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:272)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:248)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:469)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:197)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:137)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
```
I have a (huge!) reproducer project. DM me for details
|
defect
|
exception on snapshot using hazelcast platform snapshot i get warn hz naughty cartwright cached thread c h jet impl mastersnapshotcontext for snapshot in job slacksqljob execution failed on member memberinfo address uuid litemember false memberlistjoinversion com hazelcast jet impl exception executionnotfoundexception job execution not found for coordinator for com hazelcast jet impl exception executionnotfoundexception job execution not found for coordinator for at com hazelcast jet impl jobexecutionservice assertexecutioncontext jobexecutionservice java at com hazelcast jet impl operation dorun java at com hazelcast jet impl operation asyncoperation run asyncoperation java at com hazelcast spi impl operationservice operation call operation java at com hazelcast spi impl operationservice impl operationrunnerimpl call operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread executerun operationthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java i have a huge reproducer project dm me for details
| 1
|
2,427
| 2,607,902,212
|
IssuesEvent
|
2015-02-26 00:14:04
|
chrsmithdemos/zen-coding
|
https://api.github.com/repos/chrsmithdemos/zen-coding
|
opened
|
New action proposal: Smart tag duplicate
|
auto-migrated Priority-Medium Type-Defect
|
```
There is a little feature in the Zen HTML textmate bundle:
pressing "tab" after closing tag of LI element creates a new LI
right after on a new line.
I think in zen-coding must be a similar feature, but much more
smart:
1. We have some block of code like <li><a
href="#smth">something</a></li>.
2. We have a cursor somewhere on or near </li>.
3. We press a shortcut for proposed action.
4. The action creates a duplicate of the tag scope, but with
content and attributes replaced by "tab stops" like <li><a
href="$1">$2</a></li>$0
I think it can greatly improve and simplify copypasting etc.
```
-----
Original issue reported on code.google.com by `kizmarh` on 25 Feb 2010 at 4:44
|
1.0
|
New action proposal: Smart tag duplicate - ```
There is a little feature in the Zen HTML textmate bundle:
pressing "tab" after closing tag of LI element creates a new LI
right after on a new line.
I think in zen-coding must be a similar feature, but much more
smart:
1. We have some block of code like <li><a
href="#smth">something</a></li>.
2. We have a cursor somewhere on or near </li>.
3. We press a shortcut for proposed action.
4. The action creates a duplicate of the tag scope, but with
content and attributes replaced by "tab stops" like <li><a
href="$1">$2</a></li>$0
I think it can greatly improve and simplify copypasting etc.
```
-----
Original issue reported on code.google.com by `kizmarh` on 25 Feb 2010 at 4:44
|
defect
|
new action proposal smart tag duplicate there is a little feature in the zen html textmate bundle pressing tab after closing tag of li element creates a new li right after on a new line i think in zen coding must be a similar feature but much more smart we have some block of code like a href smth something we have a cursor somewhere on or near we press a shortcut for proposed action the action creates a duplicate of the tag scope but with content and attributes replaced by tab stops like a href i think it can greatly improve and simplify copypasting etc original issue reported on code google com by kizmarh on feb at
| 1
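The proposed action above, duplicating a tag scope while swapping content and attribute values for numbered tab stops, can be sketched as a small text transformation. This is a rough regex-based illustration, not zen-coding's implementation; `to_snippet` is a hypothetical name and it only handles simple double-quoted attributes:

```python
import re

def to_snippet(markup: str) -> str:
    # Replace each double-quoted attribute value, then each run of element
    # text, with sequential tab stops ($1, $2, ...), ending with $0 so the
    # editor's cursor lands after the duplicated scope.
    n = 0

    def stop(_match):
        nonlocal n
        n += 1
        return f"${n}"

    out = re.sub(r'(?<==")[^"]*(?=")', stop, markup)   # attribute values
    out = re.sub(r'(?<=>)[^<>]+(?=<)', stop, out)      # element text
    return out + "$0"
```

Applied to the example from the record, `<li><a href="#smth">something</a></li>` becomes `<li><a href="$1">$2</a></li>$0`.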
|
36,133
| 17,466,717,621
|
IssuesEvent
|
2021-08-06 17:59:05
|
fkk-cz/noire_vehicles
|
https://api.github.com/repos/fkk-cz/noire_vehicles
|
closed
|
2020 Infiniti Q60 Project Black S
|
performance issue
|
This car ingame struggles to stay above 145 on the highway when irl it does have the power and speed to go much faster, not asking for it to go 200mph or anything just a buff so that the car is where it should be in terms of acceleration and top speed.
Thanks
|
True
|
2020 Infiniti Q60 Project Black S - This car ingame struggles to stay above 145 on the highway when irl it does have the power and speed to go much faster, not asking for it to go 200mph or anything just a buff so that the car is where it should be in terms of acceleration and top speed.
Thanks
|
non_defect
|
infiniti project black s this car ingame struggles to stay above on the highway when irl it does have the power and speed to go much faster not asking for it to go or anything just a buff so that the car is where it should be in terms of acceleration and top speed thanks
| 0
|
13,373
| 2,754,669,097
|
IssuesEvent
|
2015-04-25 22:08:10
|
ariana-paris/support-tools
|
https://api.github.com/repos/ariana-paris/support-tools
|
closed
|
Improve reliability of issue migration
|
auto-migrated Component-ExporterTool Priority-High Type-Defect
|
```
There have been a lot of issues reported about migrations failing due to failed
issue migrations. Dumping all of those to this master issue.
The underlying problem is due to underlying problems in App Engine's urlfetch
API, and the fact that the Google Code Exporter aborts the migration if any
GitHub API calls fail. A 0.01% chance of failure across hundreds of issues,
across dozens of comments is a serious problem.
If you delete the GitHub repository and try again, there is a chance it will
work on the next try. We've been making small progress towards reliability
here; but the real fix is to retry GitHub API calls in the event of failure.
(And check if state changing requests like POSTs actually went through.)
```
Original issue reported on code.google.com by `chrsm...@google.com` on 24 Apr 2015 at 10:20
|
1.0
|
Improve reliability of issue migration - ```
There have been a lot of issues reported about migrations failing due to failed
issue migrations. Dumping all of those to this master issue.
The underlying problem is due to underlying problems in App Engine's urlfetch
API, and the fact that the Google Code Exporter aborts the migration if any
GitHub API calls fail. A 0.01% chance of failure across hundreds of issues,
across dozens of comments is a serious problem.
If you delete the GitHub repository and try again, there is a chance it will
work on the next try. We've been making small progress towards reliability
here; but the real fix is to retry GitHub API calls in the event of failure.
(And check if state changing requests like POSTs actually went through.)
```
Original issue reported on code.google.com by `chrsm...@google.com` on 24 Apr 2015 at 10:20
|
defect
|
improve reliability of issue migration there have been a lot of issues reported about migrations failing due to failed issue migrations dumping all of those to this master issue the underlying problem is due to underlying problems in app engine s urlfetch api and the fact that the google code exporter aborts the migration if any github api calls fail a chance of failure across hundreds of issues across dozens of comments is a serious problem if you delete the github repository and try again there is a chance it will work on the next try we ve been making small progress towards reliability here but the real fix is to retry github api calls in the event of failure and check if state changing requests like posts actually went through original issue reported on code google com by chrsm google com on apr at
| 1
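The record above names the real fix: retry GitHub API calls on transient failure, and for state-changing requests such as POSTs, check whether the call actually went through before re-sending it. A hedged sketch of that retry loop follows; `with_retries` and `already_applied` are illustrative names, not code from the exporter tool:

```python
import time

def with_retries(call, *, attempts=3, base_delay=1.0, already_applied=None):
    # Retry a flaky call with exponential backoff. For state-changing
    # requests, the optional 'already_applied' probe checks whether the
    # side effect landed despite the error, so a retry never duplicates it.
    last_exc = None
    for attempt in range(attempts):
        try:
            return call()
        except Exception as exc:  # in practice: narrow to transient HTTP errors
            last_exc = exc
            if already_applied is not None and already_applied():
                return None  # side effect already happened; nothing to re-send
            time.sleep(base_delay * (2 ** attempt))
    raise last_exc
```

A 0.01% per-call failure rate across thousands of calls makes at least one failure likely, which is why aborting the whole migration on the first failed call is the wrong policy.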
|
33,793
| 27,818,133,383
|
IssuesEvent
|
2023-03-18 23:00:49
|
methinks82/devops-capstone-project
|
https://api.github.com/repos/methinks82/devops-capstone-project
|
closed
|
Set up development environment
|
infrastructure
|
**As a** developer
**I need** an environment in which to develop the app
**So that** the app has a consistent environment in which to be developed and tested
### Details and Assumptions
* [document what you know]
### Acceptance Criteria
gherkin
Given [some context]
When [certain action is taken]
Then [the outcome of action is observed]
|
1.0
|
Set up development environment - **As a** developer
**I need** an environment in which to develop the app
**So that** the app has a consistent environment in which to be developed and tested
### Details and Assumptions
* [document what you know]
### Acceptance Criteria
gherkin
Given [some context]
When [certain action is taken]
Then [the outcome of action is observed]
|
non_defect
|
set up development environment as a developer i need an environment in which to develop the app so that the app has a consistent environment in which to be developed and tested details and assumptions acceptance criteria gherkin given when then
| 0
|
73,805
| 24,809,939,735
|
IssuesEvent
|
2022-10-25 08:38:22
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
opened
|
[🐛 Bug]:
|
I-defect needs-triaging
|
### What happened?
I'm running a test on the Grid and I always get this Exception in the console.
I pasted the grid hub log in the Relevant log output section.
org.openqa.selenium.remote.http.ConnectionFailedException: Unable to establish websocket connection to http://10.169.54.25:4444/session/251e675f-7847-44ea-ba81-873fec335bbb/se/cdp
Build info: version: '4.5.2', revision: '702c64f787c'
System info: os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '11.0.16'
Driver info: driver.version: unknown
at org.openqa.selenium.remote.http.netty.NettyWebSocket.<init>(NettyWebSocket.java:102)
at org.openqa.selenium.remote.http.netty.NettyWebSocket.lambda$create$3(NettyWebSocket.java:128)
at org.openqa.selenium.remote.http.netty.NettyClient.openSocket(NettyClient.java:107)
at org.openqa.selenium.devtools.Connection.<init>(Connection.java:77)
at org.openqa.selenium.devtools.SeleniumCdpConnection.<init>(SeleniumCdpConnection.java:34)
at org.openqa.selenium.devtools.SeleniumCdpConnection.lambda$create$0(SeleniumCdpConnection.java:56)
at java.base/java.util.Optional.map(Optional.java:265)
at org.openqa.selenium.devtools.SeleniumCdpConnection.create(SeleniumCdpConnection.java:54)
at org.openqa.selenium.devtools.SeleniumCdpConnection.create(SeleniumCdpConnection.java:47)
at org.openqa.selenium.devtools.DevToolsProvider.getImplementation(DevToolsProvider.java:50)
at org.openqa.selenium.devtools.DevToolsProvider.getImplementation(DevToolsProvider.java:31)
at org.openqa.selenium.remote.Augmenter.augment(Augmenter.java:186)
at org.openqa.selenium.remote.RemoteWebDriverBuilder.build(RemoteWebDriverBuilder.java:375)
### How can we reproduce the issue?
```shell
Run a test on the Grid with Firefox and Win 10 nodes
```
### Relevant log output
```shell
08:01:54.716 DEBUG [SeleniumSpanExporter$1.lambda$export$3] - {"traceId": "cfd62c6db2e414a70bdde42094f0195a","eventTime": 1666684914711825748,"eventName": "Session created by the Distributor","attributes": {"logger": "org.openqa.selenium.grid.distributor.local.LocalDistributor","request.payload": "[Capabilities {acceptInsecureCerts: true, proxy: {httpProxy: 10.163.24.235:443, noProxy: [.jsdelivr.net], proxyType: MANUAL, sslProxy: 10.163.24.235:443}}, Capabilities {acceptInsecureCerts: true, proxy: {httpProxy: 10.163.24.235:443, noProxy: [.jsdelivr.net], proxyType: manual, sslProxy: 10.163.24.235:443}}]","session.capabilities": "{\"acceptInsecureCerts\": true,\"browserName\": \"firefox\",\"browserVersion\": \"106.0.1\",\"moz:accessibilityChecks\": false,\"moz:buildID\": \"20221019185550\",\"moz:geckodriverVersion\": \"0.32.0\",\"moz:headless\": false,\"moz:platformVersion\": \"10.0\",\"moz:processID\": 7088,\"moz:profile\": \"C:\\\\Users\\\\testuser\\\\AppData\\\\Local\\\\Temp\\\\rust_mozprofileJSjQgX\",\"moz:shutdownTimeout\": 60000,\"moz:useNonSpecCompliantPointerOrigin\": false,\"moz:webdriverClick\": true,\"moz:windowless\": false,\"pageLoadStrategy\": \"normal\",\"platformName\": \"WINDOWS\",\"proxy\": {\"httpProxy\": \"10.163.24.235:443\",\"proxyType\": \"MANUAL\",\"noProxy\": [ \".jsdelivr.net\" ], \"sslProxy\": \"10.163.24.235:443\" }, \"se:bidi\": \"ws:\\u002f\\u002f10.169.54.25:4444\\u002fsession\\u002f251e675f-7847-44ea-ba81-873fec335bbb\\u002fse\\u002fbidi\", \"se:cdp\": \"ws:\\u002f\\u002f10.169.54.25:4444\\u002fsession\\u002f251e675f-7847-44ea-ba81-873fec335bbb\\u002fse\\u002fcdp\", \"setWindowRect\": true, \"strictFileInteractability\": false, \"timeouts\": { \"implicit\": 0, \"pageLoad\": 300000, \"script\": 30000 }, \"unhandledPromptBehavior\": \"dismiss and notify\" }\n","session.id": "251e675f-7847-44ea-ba81-873fec335bbb","session.uri": "http:\u002f\u002f10.169.54.29:5555"}}
08:01:54.717 DEBUG [SeleniumSpanExporter$1.lambda$export$4] - SpanData{spanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=d26c94cc620bf820, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=false, valid=true}, parentSpanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=e15472f43f032e40, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=true, valid=true}, resource=Resource{schemaUrl=https://opentelemetry.io/schemas/1.13.0, attributes={service.name="unknown_service:java", telemetry.sdk.language="java", telemetry.sdk.name="opentelemetry", telemetry.sdk.version="1.19.0"}}, instrumentationScopeInfo=InstrumentationScopeInfo{name=default, version=null, schemaUrl=null, attributes={}}, name=sessionqueue.completed, kind=INTERNAL, startEpochNanos=1666684914716000000, endEpochNanos=1666684914716426302, attributes={}, totalAttributeCount=0, events=[], totalRecordedEvents=0, links=[], totalRecordedLinks=0, status=ImmutableStatusData{statusCode=UNSET, description=}, hasEnded=true}
08:01:54.717 DEBUG [SeleniumSpanExporter$1.lambda$export$4] - SpanData{spanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=c41247787e40b782, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=false, valid=true}, parentSpanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=e15472f43f032e40, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=true, valid=true}, resource=Resource{schemaUrl=https://opentelemetry.io/schemas/1.13.0, attributes={service.name="unknown_service:java", telemetry.sdk.language="java", telemetry.sdk.name="opentelemetry", telemetry.sdk.version="1.19.0"}}, instrumentationScopeInfo=InstrumentationScopeInfo{name=default, version=null, schemaUrl=null, attributes={}}, name=sessionqueue.add_to_queue, kind=INTERNAL, startEpochNanos=1666684886448000000, endEpochNanos=1666684914716880703, attributes={}, totalAttributeCount=0, events=[], totalRecordedEvents=0, links=[], totalRecordedLinks=0, status=ImmutableStatusData{statusCode=UNSET, description=}, hasEnded=true}
08:01:54.717 DEBUG [SeleniumSpanExporter$1.lambda$export$4] - SpanData{spanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=1d328f6d4e487b5e, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=false, valid=true}, parentSpanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=e15472f43f032e40, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=true, valid=true}, resource=Resource{schemaUrl=https://opentelemetry.io/schemas/1.13.0, attributes={service.name="unknown_service:java", telemetry.sdk.language="java", telemetry.sdk.name="opentelemetry", telemetry.sdk.version="1.19.0"}}, instrumentationScopeInfo=InstrumentationScopeInfo{name=default, version=null, schemaUrl=null, attributes={}}, name=distributor.poll_queue, kind=INTERNAL, startEpochNanos=1666684910515000000, endEpochNanos=1666684914716751864, attributes=AttributesMap{data={request.id=aa936248-b3bb-421e-bd81-e02beb4149f5}, capacity=128, totalAddedValues=1}, totalAttributeCount=1, events=[], totalRecordedEvents=0, links=[], totalRecordedLinks=0, status=ImmutableStatusData{statusCode=UNSET, description=}, hasEnded=true}
08:01:54.718 DEBUG [SeleniumSpanExporter$1.lambda$export$4] - SpanData{spanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=e15472f43f032e40, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=false, valid=true}, parentSpanContext=ImmutableSpanContext{traceId=00000000000000000000000000000000, spanId=0000000000000000, traceFlags=00, traceState=ArrayBasedTraceState{entries=[]}, remote=false, valid=false}, resource=Resource{schemaUrl=https://opentelemetry.io/schemas/1.13.0, attributes={service.name="unknown_service:java", telemetry.sdk.language="java", telemetry.sdk.name="opentelemetry", telemetry.sdk.version="1.19.0"}}, instrumentationScopeInfo=InstrumentationScopeInfo{name=default, version=null, schemaUrl=null, attributes={}}, name=session_queue, kind=INTERNAL, startEpochNanos=1666684886394000000, endEpochNanos=1666684914717731217, attributes=AttributesMap{data={span.kind=server, http.target=/session, random.key=c6dd9b79-3119-43fc-aab8-57b6fba15e3f, http.method=POST, http.status_code=200}, capacity=128, totalAddedValues=5}, totalAttributeCount=5, events=[ImmutableEventData{name=HTTP request execution complete, attributes={http.flavor=1, http.handler_class="org.openqa.selenium.grid.sessionqueue.local.LocalNewSessionQueue", http.host="10.169.54.25:4444", http.method="POST", http.request_content_length="949", http.scheme="HTTP", http.status_code=200, http.target="/session", http.user_agent="selenium/4.5.2 (java windows)"}, epochNanos=1666684914717719817, totalAttributeCount=9}], totalRecordedEvents=1, links=[], totalRecordedLinks=0, status=ImmutableStatusData{statusCode=OK, description=Kind: OK Description:}, hasEnded=true}
08:01:54.718 DEBUG [SeleniumSpanExporter$1.lambda$export$3] - {"traceId": "cfd62c6db2e414a70bdde42094f0195a","eventTime": 1666684914717719817,"eventName": "HTTP request execution complete","attributes": {"http.flavor": 1,"http.handler_class": "org.openqa.selenium.grid.sessionqueue.local.LocalNewSessionQueue","http.host": "10.169.54.25:4444","http.method": "POST","http.request_content_length": "949","http.scheme": "HTTP","http.status_code": 200,"http.target": "\u002fsession","http.user_agent": "selenium\u002f4.5.2 (java windows)"}}
08:01:54.809 DEBUG [DefaultChannelPool$IdleChannelDetector.run] - Entry count for : http://10.169.54.29:5555 : 1
08:01:55.693 DEBUG [LoggingHandler.channelRead] - [id: 0x61ef8f02, L:/0:0:0:0:0:0:0:0:4444] READ: [id: 0x657b5878, L:/10.169.54.25:4444 - R:/10.65.248.248:61601]
08:01:55.694 DEBUG [LoggingHandler.channelReadComplete] - [id: 0x61ef8f02, L:/0:0:0:0:0:0:0:0:4444] READ COMPLETE
08:01:55.706 DEBUG [ThreadLocalRandom.newSeed] - -Dio.netty.initialSeedUniquifier: 0x98f85fa72f486043
08:01:55.710 DEBUG [NettyConnectListener.writeRequest] - Using new Channel '[id: 0xdb7be0a0, L:/10.169.54.25:60408 - R:/10.169.54.29:5555]' for 'GET' to '/session/251e675f-7847-44ea-ba81-873fec335bbb/se/cdp'
08:01:55.718 DEBUG [WebSocketHandler.handleRead] -
Request DefaultFullHttpRequest(decodeResult: success, version: HTTP/1.1, content: EmptyByteBufBE)
GET /session/251e675f-7847-44ea-ba81-873fec335bbb/se/cdp HTTP/1.1
upgrade: websocket
connection: upgrade
sec-websocket-key: RdtHrLcuAHGzlSOBjI09MQ==
sec-websocket-version: 13
origin: http://10.169.54.29:5555
host: 10.169.54.29:5555
accept: */*
user-agent: AHC/2.1
Response DefaultHttpResponse(decodeResult: success, version: HTTP/1.1)
HTTP/1.1 400 Bad Request
content-length: 15
08:01:55.718 WARN [ProxyWebsocketsIntoGrid$ForwardingListener.onError] - Error proxying websocket command
java.io.IOException: Invalid Status code=400 text=Bad Request
at org.asynchttpclient.netty.handler.WebSocketHandler.abort(WebSocketHandler.java:92)
at org.asynchttpclient.netty.handler.WebSocketHandler.handleRead(WebSocketHandler.java:118)
at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.channelRead(AsyncHttpClientHandler.java:78)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:829)
08:01:55.719 DEBUG [ChannelManager.closeChannel] - Closing Channel [id: 0xdb7be0a0, L:/10.169.54.25:60408 - R:/10.169.54.29:5555]
08:01:55.720 DEBUG [AsyncHttpClientHandler.channelInactive] - Channel Closed: [id: 0xdb7be0a0, L:/10.169.54.25:60408 ! R:/10.169.54.29:5555] with attribute DISCARD
```
### Operating System
Windows 10
### Selenium version
Java 4.5.2
### What are the browser(s) and version(s) where you see this issue?
Firefox 106
### What are the browser driver(s) and version(s) where you see this issue?
Geckodriver 0.32
### Are you using Selenium Grid?
4.5.3
|
1.0
|
[🐛 Bug]: - ### What happened?
I'm running a test on the Grid and I always get this Exception in the console.
I pasted the grid hub log in the Relevant log output section.
org.openqa.selenium.remote.http.ConnectionFailedException: Unable to establish websocket connection to http://10.169.54.25:4444/session/251e675f-7847-44ea-ba81-873fec335bbb/se/cdp
Build info: version: '4.5.2', revision: '702c64f787c'
System info: os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '11.0.16'
Driver info: driver.version: unknown
at org.openqa.selenium.remote.http.netty.NettyWebSocket.<init>(NettyWebSocket.java:102)
at org.openqa.selenium.remote.http.netty.NettyWebSocket.lambda$create$3(NettyWebSocket.java:128)
at org.openqa.selenium.remote.http.netty.NettyClient.openSocket(NettyClient.java:107)
at org.openqa.selenium.devtools.Connection.<init>(Connection.java:77)
at org.openqa.selenium.devtools.SeleniumCdpConnection.<init>(SeleniumCdpConnection.java:34)
at org.openqa.selenium.devtools.SeleniumCdpConnection.lambda$create$0(SeleniumCdpConnection.java:56)
at java.base/java.util.Optional.map(Optional.java:265)
at org.openqa.selenium.devtools.SeleniumCdpConnection.create(SeleniumCdpConnection.java:54)
at org.openqa.selenium.devtools.SeleniumCdpConnection.create(SeleniumCdpConnection.java:47)
at org.openqa.selenium.devtools.DevToolsProvider.getImplementation(DevToolsProvider.java:50)
at org.openqa.selenium.devtools.DevToolsProvider.getImplementation(DevToolsProvider.java:31)
at org.openqa.selenium.remote.Augmenter.augment(Augmenter.java:186)
at org.openqa.selenium.remote.RemoteWebDriverBuilder.build(RemoteWebDriverBuilder.java:375)
### How can we reproduce the issue?
```shell
Run a test on the Grid with Firefox and Win 10 nodes
```
### Relevant log output
```shell
08:01:54.716 DEBUG [SeleniumSpanExporter$1.lambda$export$3] - {"traceId": "cfd62c6db2e414a70bdde42094f0195a","eventTime": 1666684914711825748,"eventName": "Session created by the Distributor","attributes": {"logger": "org.openqa.selenium.grid.distributor.local.LocalDistributor","request.payload": "[Capabilities {acceptInsecureCerts: true, proxy: {httpProxy: 10.163.24.235:443, noProxy: [.jsdelivr.net], proxyType: MANUAL, sslProxy: 10.163.24.235:443}}, Capabilities {acceptInsecureCerts: true, proxy: {httpProxy: 10.163.24.235:443, noProxy: [.jsdelivr.net], proxyType: manual, sslProxy: 10.163.24.235:443}}]","session.capabilities": "{\"acceptInsecureCerts\": true,\"browserName\": \"firefox\",\"browserVersion\": \"106.0.1\",\"moz:accessibilityChecks\": false,\"moz:buildID\": \"20221019185550\",\"moz:geckodriverVersion\": \"0.32.0\",\"moz:headless\": false,\"moz:platformVersion\": \"10.0\",\"moz:processID\": 7088,\"moz:profile\": \"C:\\\\Users\\\\testuser\\\\AppData\\\\Local\\\\Temp\\\\rust_mozprofileJSjQgX\",\"moz:shutdownTimeout\": 60000,\"moz:useNonSpecCompliantPointerOrigin\": false,\"moz:webdriverClick\": true,\"moz:windowless\": false,\"pageLoadStrategy\": \"normal\",\"platformName\": \"WINDOWS\",\"proxy\": {\"httpProxy\": \"10.163.24.235:443\",\"proxyType\": \"MANUAL\",\"noProxy\": [ \".jsdelivr.net\" ], \"sslProxy\": \"10.163.24.235:443\" }, \"se:bidi\": \"ws:\\u002f\\u002f10.169.54.25:4444\\u002fsession\\u002f251e675f-7847-44ea-ba81-873fec335bbb\\u002fse\\u002fbidi\", \"se:cdp\": \"ws:\\u002f\\u002f10.169.54.25:4444\\u002fsession\\u002f251e675f-7847-44ea-ba81-873fec335bbb\\u002fse\\u002fcdp\", \"setWindowRect\": true, \"strictFileInteractability\": false, \"timeouts\": { \"implicit\": 0, \"pageLoad\": 300000, \"script\": 30000 }, \"unhandledPromptBehavior\": \"dismiss and notify\" }\n","session.id": "251e675f-7847-44ea-ba81-873fec335bbb","session.uri": "http:\u002f\u002f10.169.54.29:5555"}}
08:01:54.717 DEBUG [SeleniumSpanExporter$1.lambda$export$4] - SpanData{spanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=d26c94cc620bf820, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=false, valid=true}, parentSpanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=e15472f43f032e40, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=true, valid=true}, resource=Resource{schemaUrl=https://opentelemetry.io/schemas/1.13.0, attributes={service.name="unknown_service:java", telemetry.sdk.language="java", telemetry.sdk.name="opentelemetry", telemetry.sdk.version="1.19.0"}}, instrumentationScopeInfo=InstrumentationScopeInfo{name=default, version=null, schemaUrl=null, attributes={}}, name=sessionqueue.completed, kind=INTERNAL, startEpochNanos=1666684914716000000, endEpochNanos=1666684914716426302, attributes={}, totalAttributeCount=0, events=[], totalRecordedEvents=0, links=[], totalRecordedLinks=0, status=ImmutableStatusData{statusCode=UNSET, description=}, hasEnded=true}
08:01:54.717 DEBUG [SeleniumSpanExporter$1.lambda$export$4] - SpanData{spanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=c41247787e40b782, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=false, valid=true}, parentSpanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=e15472f43f032e40, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=true, valid=true}, resource=Resource{schemaUrl=https://opentelemetry.io/schemas/1.13.0, attributes={service.name="unknown_service:java", telemetry.sdk.language="java", telemetry.sdk.name="opentelemetry", telemetry.sdk.version="1.19.0"}}, instrumentationScopeInfo=InstrumentationScopeInfo{name=default, version=null, schemaUrl=null, attributes={}}, name=sessionqueue.add_to_queue, kind=INTERNAL, startEpochNanos=1666684886448000000, endEpochNanos=1666684914716880703, attributes={}, totalAttributeCount=0, events=[], totalRecordedEvents=0, links=[], totalRecordedLinks=0, status=ImmutableStatusData{statusCode=UNSET, description=}, hasEnded=true}
08:01:54.717 DEBUG [SeleniumSpanExporter$1.lambda$export$4] - SpanData{spanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=1d328f6d4e487b5e, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=false, valid=true}, parentSpanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=e15472f43f032e40, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=true, valid=true}, resource=Resource{schemaUrl=https://opentelemetry.io/schemas/1.13.0, attributes={service.name="unknown_service:java", telemetry.sdk.language="java", telemetry.sdk.name="opentelemetry", telemetry.sdk.version="1.19.0"}}, instrumentationScopeInfo=InstrumentationScopeInfo{name=default, version=null, schemaUrl=null, attributes={}}, name=distributor.poll_queue, kind=INTERNAL, startEpochNanos=1666684910515000000, endEpochNanos=1666684914716751864, attributes=AttributesMap{data={request.id=aa936248-b3bb-421e-bd81-e02beb4149f5}, capacity=128, totalAddedValues=1}, totalAttributeCount=1, events=[], totalRecordedEvents=0, links=[], totalRecordedLinks=0, status=ImmutableStatusData{statusCode=UNSET, description=}, hasEnded=true}
08:01:54.718 DEBUG [SeleniumSpanExporter$1.lambda$export$4] - SpanData{spanContext=ImmutableSpanContext{traceId=cfd62c6db2e414a70bdde42094f0195a, spanId=e15472f43f032e40, traceFlags=01, traceState=ArrayBasedTraceState{entries=[]}, remote=false, valid=true}, parentSpanContext=ImmutableSpanContext{traceId=00000000000000000000000000000000, spanId=0000000000000000, traceFlags=00, traceState=ArrayBasedTraceState{entries=[]}, remote=false, valid=false}, resource=Resource{schemaUrl=https://opentelemetry.io/schemas/1.13.0, attributes={service.name="unknown_service:java", telemetry.sdk.language="java", telemetry.sdk.name="opentelemetry", telemetry.sdk.version="1.19.0"}}, instrumentationScopeInfo=InstrumentationScopeInfo{name=default, version=null, schemaUrl=null, attributes={}}, name=session_queue, kind=INTERNAL, startEpochNanos=1666684886394000000, endEpochNanos=1666684914717731217, attributes=AttributesMap{data={span.kind=server, http.target=/session, random.key=c6dd9b79-3119-43fc-aab8-57b6fba15e3f, http.method=POST, http.status_code=200}, capacity=128, totalAddedValues=5}, totalAttributeCount=5, events=[ImmutableEventData{name=HTTP request execution complete, attributes={http.flavor=1, http.handler_class="org.openqa.selenium.grid.sessionqueue.local.LocalNewSessionQueue", http.host="10.169.54.25:4444", http.method="POST", http.request_content_length="949", http.scheme="HTTP", http.status_code=200, http.target="/session", http.user_agent="selenium/4.5.2 (java windows)"}, epochNanos=1666684914717719817, totalAttributeCount=9}], totalRecordedEvents=1, links=[], totalRecordedLinks=0, status=ImmutableStatusData{statusCode=OK, description=Kind: OK Description:}, hasEnded=true}
08:01:54.718 DEBUG [SeleniumSpanExporter$1.lambda$export$3] - {"traceId": "cfd62c6db2e414a70bdde42094f0195a","eventTime": 1666684914717719817,"eventName": "HTTP request execution complete","attributes": {"http.flavor": 1,"http.handler_class": "org.openqa.selenium.grid.sessionqueue.local.LocalNewSessionQueue","http.host": "10.169.54.25:4444","http.method": "POST","http.request_content_length": "949","http.scheme": "HTTP","http.status_code": 200,"http.target": "\u002fsession","http.user_agent": "selenium\u002f4.5.2 (java windows)"}}
08:01:54.809 DEBUG [DefaultChannelPool$IdleChannelDetector.run] - Entry count for : http://10.169.54.29:5555 : 1
08:01:55.693 DEBUG [LoggingHandler.channelRead] - [id: 0x61ef8f02, L:/0:0:0:0:0:0:0:0:4444] READ: [id: 0x657b5878, L:/10.169.54.25:4444 - R:/10.65.248.248:61601]
08:01:55.694 DEBUG [LoggingHandler.channelReadComplete] - [id: 0x61ef8f02, L:/0:0:0:0:0:0:0:0:4444] READ COMPLETE
08:01:55.706 DEBUG [ThreadLocalRandom.newSeed] - -Dio.netty.initialSeedUniquifier: 0x98f85fa72f486043
08:01:55.710 DEBUG [NettyConnectListener.writeRequest] - Using new Channel '[id: 0xdb7be0a0, L:/10.169.54.25:60408 - R:/10.169.54.29:5555]' for 'GET' to '/session/251e675f-7847-44ea-ba81-873fec335bbb/se/cdp'
08:01:55.718 DEBUG [WebSocketHandler.handleRead] -
Request DefaultFullHttpRequest(decodeResult: success, version: HTTP/1.1, content: EmptyByteBufBE)
GET /session/251e675f-7847-44ea-ba81-873fec335bbb/se/cdp HTTP/1.1
upgrade: websocket
connection: upgrade
sec-websocket-key: RdtHrLcuAHGzlSOBjI09MQ==
sec-websocket-version: 13
origin: http://10.169.54.29:5555
host: 10.169.54.29:5555
accept: */*
user-agent: AHC/2.1
Response DefaultHttpResponse(decodeResult: success, version: HTTP/1.1)
HTTP/1.1 400 Bad Request
content-length: 15
08:01:55.718 WARN [ProxyWebsocketsIntoGrid$ForwardingListener.onError] - Error proxying websocket command
java.io.IOException: Invalid Status code=400 text=Bad Request
at org.asynchttpclient.netty.handler.WebSocketHandler.abort(WebSocketHandler.java:92)
at org.asynchttpclient.netty.handler.WebSocketHandler.handleRead(WebSocketHandler.java:118)
at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.channelRead(AsyncHttpClientHandler.java:78)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:829)
08:01:55.719 DEBUG [ChannelManager.closeChannel] - Closing Channel [id: 0xdb7be0a0, L:/10.169.54.25:60408 - R:/10.169.54.29:5555]
08:01:55.720 DEBUG [AsyncHttpClientHandler.channelInactive] - Channel Closed: [id: 0xdb7be0a0, L:/10.169.54.25:60408 ! R:/10.169.54.29:5555] with attribute DISCARD
```
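The failing exchange in the log above (an HTTP upgrade of `/se/cdp` answered with `400 Bad Request`, which the hub's proxy surfaces as `java.io.IOException: Invalid Status code=400`) can be replayed without Selenium at all. The sketch below is a stdlib-only stand-in, not Selenium code: the stub server plays the role of a node that rejects the CDP upgrade; only the session path and handshake headers are copied from the log, everything else is illustrative.

```python
import base64
import http.client
import http.server
import os
import threading

class StubNode(http.server.BaseHTTPRequestHandler):
    """Stands in for a node that does not accept the /se/cdp websocket upgrade."""
    def do_GET(self):
        body = b"Invalid request"
        self.send_response(400)  # anything but 101 aborts the client handshake
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), StubNode)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Replay the proxy's handshake request from the log.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
key = base64.b64encode(os.urandom(16)).decode()
conn.request("GET", "/session/251e675f-7847-44ea-ba81-873fec335bbb/se/cdp", headers={
    "Upgrade": "websocket",
    "Connection": "upgrade",
    "Sec-WebSocket-Key": key,
    "Sec-WebSocket-Version": "13",
})
resp = conn.getresponse()
print(resp.status, resp.reason)
server.shutdown()
```

A websocket client requires `101 Switching Protocols` here; any other status is treated as a failed handshake, which is exactly what `ProxyWebsocketsIntoGrid$ForwardingListener.onError` reports in the log.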
### Operating System
Windows 10
### Selenium version
Java 4.5.2
### What are the browser(s) and version(s) where you see this issue?
Firefox 106
### What are the browser driver(s) and version(s) where you see this issue?
Geckodriver 0.32
### Are you using Selenium Grid?
4.5.3
|
defect
|
what happened i m running a test on the grid and i always get this exception in the console i paseted the grid hub log in the relevant log output section org openqa selenium remote http connectionfailedexception unable to establish websocket connection to build info version revision system info os name windows os arch os version java version driver info driver version unknown at org openqa selenium remote http netty nettywebsocket nettywebsocket java at org openqa selenium remote http netty nettywebsocket lambda create nettywebsocket java at org openqa selenium remote http netty nettyclient opensocket nettyclient java at org openqa selenium devtools connection connection java at org openqa selenium devtools seleniumcdpconnection seleniumcdpconnection java at org openqa selenium devtools seleniumcdpconnection lambda create seleniumcdpconnection java at java base java util optional map optional java at org openqa selenium devtools seleniumcdpconnection create seleniumcdpconnection java at org openqa selenium devtools seleniumcdpconnection create seleniumcdpconnection java at org openqa selenium devtools devtoolsprovider getimplementation devtoolsprovider java at org openqa selenium devtools devtoolsprovider getimplementation devtoolsprovider java at org openqa selenium remote augmenter augment augmenter java at org openqa selenium remote remotewebdriverbuilder build remotewebdriverbuilder java how can we reproduce the issue shell run a test on the grid with firefox and win nodes relevant log output shell debug traceid eventtime eventname session created by the distributor attributes logger org openqa selenium grid distributor local localdistributor request payload proxytype manual sslproxy capabilities acceptinsecurecerts true proxy httpproxy noproxy proxytype manual sslproxy session capabilities acceptinsecurecerts true browsername firefox browserversion moz accessibilitychecks false moz buildid moz geckodriverversion moz headless false moz platformversion moz 
processid moz profile c users testuser appdata local temp rust mozprofilejsjqgx moz shutdowntimeout moz usenonspeccompliantpointerorigin false moz webdriverclick true moz windowless false pageloadstrategy normal platformname windows proxy httpproxy proxytype manual noproxy sslproxy se bidi ws se cdp ws setwindowrect true strictfileinteractability false timeouts implicit pageload script unhandledpromptbehavior dismiss and notify n session id session uri http debug spandata spancontext immutablespancontext traceid spanid traceflags tracestate arraybasedtracestate entries remote false valid true parentspancontext immutablespancontext traceid spanid traceflags tracestate arraybasedtracestate entries remote true valid true resource resource schemaurl attributes service name unknown service java telemetry sdk language java telemetry sdk name opentelemetry telemetry sdk version instrumentationscopeinfo instrumentationscopeinfo name default version null schemaurl null attributes name sessionqueue completed kind internal startepochnanos endepochnanos attributes totalattributecount events totalrecordedevents links totalrecordedlinks status immutablestatusdata statuscode unset description hasended true debug spandata spancontext immutablespancontext traceid spanid traceflags tracestate arraybasedtracestate entries remote false valid true parentspancontext immutablespancontext traceid spanid traceflags tracestate arraybasedtracestate entries remote true valid true resource resource schemaurl attributes service name unknown service java telemetry sdk language java telemetry sdk name opentelemetry telemetry sdk version instrumentationscopeinfo instrumentationscopeinfo name default version null schemaurl null attributes name sessionqueue add to queue kind internal startepochnanos endepochnanos attributes totalattributecount events totalrecordedevents links totalrecordedlinks status immutablestatusdata statuscode unset description hasended true debug spandata spancontext 
immutablespancontext traceid spanid traceflags tracestate arraybasedtracestate entries remote false valid true parentspancontext immutablespancontext traceid spanid traceflags tracestate arraybasedtracestate entries remote true valid true resource resource schemaurl attributes service name unknown service java telemetry sdk language java telemetry sdk name opentelemetry telemetry sdk version instrumentationscopeinfo instrumentationscopeinfo name default version null schemaurl null attributes name distributor poll queue kind internal startepochnanos endepochnanos attributes attributesmap data request id capacity totaladdedvalues totalattributecount events totalrecordedevents links totalrecordedlinks status immutablestatusdata statuscode unset description hasended true debug spandata spancontext immutablespancontext traceid spanid traceflags tracestate arraybasedtracestate entries remote false valid true parentspancontext immutablespancontext traceid spanid traceflags tracestate arraybasedtracestate entries remote false valid false resource resource schemaurl attributes service name unknown service java telemetry sdk language java telemetry sdk name opentelemetry telemetry sdk version instrumentationscopeinfo instrumentationscopeinfo name default version null schemaurl null attributes name session queue kind internal startepochnanos endepochnanos attributes attributesmap data span kind server http target session random key http method post http status code capacity totaladdedvalues totalattributecount events totalrecordedevents links totalrecordedlinks status immutablestatusdata statuscode ok description kind ok description hasended true debug traceid eventtime eventname http request execution complete attributes http flavor http handler class org openqa selenium grid sessionqueue local localnewsessionqueue http host http method post http request content length http scheme http http status code http target http user agent selenium java windows debug entry count for 
debug read debug read complete debug dio netty initialseeduniquifier debug using new channel for get to session se cdp debug request defaultfullhttprequest decoderesult success version http content emptybytebufbe get session se cdp http upgrade websocket connection upgrade sec websocket key sec websocket version origin host accept user agent ahc response defaulthttpresponse decoderesult success version http http bad request content length warn error proxying websocket command java io ioexception invalid status code text bad request at org asynchttpclient netty handler websockethandler abort websockethandler java at org asynchttpclient netty handler websockethandler handleread websockethandler java at org asynchttpclient netty handler asynchttpclienthandler channelread asynchttpclienthandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel combinedchannelduplexhandler delegatingchannelhandlercontext firechannelread combinedchannelduplexhandler java at io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel combinedchannelduplexhandler channelread combinedchannelduplexhandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io 
netty handler logging logginghandler channelread logginghandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java base java lang thread run thread java debug closing channel debug channel closed with attribute discard operating system windows selenium version java what are the browser s and version s where you see this issue firefox what are the browser driver s and version s where you see this issue geckodriver are you using selenium grid
| 1
|
6,847
| 2,610,297,689
|
IssuesEvent
|
2015-02-26 19:35:42
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Login Issue
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Register a nickname, then login to Hedgewars online with that nickname.
2.
3.
What is the expected output? What do you see instead?
Two different pop-ups, first saying "Connection to server is lost", second
saying "Quit Reason: Authentication Failed."
What version of the product are you using? On what operating system?
version 0.9.15, Macbook OS X Snowleopard.
Please provide any additional information below.
I know my internet is working, and very well at that. This must be a bug in the
application.
```
-----
Original issue reported on code.google.com by `jblade...@gmail.com` on 1 Feb 2011 at 1:59
* Merged into: #180
|
1.0
|
Login Issue - ```
What steps will reproduce the problem?
1. Register a nickname, then login to Hedgewars online with that nickname.
2.
3.
What is the expected output? What do you see instead?
Two different pop-ups, first saying "Connection to server is lost", second
saying "Quit Reason: Authentication Failed."
What version of the product are you using? On what operating system?
version 0.9.15, Macbook OS X Snowleopard.
Please provide any additional information below.
I know my internet is working, and very well at that. This must be a bug in the
application.
```
-----
Original issue reported on code.google.com by `jblade...@gmail.com` on 1 Feb 2011 at 1:59
* Merged into: #180
|
defect
|
login issue what steps will reproduce the problem register a nickname then login to hedgewars online with that nickname what is the expected output what do you see instead two different pop ups first saying connection to server is lost second saying quit reason authentication failed what version of the product are you using on what operating system version macbook os x snowleopard please provide any additional information below i know my internet is working and very well at that this must be a bug in the application original issue reported on code google com by jblade gmail com on feb at merged into
| 1
|
30,540
| 6,154,060,920
|
IssuesEvent
|
2017-06-28 11:41:26
|
el-mejor/LifeTimeV3
|
https://api.github.com/repos/el-mejor/LifeTimeV3
|
closed
|
change of property in grid that causes other properties to (dis-)appear
|
defect
|
Currently the grid will not be updated to display / hide properties immediately. This has to be forced since the grid updates only when another element was chosen (to improve performance).
|
1.0
|
change of property in grid that causes other properties to (dis-)appear - Currently the grid will not be updated to display / hide properties immediately. This has to be forced since the grid updates only when another element was chosen (to improve performance).
|
defect
|
change of property in grid that causes other properties to dis appear currently the grid will not be updated to display hide properties immediately this has to be forces since the grid updates only when another element was chosen to improve performance
| 1
|
17,218
| 2,984,432,624
|
IssuesEvent
|
2015-07-18 00:51:40
|
google/omaha
|
https://api.github.com/repos/google/omaha
|
closed
|
Google Update Check request error
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1.Send Update Check request on the server.
What is the expected output? What do you see instead?
1.Response from the server.
What version of the product are you using? On what operating system?
1. platform="win" version="6.1" sp="Service Pack 1" arch="x64"
Please provide any additional information below.
I'm trying to send a request to the server, but I receive an error report. The
documentation says:
"If the client elects to use SSL, no further integrity checking is needed."
in the attached file, I do request HTTP only to show the body of the request,
and so I use HTTPS. What does such a response from the server mean?
```
Original issue reported on code.google.com by `andrey.s...@gmail.com` on 29 Jan 2014 at 1:02
Attachments:
* [checkUpdate.pcapng](https://storage.googleapis.com/google-code-attachments/omaha/issue-62/comment-0/checkUpdate.pcapng)
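For reference, an Omaha v3 update-check body can be assembled from the attributes quoted above (`platform="win" version="6.1" sp="Service Pack 1" arch="x64"`). The sketch below is illustrative only: the `appid` and `sessionid` GUIDs are placeholders, not real product identifiers, and the element set is the minimal `request`/`os`/`app`/`updatecheck` skeleton of the protocol.

```python
import xml.etree.ElementTree as ET

# Placeholder GUIDs -- substitute the real appid of the product being checked.
request = ET.Element("request", protocol="3.0",
                     sessionid="{00000000-0000-0000-0000-000000000000}")
ET.SubElement(request, "os", platform="win", version="6.1",
              sp="Service Pack 1", arch="x64")
app = ET.SubElement(request, "app",
                    appid="{00000000-0000-0000-0000-000000000000}",
                    version="1.0.0.0")
ET.SubElement(app, "updatecheck")

body = ET.tostring(request, encoding="unicode")
print(body)
```

When this body is POSTed over SSL, no further request signing is needed, which matches the sentence quoted from the documentation in the report.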
|
1.0
|
Google Update Check request error - ```
What steps will reproduce the problem?
1.Send Update Check request on the server.
What is the expected output? What do you see instead?
1.Response from the server.
What version of the product are you using? On what operating system?
1. platform="win" version="6.1" sp="Service Pack 1" arch="x64"
Please provide any additional information below.
I'm trying to send a request to the server, but I receive an error report. The
documentation says:
"If the client elects to use SSL, no further integrity checking is needed."
in the attached file, I do request HTTP only to show the body of the request,
and so I use HTTPS. What does such a response from the server mean?
```
Original issue reported on code.google.com by `andrey.s...@gmail.com` on 29 Jan 2014 at 1:02
Attachments:
* [checkUpdate.pcapng](https://storage.googleapis.com/google-code-attachments/omaha/issue-62/comment-0/checkUpdate.pcapng)
|
defect
|
google update check request error what steps will reproduce the problem send update check request on the server what is the expected output what do you see instead response from the server what version of the product are you using on what operating system platform win version sp service pack arch please provide any additional information below i m trying to send a request to the server but i receive an error report the documentation says if the client elects to use ssl no further integrity checking is needed in the attached file i do request http only to show the body of the request and so i use https what does such response from the server mean original issue reported on code google com by andrey s gmail com on jan at attachments
| 1
|
14,175
| 2,791,978,337
|
IssuesEvent
|
2015-05-10 16:20:17
|
numpy/numpy
|
https://api.github.com/repos/numpy/numpy
|
reopened
|
Bug in np.random.dirichlet for small alpha parameters
|
component: numpy.random Defect
|
Hi,
I encountered a bug when using np.random.dirichlet with small alpha parameters. Call and traceback are below.
```python
ZeroDivisionError Traceback (most recent call last)
<ipython-input-86-73c2067e20c1> in <module>()
----> 1 np.random.dirichlet([0.0001, 0.0, 0.0001])
mtrand.pyx in mtrand.RandomState.dirichlet (numpy/random/mtrand/mtrand.c:24477)()
mtrand.pyx in mtrand.RandomState.dirichlet (numpy/random/mtrand/mtrand.c:24387)()
ZeroDivisionError: float division
```
I am using numpy-1.9.1.
I believe this is a floating point issue, the distribution has almost all of its mass very close to either (1, 0, 0) or (0, 0, 1).
The 'float division' error already occurs for larger values<1, e.g. 0.001.
It is likely that this occurs because the Dirichlet distribution is usually sampled via the Gamma distribution followed by normalization. If all values returned from Gamma sampling are zero, then a float division error occurs.
In addition
```python
np.random.beta(0.0001, 0.0001)
```
produces 'nan' most of the time, while it should be alternating 'almost always' between (1, 0) and
(0, 1)
```python scipy.special.betainc(0.0001, 0.0001, 1e-50) = 0.494..```
It might not be able to fix that in the current algorithmic framework but maybe it is possible to discourage/prevent users from supplying too small parameters.
Wow, this wasn't supposed to become such a long post. Thanks to anyone reading/considering this issue.
|
1.0
|
Bug in np.random.dirichlet for small alpha parameters - Hi,
I encountered a bug when using np.random.dirichlet with small alpha parameters. Call and traceback are below.
```python
ZeroDivisionError Traceback (most recent call last)
<ipython-input-86-73c2067e20c1> in <module>()
----> 1 np.random.dirichlet([0.0001, 0.0, 0.0001])
mtrand.pyx in mtrand.RandomState.dirichlet (numpy/random/mtrand/mtrand.c:24477)()
mtrand.pyx in mtrand.RandomState.dirichlet (numpy/random/mtrand/mtrand.c:24387)()
ZeroDivisionError: float division
```
I am using numpy-1.9.1.
I believe this is a floating point issue, the distribution has almost all of its mass very close to either (1, 0, 0) or (0, 0, 1).
The 'float division' error already occurs for larger values<1, e.g. 0.001.
It is likely that this occurs because the Dirichlet distribution is usually sampled via the Gamma distribution followed by normalization. If all values returned from Gamma sampling are zero, then a float division error occurs.
In addition
```python
np.random.beta(0.0001, 0.0001)
```
produces 'nan' most of the time, while it should be alternating 'almost always' between (1, 0) and
(0, 1)
```python scipy.special.betainc(0.0001, 0.0001, 1e-50) = 0.494..```
It might not be able to fix that in the current algorithmic framework but maybe it is possible to discourage/prevent users from supplying too small parameters.
Wow, this wasn't supposed to become such a long post. Thanks to anyone reading/considering this issue.
|
defect
|
bug in np random dirichlet for small alpha parameters hi i encountered a bug when using np random dirichlet with small alpha parameters call and traceback are below python zerodivisionerror traceback most recent call last in np random dirichlet mtrand pyx in mtrand randomstate dirichlet numpy random mtrand mtrand c mtrand pyx in mtrand randomstate dirichlet numpy random mtrand mtrand c zerodivisionerror float division i am using numpy i believe this is a floating point issue the distribution has almost all of its mass very close to either or the float division error already occurs for larger values e g it is likely that this occurs because of the dirichlet distribution is usually sampled via the gamma distribution followed by normalization if all values returned from gamma sampling are zero than a float division error occurs in addition python np random beta produces nan most of the time while it should be alternating almost always between and python scipy special betainc it might not be able to fix that in the current algorithmic framework but maybe it is possible to discourage prevent users from supplying too small parameters wow this wasn t supposed to become such a long post thanks to anyone reading considering this issue
| 1
|
73,663
| 24,744,917,756
|
IssuesEvent
|
2022-10-21 08:54:30
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
opened
|
Cannot trigger QR code verification for new session
|
T-Defect A-E2EE A-E2EE-Cross-Signing S-Major O-Occasional
|
### Steps to reproduce
1. Log in to a new session on Android
2. Trigger verification from web toast
3. See the green verification toast on Android (not the grey one)
4. See only option for key/passphrase
5. Trigger verification from Android
6. See only option for key/passphrase
Eventually triggering verification from the settings on web gave me the grey verification toast on Android which let me verify with the QR code
### Outcome
#### What did you expect?
Always be offered QR code scan on Android
#### What happened instead?
Could only verify with key or passphrase for a long time
### Your phone model
Pixel 6a
### Operating system version
latest Graphene OS
### Application version and app store
1.5.2 (playstore), SDK 1.5.2, olm 3.2.12
### Homeserver
matrix.org
### Will you send logs?
No
### Are you willing to provide a PR?
No
|
1.0
|
Cannot trigger QR code verification for new session - ### Steps to reproduce
1. Log in to a new session on Android
2. Trigger verification from web toast
3. See the green verification toast on Android (not the grey one)
4. See only option for key/passphrase
5. Trigger verification from Android
6. See only option for key/passphrase
Eventually triggering verification from the settings on web gave me the grey verification toast on Android which let me verify with the QR code
### Outcome
#### What did you expect?
Always be offered QR code scan on Android
#### What happened instead?
Could only verify with key or passphrase for a long time
### Your phone model
Pixel 6a
### Operating system version
latest Graphene OS
### Application version and app store
1.5.2 (playstore), SDK 1.5.2, olm 3.2.12
### Homeserver
matrix.org
### Will you send logs?
No
### Are you willing to provide a PR?
No
|
defect
|
cannot trigger qr code verification for new session steps to reproduce log in to a new session no android trigger verification from web toast see the green verification toast on android not the grey one see only option for key passphrase trigger verification from android see only option for key passphrase eventually triggering verification from the settings on web gave me the grey verification toast on android which let me verify with the qr code outcome what did you expect always be offered qr code scan on android what happened instead could only verify with key or passphrase for a long time your phone model pixel operating system version latest graphene os application version and app store playstore sdk olm homeserver matrix org will you send logs no are you willing to provide a pr no
| 1
|
16,595
| 2,919,879,946
|
IssuesEvent
|
2015-06-24 16:10:36
|
CIAT-DAPA/cwr_gap-analysis-cwr
|
https://api.github.com/repos/CIAT-DAPA/cwr_gap-analysis-cwr
|
closed
|
Interactive map- Adzuki bean not displaying chrome for nora
|
auto-migrated Milestone-Release3.0 Priority-High Type-Defect
|
```
Interactive map- Adzuki bean not displaying chrome for nora
```
Original issue reported on code.google.com by `colin.kh...@gmail.com` on 24 Jun 2014 at 4:18
|
1.0
|
Interactive map- Adzuki bean not displaying chrome for nora - ```
Interactive map- Adzuki bean not displaying chrome for nora
```
Original issue reported on code.google.com by `colin.kh...@gmail.com` on 24 Jun 2014 at 4:18
|
defect
|
interactive map adzuki bean not displaying chrome for nora interactive map adzuki bean not displaying chrome for nora original issue reported on code google com by colin kh gmail com on jun at
| 1
|
180,186
| 30,458,729,791
|
IssuesEvent
|
2023-07-17 04:05:37
|
antrea-io/antrea
|
https://api.github.com/repos/antrea-io/antrea
|
opened
|
Refactor test-vm.sh
|
kind/design
|
The function `deliver_antrea_vm` in test-vm.sh should be refactored to use logic similar to that in test.sh. It should build the Antrea image and the antrea-agent binary first, then deliver them to the required destinations. After that, we need to load and copy the image and apply the YAML files.
|
1.0
|
Refactor test-vm.sh - The function `deliver_antrea_vm` in test-vm.sh should be refactored to use logic similar to that in test.sh. It should build the Antrea image and the antrea-agent binary first, then deliver them to the required destinations. After that, we need to load and copy the image and apply the YAML files.
|
non_defect
|
refactor test vm sh the function deliver antrea vm in test vm sh should be refactored to similar logic as in test sh it should build the antrea image and the antrea agent binary first then deliver to the required destinations after that we need to load and copy the image and apply the yaml files
| 0
|
363
| 3,228,469,566
|
IssuesEvent
|
2015-10-12 02:36:51
|
neuravion/mesh-chat
|
https://api.github.com/repos/neuravion/mesh-chat
|
closed
|
Document the node adding process
|
architecture and design documentation
|
Similar to WASTE
You gotta know somebody
But maybe there could be a public and private network, where the public network allows anyone to join, and the private network requires you to know somebody.
|
1.0
|
Document the node adding process - Similar to WASTE
You gotta know somebody
But maybe there could be a public and private network, where the public network allows anyone to join, and the private network requires you to know somebody.
|
non_defect
|
document the node adding process similar to waste you gotta know somebody but maybe there could be a public and private network where the public network allows anyone to join and the private network requires you to know somebody
| 0
|
113,775
| 24,485,822,808
|
IssuesEvent
|
2022-10-09 12:22:11
|
Swarm-Creative/project-24
|
https://api.github.com/repos/Swarm-Creative/project-24
|
opened
|
Multiplier
|
gameplay code ui-ux effects
|
Create a multiplier system to multiply points as players take specific actions. Aerial kill, triple kill, environment kill
### REQS
- visual feedback hook
- sfx
- ranks
|
1.0
|
Multiplier - Create a multiplier system to multiply points as players take specific actions. Aerial kill, triple kill, environment kill
### REQS
- visual feedback hook
- sfx
- ranks
|
non_defect
|
multiplier create a multiplier system to multiply points as players take specific actions aerial kill triple kill environment kill reqs visual feedback hook sfx ranks
| 0
|
51,346
| 13,635,101,392
|
IssuesEvent
|
2020-09-25 01:53:54
|
nasifimtiazohi/openmrs-module-referenceapplication-2.10.0
|
https://api.github.com/repos/nasifimtiazohi/openmrs-module-referenceapplication-2.10.0
|
opened
|
CVE-2016-7954 (High) detected in bundler-1.1.4.gem
|
security vulnerability
|
## CVE-2016-7954 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bundler-1.1.4.gem</b></p></summary>
<p>Bundler manages an application's dependencies through its entire life, across many machines, systematically and repeatably</p>
<p>Library home page: <a href="https://rubygems.org/gems/bundler-1.1.4.gem">https://rubygems.org/gems/bundler-1.1.4.gem</a></p>
<p>Path to vulnerable library: gem</p>
<p>
Dependency Hierarchy:
- :x: **bundler-1.1.4.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nasifimtiazohi/openmrs-module-referenceapplication-2.10.0/commit/70307a60dc7ec72f4be4d0e10f0f685c3fa95840">70307a60dc7ec72f4be4d0e10f0f685c3fa95840</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Bundler 1.x might allow remote attackers to inject arbitrary Ruby code into an application by leveraging a gem name collision on a secondary source. NOTE: this might overlap CVE-2013-0334.
<p>Publish Date: 2016-12-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-7954>CVE-2016-7954</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://collectiveidea.com/blog/archives/2016/10/06/bundlers-multiple-source-security-vulnerability">https://collectiveidea.com/blog/archives/2016/10/06/bundlers-multiple-source-security-vulnerability</a></p>
<p>Release Date: 2016-12-22</p>
<p>Fix Resolution: 2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2016-7954 (High) detected in bundler-1.1.4.gem - ## CVE-2016-7954 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bundler-1.1.4.gem</b></p></summary>
<p>Bundler manages an application's dependencies through its entire life, across many machines, systematically and repeatably</p>
<p>Library home page: <a href="https://rubygems.org/gems/bundler-1.1.4.gem">https://rubygems.org/gems/bundler-1.1.4.gem</a></p>
<p>Path to vulnerable library: gem</p>
<p>
Dependency Hierarchy:
- :x: **bundler-1.1.4.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nasifimtiazohi/openmrs-module-referenceapplication-2.10.0/commit/70307a60dc7ec72f4be4d0e10f0f685c3fa95840">70307a60dc7ec72f4be4d0e10f0f685c3fa95840</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Bundler 1.x might allow remote attackers to inject arbitrary Ruby code into an application by leveraging a gem name collision on a secondary source. NOTE: this might overlap CVE-2013-0334.
<p>Publish Date: 2016-12-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-7954>CVE-2016-7954</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://collectiveidea.com/blog/archives/2016/10/06/bundlers-multiple-source-security-vulnerability">https://collectiveidea.com/blog/archives/2016/10/06/bundlers-multiple-source-security-vulnerability</a></p>
<p>Release Date: 2016-12-22</p>
<p>Fix Resolution: 2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in bundler gem cve high severity vulnerability vulnerable library bundler gem bundler manages an application s dependencies through its entire life across many machines systematically and repeatably library home page a href path to vulnerable library gem dependency hierarchy x bundler gem vulnerable library found in head commit a href found in base branch master vulnerability details bundler x might allow remote attackers to inject arbitrary ruby code into an application by leveraging a gem name collision on a secondary source note this might overlap cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
7,564
| 2,610,405,589
|
IssuesEvent
|
2015-02-26 20:11:50
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
closed
|
Clone Medic
|
auto-migrated Priority-Medium Type-Defect
|
```
A user claimed that the clone medics aren't doing any actual healing.
```
-----
Original issue reported on code.google.com by `KillerHurdz@netscape.net` on 18 Jul 2011 at 1:05
|
1.0
|
Clone Medic - ```
A user claimed that the clone medics aren't doing any actual healing.
```
-----
Original issue reported on code.google.com by `KillerHurdz@netscape.net` on 18 Jul 2011 at 1:05
|
defect
|
clone medic a user claimed that the clone medics aren t doing any actual healing original issue reported on code google com by killerhurdz netscape net on jul at
| 1
|
75,487
| 25,870,137,993
|
IssuesEvent
|
2022-12-14 01:35:08
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
Stray debug printf on FreeBSD
|
Type: Defect
|
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | FreeBSD
Distribution Version | 13.1-RELEASE-p5
Kernel Version | 13.1-RELEASE-p3
Architecture | amd64
OpenZFS Version | zfs-2.1.4-FreeBSD_g52bad4f23
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
While working on porting podman and buildah to FreeBSD, I have noticed many messages logged on the console that look like this:
```
len 4 vecnum: 126 sizeof (zfs_cmd_t) 4528
```
After debugging this, I discovered that this happens when something tries to set non-blocking i/o on a ```/dev/zfs``` filedescriptor which translates to a ```FIONBIO``` ioctl.
The zfs storage layer for buildah and podman tries to detect whether zfs is available by opening /dev/zfs. The golang runtime libraries unconditionally try to set the descriptor into non-blocking mode, causing the error message.
I can work around this in [containers/storage](https://github.com/containers/storage) but it seems to me that the FreeBSD ZFS port should not print this message to console but instead return a suitable error. Currently it returns ```EINVAL``` after the printf which seems reasonable.
### Describe how to reproduce the problem
On FreeBSD-13.1 or later, install the buildah package and run:
```
# buildah from quay.io/dougrabson/freebsd-minimal:13
```
On the system console, the ```len 4 vecnum...``` message quoted above appears.
### Include any warning/errors/backtraces from the system logs
```
len 4 vecnum: 126 sizeof (zfs_cmd_t) 4528
```
|
1.0
|
Stray debug printf on FreeBSD - ### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | FreeBSD
Distribution Version | 13.1-RELEASE-p5
Kernel Version | 13.1-RELEASE-p3
Architecture | amd64
OpenZFS Version | zfs-2.1.4-FreeBSD_g52bad4f23
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
While working on porting podman and buildah to FreeBSD, I have noticed many messages logged on the console that look like this:
```
len 4 vecnum: 126 sizeof (zfs_cmd_t) 4528
```
After debugging this, I discovered that this happens when something tries to set non-blocking i/o on a ```/dev/zfs``` filedescriptor which translates to a ```FIONBIO``` ioctl.
The zfs storage layer for buildah and podman tries to detect whether zfs is available by opening /dev/zfs. The golang runtime libraries unconditionally try to set the descriptor into non-blocking mode, causing the error message.
I can work around this in [containers/storage](https://github.com/containers/storage) but it seems to me that the FreeBSD ZFS port should not print this message to console but instead return a suitable error. Currently it returns ```EINVAL``` after the printf which seems reasonable.
### Describe how to reproduce the problem
On FreeBSD-13.1 or later, install the buildah package and run:
```
# buildah from quay.io/dougrabson/freebsd-minimal:13
```
On the system console, the ```len 4 vecnum...``` message quoted above appears.
### Include any warning/errors/backtraces from the system logs
```
len 4 vecnum: 126 sizeof (zfs_cmd_t) 4528
```
|
defect
|
stray debug printf on freebsd system information type version name distribution name freebsd distribution version release kernel version release architecture openzfs version zfs freebsd command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing while working on porting podman and buildah to freebsd i have noticed many messages logged on the console that look like this len vecnum sizeof zfs cmd t after debugging this i discovered that this happens when something tries to set non blocking i o on a dev zfs filedescriptor which translates to a fionbio ioctl the zfs storage layer in for buildah and podman tries to detect whether zfs is available by opening dev zfs the golang runtime libraries unconditionally try to set the descriptor into non blocking mode causing the error message i can work around this in but it seems to me that the freebsd zfs port should not print this message to console but instead return a suitable error currently it returns einval after the printf which seems reasonable describe how to reproduce the problem on freebsd or later install the buildah package and run buildah from quay io dougrabson freebsd minimal on the system console the len vecnum message quoted above appears include any warning errors backtraces from the system logs len vecnum sizeof zfs cmd t
| 1
|
27,326
| 4,965,459,266
|
IssuesEvent
|
2016-12-04 09:42:39
|
otros-systems/otroslogviewer
|
https://api.github.com/repos/otros-systems/otroslogviewer
|
closed
|
Empty columns
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Import a log file with no Class / Method information
2.
What is the expected output? What do you see instead?
What is the expected output : Not display empty columns
What do you see instead : Columns with "?"
What version of the product are you using? On what operating system?
2011-10-14 / Win XP SP3
Please provide any additional information below.
```
Original issue reported on code.google.com by `Renan.BE...@gmail.com` on 19 Oct 2011 at 9:01
Attachments:
- [2011-10-19_105857.jpg](https://storage.googleapis.com/google-code-attachments/otroslogviewer/issue-124/comment-0/2011-10-19_105857.jpg)
|
1.0
|
Empty columns - ```
What steps will reproduce the problem?
1. Import a log file with no Class / Method information
2.
What is the expected output? What do you see instead?
What is the expected output : Not display empty columns
What do you see instead : Columns with "?"
What version of the product are you using? On what operating system?
2011-10-14 / Win XP SP3
Please provide any additional information below.
```
Original issue reported on code.google.com by `Renan.BE...@gmail.com` on 19 Oct 2011 at 9:01
Attachments:
- [2011-10-19_105857.jpg](https://storage.googleapis.com/google-code-attachments/otroslogviewer/issue-124/comment-0/2011-10-19_105857.jpg)
|
defect
|
empty columns what steps will reproduce the problem import a log file with no class method information what is the expected output what do you see instead what is the expected output not display empty columns what do you see instead columns with what version of the product are you using on what operating system win xp please provide any additional information below original issue reported on code google com by renan be gmail com on oct at attachments
| 1
|
20,309
| 3,332,715,515
|
IssuesEvent
|
2015-11-11 21:25:06
|
jansorg/BashSupport
|
https://api.github.com/repos/jansorg/BashSupport
|
closed
|
Empty tooltip on hover of function
|
auto-migrated Priority-Medium Type-Defect
|
```
See title.
```
Original issue reported on code.google.com by `wallaby....@googlemail.com` on 23 Nov 2012 at 6:37
|
1.0
|
Empty tooltip on hover of function - ```
See title.
```
Original issue reported on code.google.com by `wallaby....@googlemail.com` on 23 Nov 2012 at 6:37
|
defect
|
empty tooltip on hover of function see title original issue reported on code google com by wallaby googlemail com on nov at
| 1
|
140,840
| 11,363,764,433
|
IssuesEvent
|
2020-01-27 05:47:02
|
DynamoRIO/dynamorio
|
https://api.github.com/repos/DynamoRIO/dynamorio
|
closed
|
Multiple points of failure in drsyms-test with VS2017
|
Component-Tests OpSys-Windows
|
We're trying to move to VS2017 but it is causing multiple failures in drsyms-test. I wanted to document them separately from #2924.
name_available is strangely 4 bytes too large:
```
158: name=|_wctype| sz=7 vs avail=7
158: name=|TrailingUpVec| sz=13 vs avail=13
158: name=|TrailingUpVec| sz=13 vs avail=13
158: name=|__acrt_multibyte_initializer| sz=28 vs avail=28
158: name=|__acrt_locale_changed_data| sz=26 vs avail=26
158: name=|Two52| sz=5 vs avail=5
158: name=|parse_command_line<>| sz=20 vs avail=24 parse_command_line<>_da
```
Then we have lots of type mismatches:
```
158: mismatch: |uninitialize_global_state_isolation| id=1201 type=3,1084 prev=|_isatty|
158: mismatch: |uninitialize_c| id=1287 type=3,1084 prev=|__crt_strtox::parse_integer<>|
158: mismatch: |report_memory_leaks| id=1201 type=3,1084 prev=|page_size|
158: mismatch: |free_environment<>| id=1677 type=3,1499 prev=|__security_check_cookie|
158: mismatch: |`<>::operator()'::`2'::c_exit_complete| id=0 type=1,465 prev=|_imp__ExitProcess|
158: mismatch: |`__local_stdio_scanf_options'::`2'::_OptionsStorage| id=0 type=1,3989 prev=|_NULL_IMPORT_DESCRIPTOR|
```
And finally:
```
158: symbol had wrong mangling:
158: expected: dll_export
158: actual: _dll_export
```
Wait, one more: compound arg `anonymous-namespace'::Foo and ::HasFields at
the top of the expected output are in the reverse order.
The last one is understandable and is our test being too rigid, but the others all seem like bugs or flakiness or at the least undesirable behavior in dbghelp.dll. I'm using '/c/Program Files (x86)/Microsoft Visual Studio/2017/Professional/Common7/IDE/Remote Debugger/x86/dbghelp.dll'.
|
1.0
|
Multiple points of failure in drsyms-test with VS2017 - We're trying to move to VS2017 but it is causing multiple failures in drsyms-test. I wanted to document them separately from #2924.
name_available is strangely 4 bytes too large:
```
158: name=|_wctype| sz=7 vs avail=7
158: name=|TrailingUpVec| sz=13 vs avail=13
158: name=|TrailingUpVec| sz=13 vs avail=13
158: name=|__acrt_multibyte_initializer| sz=28 vs avail=28
158: name=|__acrt_locale_changed_data| sz=26 vs avail=26
158: name=|Two52| sz=5 vs avail=5
158: name=|parse_command_line<>| sz=20 vs avail=24 parse_command_line<>_da
```
Then we have lots of type mismatches:
```
158: mismatch: |uninitialize_global_state_isolation| id=1201 type=3,1084 prev=|_isatty|
158: mismatch: |uninitialize_c| id=1287 type=3,1084 prev=|__crt_strtox::parse_integer<>|
158: mismatch: |report_memory_leaks| id=1201 type=3,1084 prev=|page_size|
158: mismatch: |free_environment<>| id=1677 type=3,1499 prev=|__security_check_cookie|
158: mismatch: |`<>::operator()'::`2'::c_exit_complete| id=0 type=1,465 prev=|_imp__ExitProcess|
158: mismatch: |`__local_stdio_scanf_options'::`2'::_OptionsStorage| id=0 type=1,3989 prev=|_NULL_IMPORT_DESCRIPTOR|
```
And finally:
```
158: symbol had wrong mangling:
158: expected: dll_export
158: actual: _dll_export
```
Wait, one more: compound arg `anonymous-namespace'::Foo and ::HasFields at
the top of the expected output are in the reverse order.
The last one is understandable and is our test being too rigid, but the others all seem like bugs or flakiness or at the least undesirable behavior in dbghelp.dll. I'm using '/c/Program Files (x86)/Microsoft Visual Studio/2017/Professional/Common7/IDE/Remote Debugger/x86/dbghelp.dll'.
|
non_defect
|
multiple points of failure in drsyms test with we re trying to move to but it is causing multiple failures in drsyms test i wanted to document them separately from name available is strangely bytes too large name wctype sz vs avail name trailingupvec sz vs avail name trailingupvec sz vs avail name acrt multibyte initializer sz vs avail name acrt locale changed data sz vs avail name sz vs avail name parse command line sz vs avail parse command line da then we have lots of type mismatches mismatch uninitialize global state isolation id type prev isatty mismatch uninitialize c id type prev crt strtox parse integer mismatch report memory leaks id type prev page size mismatch free environment id type prev security check cookie mismatch operator c exit complete id type prev imp exitprocess mismatch local stdio scanf options optionsstorage id type prev null import descriptor and finally symbol had wrong mangling expected dll export actual dll export wait one more compound arg anonymous namespace foo and hasfields at the top of the expected output are in the reverse order the last one is understandable and is our test being too rigid but the others all seem like bugs or flakiness or at the least undesirable behavior in dbghelp dll i m using c program files microsoft visual studio professional ide remote debugger dbghelp dll
| 0
|
14,430
| 2,811,809,905
|
IssuesEvent
|
2015-05-18 01:41:00
|
RenatoUtsch/nulldc
|
https://api.github.com/repos/RenatoUtsch/nulldc
|
closed
|
Enter one-line summary
|
auto-migrated Priority-Medium Restrict-AddIssueComment-Commit Type-Defect
|
```
What steps will reproduce the problem?
1: trying to run a game
2:
3:
What is the expected output? What do you see instead?
to have the game running, instead of crashing with an error
"nulldc_win32_release-notrace.exe has stopped working" or somewhere along that
line anyway...
The expected output is:
having nulldc working properly and run the game
What version of the product are you using? What build? What plugins?
I use version: 1.04
I use a nullDC_Win32_Release-NoTrace build
I use the following plugins:
PowerVR Plugin: *Edit me*
GDRom Plugin: *Edit me*
AICA Plugin: *Edit me*
ARM7 Plugin: *Edit me*
Maple Plugin(s): *Edit me*
Ext.Device Plugin: *Edit me*
On what kind of system?
Windows 7 32bit using virtualbox from a mac
My system specifications are as follows:
Operating System: win 7 32bit from mac as host for virtualbox
CPU:
Video Card:
Sound Card:
Additional related hardware and/or software:
Please provide any additional information below:
```
Original issue reported on code.google.com by `kurniawa...@gmail.com` on 10 Jan 2014 at 3:10
|
1.0
|
Enter one-line summary - ```
What steps will reproduce the problem?
1: trying to run a game
2:
3:
What is the expected output? What do you see instead?
to have the game running, instead of crashing with an error
"nulldc_win32_release-notrace.exe has stopped working" or somewhere along that
line anyway...
The expected output is:
having nulldc working properly and run the game
What version of the product are you using? What build? What plugins?
I use version: 1.04
I use a nullDC_Win32_Release-NoTrace build
I use the following plugins:
PowerVR Plugin: *Edit me*
GDRom Plugin: *Edit me*
AICA Plugin: *Edit me*
ARM7 Plugin: *Edit me*
Maple Plugin(s): *Edit me*
Ext.Device Plugin: *Edit me*
On what kind of system?
Windows 7 32bit using virtualbox from a mac
My system specifications are as follows:
Operating System: win 7 32bit from mac as host for virtualbox
CPU:
Video Card:
Sound Card:
Additional related hardware and/or software:
Please provide any additional information below:
```
Original issue reported on code.google.com by `kurniawa...@gmail.com` on 10 Jan 2014 at 3:10
|
defect
|
enter one line summary what steps will reproduce the problem trying to run a game what is the expected output what do you see instead to have the game running instead of crashing with an error nulldc release notrace exe has stopped working or somewhere along that line anyway the expected output is having nulldc working properly and run the game what version of the product are you using what build what plugins i use version i use a nulldc release notrace build i use the following plugins powervr plugin edit me gdrom plugin edit me aica plugin edit me plugin edit me maple plugin s edit me ext device plugin edit me on what kind of system windows using virtualbox from a mac my system specifications are as follows operating system win from mac as host for virtualbox cpu video card sound card additional related hardware and or software please provide any additional information below original issue reported on code google com by kurniawa gmail com on jan at
| 1
|
148,799
| 11,865,401,890
|
IssuesEvent
|
2020-03-26 00:16:29
|
rapidsai/cudf
|
https://api.github.com/repos/rapidsai/cudf
|
closed
|
Initialize RMM once per test program instead of once per test suite
|
libcudf (C++/CUDA) proposal tests
|
RMM is currently initialized and finalized in every test case. Enabling pool mode prohibitively increases the test execution time in this setup.
Proposal: move the RMM initialization to a test environment object and instantiate it in every test program. With this, RMM pool mode can be expected to improve execution time.
|
1.0
|
Initialize RMM once per test program instead of once per test suite - RMM is currently initialized and finalized in every test case. Enabling pool mode prohibitively increases the test execution time in this setup.
Proposal: move the RMM initialization to a test environment object and instantiate it in every test program. With this, RMM pool mode can be expected to improve execution time.
|
non_defect
|
initialize rmm once per test program instead of once per test suite rmm is currently initialized and finalized in every test case enabling pool mode prohibitively increases the test execution time in this setup proposal move the rmm initialization to a test environment object and instantiate it in every test program with this rmm pool mode can be expected to improve execution time
| 0
|
378,676
| 11,206,325,825
|
IssuesEvent
|
2020-01-05 20:33:26
|
RaenonX/Jelly-Bot
|
https://api.github.com/repos/RaenonX/Jelly-Bot
|
opened
|
Google email address update on changed in the database
|
mark-working priority-9 type-optimize
|
According to [this blog post](https://dev.to/penelope_zone/changing-your-name-is-a-hard-unsolved-problem-in-computer-science-kjf), Google email address is changeable, hence the email address stored in the database of the application also needs to be updated (at least checked) every time to prevent any loss or unwanted behavior when the user changed their email address.
|
1.0
|
Google email address update on changed in the database - According to [this blog post](https://dev.to/penelope_zone/changing-your-name-is-a-hard-unsolved-problem-in-computer-science-kjf), Google email address is changeable, hence the email address stored in the database of the application also needs to be updated (at least checked) every time to prevent any loss or unwanted behavior when the user changed their email address.
|
non_defect
|
google email address update on changed in the database according to google email address is changeable hence the email address stored in the database of the application also needs to be updated at least checked every time to prevent any loss or unwanted behavior when the user changed their email address
| 0
|
45,547
| 12,839,491,977
|
IssuesEvent
|
2020-07-07 19:23:49
|
cython/cython
|
https://api.github.com/repos/cython/cython
|
closed
|
ValueError continuous memory view and 0-size numpy array
|
Buffers defect
|
A ValueError is raised if one gives a numpy array with shape `(2, 0, 1)` to such Cython functions with a 3d memoryview as argument:
```cython
cimport numpy as np
import numpy as np
np.import_array()
cpdef myfunc3d(np.float64_t[:, :, ::1] arr):
if arr.size > 0:
print(arr[0, 0, 0])
else:
print('size == 0')
```
The code to get the error:
```python
import numpy as np
arr = np.ones((2, 0, 1))
print(arr.flags)
myfunc3d(arr)
```
which gives:
```
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-7-81a397764d36> in <module>()
1 arr = np.ones((2, 0, 1))
2 print(arr.flags)
----> 3 myfunc3d(arr)
_cython_magic_5dfab2d7f2a0e837d79c56566c9147b4.pyx in _cython_magic_5dfab2d7f2a0e837d79c56566c9147b4.myfunc3d()
ValueError: Buffer and memoryview are not contiguous in the same dimension.
In [ ]:
```
Strangely, no error is raised for a shape `(1, 0, 1)`!
|
1.0
|
ValueError continuous memory view and 0-size numpy array - A ValueError is raised if one gives a numpy array with shape `(2, 0, 1)` to such Cython functions with a 3d memoryview as argument:
```cython
cimport numpy as np
import numpy as np
np.import_array()
cpdef myfunc3d(np.float64_t[:, :, ::1] arr):
if arr.size > 0:
print(arr[0, 0, 0])
else:
print('size == 0')
```
The code to get the error:
```python
import numpy as np
arr = np.ones((2, 0, 1))
print(arr.flags)
myfunc3d(arr)
```
which gives:
```
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-7-81a397764d36> in <module>()
1 arr = np.ones((2, 0, 1))
2 print(arr.flags)
----> 3 myfunc3d(arr)
_cython_magic_5dfab2d7f2a0e837d79c56566c9147b4.pyx in _cython_magic_5dfab2d7f2a0e837d79c56566c9147b4.myfunc3d()
ValueError: Buffer and memoryview are not contiguous in the same dimension.
In [ ]:
```
Strangely, no error is raised for a shape `(1, 0, 1)`!
|
defect
|
valueerror continuous memory view and size numpy array a valueerror is raised if one gives a numpy array with shape to such cython functions with a memoryview as argument cython cimport numpy as np import numpy as np np import array cpdef np t arr if arr size print arr else print size the code to get the error python import numpy as np arr np ones print arr flags arr which gives c contiguous true f contiguous true owndata true writeable true aligned true updateifcopy false valueerror traceback most recent call last in arr np ones print arr flags arr cython magic pyx in cython magic valueerror buffer and memoryview are not contiguous in the same dimension in strangely no error is raised for a shape
| 1
|
69,550
| 22,475,251,623
|
IssuesEvent
|
2022-06-22 11:43:45
|
vector-im/element-ios
|
https://api.github.com/repos/vector-im/element-ios
|
closed
|
Message bubbles: Clicking on an avatar does nothing
|
T-Defect S-Minor O-Occasional A-Message-Bubbles Z-Labs
|
### Steps to reproduce
1. Enable message bubbles
2. Go to any room with messages from other people
3. Click on an avatar next to a message
### Outcome
#### What did you expect?
That I get to the "profile" of that person.
#### What happened instead?
Nothing
### Your phone model
iPhone 8
### Operating system version
iOS 15.3
### Application version
Element 1.8.0
### Homeserver
Synapse 1.52.0
### Will you send logs?
No
|
1.0
|
Message bubbles: Clicking on an avatar does nothing - ### Steps to reproduce
1. Enable message bubbles
2. Go to any room with messages from other people
3. Click on an avatar next to a message
### Outcome
#### What did you expect?
That I get to the "profile" of that person.
#### What happened instead?
Nothing
### Your phone model
iPhone 8
### Operating system version
iOS 15.3
### Application version
Element 1.8.0
### Homeserver
Synapse 1.52.0
### Will you send logs?
No
|
defect
|
message bubbles clicking on an avatar does nothing steps to reproduce enable message bubbles go to any room with messages from other people click on an avatar next to a message outcome what did you expect that i get to the profile of that person what happened instead nothing your phone model iphone operating system version ios application version element homeserver synapse will you send logs no
| 1
|
34,926
| 7,472,024,513
|
IssuesEvent
|
2018-04-03 11:14:27
|
RIOT-OS/RIOT
|
https://api.github.com/repos/RIOT-OS/RIOT
|
closed
|
test/periph_i2c: cannot use init_master command for nrf5x_common and EFM32 families
|
bug quality defect tests
|
<!--
If your issue is a usage question, please submit it to the user mailing-list
users@riot-os.org or to the developer mailing-list devel@riot-os.org.
If your issue is related to security, please submit it to the security
mailing-list security@riot-os.org.
-->
#### Description
Cannot use `init_master` command from `test/periph_i2c` on NRF52 based board.
#### Steps to reproduce the issue
call `init_master 0 1` (0 is first I2C bus available and 1 stand for `I2C_SPEED_NORMAL` so 100kbits/s)
#### Expected results
I2C is properly initialized.
#### Actual results
```
init_master 0 1
Error: Init: Unsupported speed value
```
#### Versions
current RIOT master
Current I2C implementation from NRF52 supports `I2C_NORMAL_SPEED` and `I2C_FAST_SPEED`.
From what I understand, nrf52 defines its own i2c_speed_t enum
```
typedef enum {
I2C_SPEED_LOW = 0xff, /**< not supported */
I2C_SPEED_NORMAL = TWIM_FREQUENCY_FREQUENCY_K100, /**< 100kbit/s */
I2C_SPEED_FAST = TWIM_FREQUENCY_FREQUENCY_K400, /**< 400kbit/s */
I2C_SPEED_FAST_PLUS = 0xfe, /**< not supported */
I2C_SPEED_HIGH = 0xfd, /**< not supported */
} i2c_speed_t;
```
When we use ` test/periph_i2c`, this app expects 1 for `I2C_SPEED_NORMAL`, 2 for` I2C_SPEED_FAST` etc.
But since nrf52 has its own enum with custom value in order to ease the init in the driver I guess, this test app failed.
As far as I can see, we can either change the behaviour of the `test/periph_i2c` or rewrite the driver.
Additional notes :
This issue may also exists for efm32 and nrf51 family since they define their own `i2c_speed_t`
|
1.0
|
test/periph_i2c: cannot use init_master command for nrf5x_common and EFM32 families - <!--
If your issue is a usage question, please submit it to the user mailing-list
users@riot-os.org or to the developer mailing-list devel@riot-os.org.
If your issue is related to security, please submit it to the security
mailing-list security@riot-os.org.
-->
#### Description
Cannot use `init_master` command from `test/periph_i2c` on NRF52 based board.
#### Steps to reproduce the issue
call `init_master 0 1` (0 is first I2C bus available and 1 stand for `I2C_SPEED_NORMAL` so 100kbits/s)
#### Expected results
I2C is properly initialized.
#### Actual results
```
init_master 0 1
Error: Init: Unsupported speed value
```
#### Versions
current RIOT master
Current I2C implementation from NRF52 supports `I2C_NORMAL_SPEED` and `I2C_FAST_SPEED`.
From what I understand, nrf52 defines its own i2c_speed_t enum
```
typedef enum {
I2C_SPEED_LOW = 0xff, /**< not supported */
I2C_SPEED_NORMAL = TWIM_FREQUENCY_FREQUENCY_K100, /**< 100kbit/s */
I2C_SPEED_FAST = TWIM_FREQUENCY_FREQUENCY_K400, /**< 400kbit/s */
I2C_SPEED_FAST_PLUS = 0xfe, /**< not supported */
I2C_SPEED_HIGH = 0xfd, /**< not supported */
} i2c_speed_t;
```
When we use ` test/periph_i2c`, this app expects 1 for `I2C_SPEED_NORMAL`, 2 for` I2C_SPEED_FAST` etc.
But since nrf52 has its own enum with custom value in order to ease the init in the driver I guess, this test app failed.
As far as I can see, we can either change the behaviour of the `test/periph_i2c` or rewrite the driver.
Additional notes :
This issue may also exists for efm32 and nrf51 family since they define their own `i2c_speed_t`
|
defect
|
test periph cannot use init master command for common and families if your issue is a usage question please submit it to the user mailing list users riot os org or to the developer mailing list devel riot os org if your issue is related to security please submit it to the security mailing list security riot os org description cannot use init master command from test periph on based board steps to reproduce the issue call init master is first bus available and stand for speed normal so s expected results is properly initialized actual results init master error init unsupported speed value versions current riot master current implementation from supports normal speed and fast speed from what i understand defines its own speed t enum typedef enum speed low not supported speed normal twim frequency frequency s speed fast twim frequency frequency s speed fast plus not supported speed high not supported speed t when we use test periph this app expects for speed normal for speed fast etc but since has its own enum with custom value in order to ease the init in the driver i guess this test app failed as far as i can see we can either change the behaviour of the test periph or rewrite the driver additional notes this issue may also exists for and family since they define their own speed t
| 1
|
176,289
| 6,558,127,506
|
IssuesEvent
|
2017-09-06 20:08:04
|
fossasia/open-event-frontend
|
https://api.github.com/repos/fossasia/open-event-frontend
|
closed
|
Make background of content surrounding area gray
|
bug Priority: Urgent
|
Please make the background of the content surrounding area gray similar to previous implementation.


|
1.0
|
Make background of content surrounding area gray - Please make the background of the content surrounding area gray similar to previous implementation.


|
non_defect
|
make background of content surrounding area gray please make the background of the content surrounding area gray similar to previous implementation
| 0
|
6,166
| 4,164,770,148
|
IssuesEvent
|
2016-06-19 01:55:01
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
Godot Editor Very Slow to Close
|
bug confirmed topic:editor usability
|
When I close the editor it takes an absurdly long time to shut itself down, the really annoying bit is that when this is going on you can't interact with the editor anymore meaning it's completely unresponsive while shutting down, which gives a very unsettling feeling to Godot's shutdown sequence as it makes me worry that Godot is about the crash or something similar.
|
True
|
Godot Editor Very Slow to Close - When I close the editor it takes an absurdly long time to shut itself down, the really annoying bit is that when this is going on you can't interact with the editor anymore meaning it's completely unresponsive while shutting down, which gives a very unsettling feeling to Godot's shutdown sequence as it makes me worry that Godot is about the crash or something similar.
|
non_defect
|
godot editor very slow to close when i close the editor it takes an absurdly long time to shut itself down the really annoying bit is that when this is going on you can t interact with the editor anymore meaning it s completely unresponsive while shutting down which gives a very unsettling feeling to godot s shutdown sequence as it makes me worry that godot is about the crash or something similar
| 0
|
121,013
| 4,803,798,017
|
IssuesEvent
|
2016-11-02 11:20:39
|
onaio/onadata
|
https://api.github.com/repos/onaio/onadata
|
closed
|
TypeError: must be string, not float, export error
|
Error Module: Exports Priority: High Size: Small (≤1)
|
```python
Exception in request: TypeError: must be string, not floatTraceback (most recent call last):
File "/.../onadata/apps/viewer/tasks.py", line 115, in create_xls_export
options
File "/.../onadata/libs/utils/export_tools.py", line 175, in generate_export
options=options, columns_with_hxl=columns_with_hxl
File "/.../onadata/libs/utils/export_builder.py", line 671, in to_xls_export
self.pre_process_row(row, section),
File "/.../onadata/libs/utils/export_builder.py", line 470, in pre_process_row
value, elm['type'])
File "/.../onadata/libs/utils/export_builder.py", line 437, in convert_type
return func(value)
File "/.../onadata/libs/utils/export_builder.py", line 190, in <lambda>
'date': lambda x: ExportBuilder.string_to_date_with_xls_validation(x),
File "/.../onadata/libs/utils/export_builder.py", line 201, in string_to_date_with_xls_validation
date_obj = datetime.strptime(date_str, '%Y-%m-%d').date()
TypeError: must be string, not float
```
|
1.0
|
TypeError: must be string, not float, export error - ```python
Exception in request: TypeError: must be string, not floatTraceback (most recent call last):
File "/.../onadata/apps/viewer/tasks.py", line 115, in create_xls_export
options
File "/.../onadata/libs/utils/export_tools.py", line 175, in generate_export
options=options, columns_with_hxl=columns_with_hxl
File "/.../onadata/libs/utils/export_builder.py", line 671, in to_xls_export
self.pre_process_row(row, section),
File "/.../onadata/libs/utils/export_builder.py", line 470, in pre_process_row
value, elm['type'])
File "/.../onadata/libs/utils/export_builder.py", line 437, in convert_type
return func(value)
File "/.../onadata/libs/utils/export_builder.py", line 190, in <lambda>
'date': lambda x: ExportBuilder.string_to_date_with_xls_validation(x),
File "/.../onadata/libs/utils/export_builder.py", line 201, in string_to_date_with_xls_validation
date_obj = datetime.strptime(date_str, '%Y-%m-%d').date()
TypeError: must be string, not float
```
|
non_defect
|
typeerror must be string not float export error python exception in request typeerror must be string not floattraceback most recent call last file onadata apps viewer tasks py line in create xls export options file onadata libs utils export tools py line in generate export options options columns with hxl columns with hxl file onadata libs utils export builder py line in to xls export self pre process row row section file onadata libs utils export builder py line in pre process row value elm file onadata libs utils export builder py line in convert type return func value file onadata libs utils export builder py line in date lambda x exportbuilder string to date with xls validation x file onadata libs utils export builder py line in string to date with xls validation date obj datetime strptime date str y m d date typeerror must be string not float
| 0
|
1,503
| 2,603,966,608
|
IssuesEvent
|
2015-02-24 18:59:12
|
chrsmith/nishazi6
|
https://api.github.com/repos/chrsmith/nishazi6
|
opened
|
沈阳治疗疱疹的方法
|
auto-migrated Priority-Medium Type-Defect
|
```
沈阳治疗疱疹的方法〓沈陽軍區政治部醫院性病〓TEL:024-3102
3308〓成立于1946年,68年專注于性傳播疾病的研究和治療。位�
��沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的�
��史悠久、設備精良、技術權威、專家云集,是預防、保健、
醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等��
�隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東�
��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍
后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二��
�功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:05
|
1.0
|
沈阳治疗疱疹的方法 - ```
沈阳治疗疱疹的方法〓沈陽軍區政治部醫院性病〓TEL:024-3102
3308〓成立于1946年,68年專注于性傳播疾病的研究和治療。位�
��沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的�
��史悠久、設備精良、技術權威、專家云集,是預防、保健、
醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等��
�隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東�
��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍
后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二��
�功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:05
|
defect
|
沈阳治疗疱疹的方法 沈阳治疗疱疹的方法〓沈陽軍區政治部醫院性病〓tel: 〓 , 。位� �� 。是一所與新中國同建立共輝煌的� ��史悠久、設備精良、技術權威、專家云集,是預防、保健、 醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等�� �隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東� ��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍 后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二�� �功。 original issue reported on code google com by gmail com on jun at
| 1
|