Dataset schema (column, dtype, and value or length range as reported by the dataset preview):

| Column | Dtype | Values / lengths |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 4 to 112 |
| repo_url | stringlengths | 33 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 1.02k |
| labels | stringlengths | 4 to 1.54k |
| body | stringlengths | 1 to 262k |
| index | stringclasses | 17 values |
| text_combine | stringlengths | 95 to 262k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 252k |
| binary_label | int64 | 0 to 1 |
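The schema pairs a categorical `label` column (`test` / `non_test`) with a numeric `binary_label` (1 / 0). A minimal pandas sketch of how the two columns relate, using hypothetical stand-in rows modeled on the sample records that follow (the real dataset file and column contents are much larger):

```python
import pandas as pd

# Hypothetical stand-in rows mirroring the preview's schema;
# values are taken from the sample records shown in this preview.
df = pd.DataFrame(
    {
        "id": [14_938_357_350, 9_395_596_361],
        "type": ["IssuesEvent", "IssuesEvent"],
        "repo": ["commercialhaskell/stackage", "MicrosoftDocs/azure-docs"],
        "label": ["test", "non_test"],
        "binary_label": [1, 0],
    }
)

# binary_label mirrors label: 1 for "test", 0 for "non_test".
test_issues = df[df["binary_label"] == 1]
print(test_issues["repo"].tolist())
```

This is only a sketch of the column semantics; it assumes the preview's `binary_label` is derived directly from `label`, which the sample rows below are consistent with.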
**Row 197,667** (id 14,938,357,350)
- type: IssuesEvent
- created_at: 2021-01-25 15:40:59
- repo: commercialhaskell/stackage
- repo_url: https://api.github.com/repos/commercialhaskell/stackage
- action: opened
- title: dbus test failure
- labels: failure: test-suite
- body:
(also failed in LTS 17)
```
dbus
...
Transport
...
tcp
...
invalid-bind: FAIL
tests/DBusTests/Util.hs:299:
expected exception not thrown
```
CC @rblaze
- index: 1.0
- text_combine:
dbus test failure - (also failed in LTS 17)
```
dbus
...
Transport
...
tcp
...
invalid-bind: FAIL
tests/DBusTests/Util.hs:299:
expected exception not thrown
```
CC @rblaze
- label: test
- text:
dbus test failure also failed in lts dbus transport tcp invalid bind fail tests dbustests util hs expected exception not thrown cc rblaze
- binary_label: 1
**Row 178,755** (id 13,794,653,884)
- type: IssuesEvent
- created_at: 2020-10-09 16:40:27
- repo: sourcegraph/sourcegraph
- repo_url: https://api.github.com/repos/sourcegraph/sourcegraph
- action: opened
- title: Investigate ERR_NAME_NOT_RESOLVED Puppeteer failures in integrationt tests
- labels: bug ops & tools & dev team/web testing
- body:
This has been happening occasionally both in CI and locally for me.
Example: https://buildkite.com/sourcegraph/sourcegraph/builds/75931#d812434d-34eb-482c-afa1-3a47d420c0df/111-2489
This could be a bug in our new adapter because I don't remember this happening in the past months.
- index: 1.0
- text_combine:
Investigate ERR_NAME_NOT_RESOLVED Puppeteer failures in integrationt tests - This has been happening occasionally both in CI and locally for me.
Example: https://buildkite.com/sourcegraph/sourcegraph/builds/75931#d812434d-34eb-482c-afa1-3a47d420c0df/111-2489
This could be a bug in our new adapter because I don't remember this happening in the past months.
- label: test
- text:
investigate err name not resolved puppeteer failures in integrationt tests this has been happening occasionally both in ci and locally for me example this could be a bug in our new adapter because i don t remember this happening in the past months
- binary_label: 1
**Row 11,545** (id 9,395,596,361)
- type: IssuesEvent
- created_at: 2019-04-08 03:25:42
- repo: MicrosoftDocs/azure-docs
- repo_url: https://api.github.com/repos/MicrosoftDocs/azure-docs
- action: closed
- title: Does local development function runtime support this?
- labels: app-service/svc azure-functions/svc cxp doc-enhancement triaged
- body:
It is great that Azure functions bring the runtime to developers computer to ensure they work seamlessly.. any chance you support this referencing in local settings json file?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7feaa3d0-7233-a3cb-cb9b-a9a55b1f0d7a
* Version Independent ID: f41f9d51-52f8-1f45-1043-1344a3b74458
* Content: [Key Vault references - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/app-service-key-vault-references)
* Content Source: [articles/app-service/app-service-key-vault-references.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/app-service-key-vault-references.md)
* Service: **app-service**
* GitHub Login: @mattchenderson
* Microsoft Alias: **mahender**
- index: 1.0
- text_combine:
Does local development function runtime support this? - It is great that Azure functions bring the runtime to developers computer to ensure they work seamlessly.. any chance you support this referencing in local settings json file?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7feaa3d0-7233-a3cb-cb9b-a9a55b1f0d7a
* Version Independent ID: f41f9d51-52f8-1f45-1043-1344a3b74458
* Content: [Key Vault references - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/app-service-key-vault-references)
* Content Source: [articles/app-service/app-service-key-vault-references.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/app-service-key-vault-references.md)
* Service: **app-service**
* GitHub Login: @mattchenderson
* Microsoft Alias: **mahender**
- label: non_test
- text:
does local development function runtime support this it is great that azure functions bring the runtime to developers computer to ensure they work seamlessly any chance you support this referencing in local settings json file document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service app service github login mattchenderson microsoft alias mahender
- binary_label: 0
**Row 247,056** (id 20,955,965,186)
- type: IssuesEvent
- created_at: 2022-03-27 05:02:25
- repo: cricarba/isolucionStatus
- repo_url: https://api.github.com/repos/cricarba/isolucionStatus
- action: closed
- title: 🛑 naturacertTest.isolucion.co is down
- labels: status naturacert-test-isolucion-co
- body:
In [`dcdafa0`](https://github.com/cricarba/isolucionStatus/commit/dcdafa0d6e87d5a47dd094bd1683239a66b1f2a9
), naturacertTest.isolucion.co (https://naturacertTest.isolucion.co) was **down**:
- HTTP code: 0
- Response time: 0 ms
- index: 1.0
- text_combine:
🛑 naturacertTest.isolucion.co is down - In [`dcdafa0`](https://github.com/cricarba/isolucionStatus/commit/dcdafa0d6e87d5a47dd094bd1683239a66b1f2a9
), naturacertTest.isolucion.co (https://naturacertTest.isolucion.co) was **down**:
- HTTP code: 0
- Response time: 0 ms
- label: test
- text:
🛑 naturacerttest isolucion co is down in naturacerttest isolucion co was down http code response time ms
- binary_label: 1
**Row 235,107** (id 19,299,423,313)
- type: IssuesEvent
- created_at: 2021-12-13 02:12:35
- repo: thesofproject/sof
- repo_url: https://api.github.com/repos/thesofproject/sof
- action: closed
- title: [BUG]ipc timed out when CTX_SAVE on TGLU_RVP_NOCODEC_MULTICORE
- labels: bug IPC timeout TGL multicore Intel Linux Daily tests stress CI multicore tplg
- body:
**Describe the bug**
inner daily 5515?model=TGLU_RVP_NOCODEC_MULTICORE&testcase=check-suspend-resume-50
ipc timed out when CTX_SAVE
**To Reproduce**
TPLG=/lib/firmware/intel/sof-tplg/sof-tgl-nocodec-ci.tplg ~/sof-test/test-case/check-suspend-resume.sh -l 500
**Environment**
Kernel Branch: topic/sof-dev
Kernel Commit: 2113dc7f
SOF Branch: main
SOF Commit: 57ee04f2d931
Topology: sof-tgl-nocodec-ci.tplg (pipeline 3,4 on core 3 ; pipeline 8 on core 2)
Platform:TGLU_RVP_NOCODEC_MULTICORE
**Screenshots or console output**
[dmesg from CI test]
```
[ 5570.205582] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: FW Poll Status: reg[0x80]=0x20140000 successful
[ 5570.205607] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ipc tx: 0x40010000: GLB_PM_MSG: CTX_SAVE
[ 5570.707207] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: ipc timed out for 0x40010000 size 76
[ 5570.707212] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ctx_save ipc error -110, proceeding with suspend
[ 5570.707255] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: FW Poll Status: reg[0x4]=0xf010f0f successful
[ 5570.707799] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: FW Poll Status: reg[0x4]=0xf0f successful
[ 5570.707803] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: DSP core(s) enabled? 0 : core_mask 1
[ 5570.707893] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: Debug PCIR: 00000010 at 00000044
[ 5570.709008] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: Current DSP power state: D3
[ 5570.779659] kernel: ACPI: EC: interrupt blocked
[ 5575.517086] kernel: ACPI: EC: interrupt unblocked
```
while I tried it manually , I got IPC timeout on COMP_NEW message:
[dmesg]
```
[28998.489520] sof-audio-pci-intel-tgl 0000:00:1f.3: ipc tx: 0x30010000: GLB_TPLG_MSG: COMP_NEW
[28998.585036] usb 3-4: cannot get connectors status: req = 0x81, wValue = 0x700, wIndex = 0xa00, type = 0
[28998.623774] atkbd serio0: Failed to deactivate keyboard on isa0060/serio0
[28998.643800] nvme nvme0: 8/0/0 default/read/poll queues
[28998.989566] sof-audio-pci-intel-tgl 0000:00:1f.3: ipc rx: 0x90020000: GLB_TRACE_MSG
[28998.989605] sof-audio-pci-intel-tgl 0000:00:1f.3: ipc rx done: 0x90020000: GLB_TRACE_MSG
[28998.991807] sof-audio-pci-intel-tgl 0000:00:1f.3: error: ipc timed out for 0x30010000 size 100
[28998.991819] sof-audio-pci-intel-tgl 0000:00:1f.3: preventing DSP entering D3 state to preserve context
[28998.991823] sof-audio-pci-intel-tgl 0000:00:1f.3: ------------[ IPC dump start ]------------
[28998.991838] sof-audio-pci-intel-tgl 0000:00:1f.3: error: hda irq intsts 0x00000000 intlctl 0xc0000001 rirb 00
[28998.991844] sof-audio-pci-intel-tgl 0000:00:1f.3: error: dsp irq ppsts 0x00000000 adspis 0x00000000
[28998.991852] sof-audio-pci-intel-tgl 0000:00:1f.3: error: host status 0x00000000 dsp status 0x00000000 mask 0x00000003
[28998.991856] sof-audio-pci-intel-tgl 0000:00:1f.3: ------------[ IPC dump end ]------------
[28998.991860] sof-audio-pci-intel-tgl 0000:00:1f.3: ------------[ DSP dump start ]------------
[28998.991871] sof-audio-pci-intel-tgl 0000:00:1f.3: status: fw entered - code 00000005
[28998.992240] sof-audio-pci-intel-tgl 0000:00:1f.3: error: unexpected fault 0x00000000 trace 0x00004000
[28998.992245] sof-audio-pci-intel-tgl 0000:00:1f.3: ------------[ DSP dump end ]------------
[28998.992250] sof-audio-pci-intel-tgl 0000:00:1f.3: error: failed to load widget PGA3.0
[28998.992254] sof-audio-pci-intel-tgl 0000:00:1f.3: error: failed to restore pipeline after resume -110
[28998.992259] PM: dpm_run_callback(): pci_pm_resume+0x0/0x80 returns -110
[28998.992275] sof-audio-pci-intel-tgl 0000:00:1f.3: PM: failed to resume async: error -110
[28999.007807] atkbd serio0: Failed to enable keyboard on isa0060/serio0
```
- index: 1.0
- text_combine:
[BUG]ipc timed out when CTX_SAVE on TGLU_RVP_NOCODEC_MULTICORE - **Describe the bug**
inner daily 5515?model=TGLU_RVP_NOCODEC_MULTICORE&testcase=check-suspend-resume-50
ipc timed out when CTX_SAVE
**To Reproduce**
TPLG=/lib/firmware/intel/sof-tplg/sof-tgl-nocodec-ci.tplg ~/sof-test/test-case/check-suspend-resume.sh -l 500
**Environment**
Kernel Branch: topic/sof-dev
Kernel Commit: 2113dc7f
SOF Branch: main
SOF Commit: 57ee04f2d931
Topology: sof-tgl-nocodec-ci.tplg (pipeline 3,4 on core 3 ; pipeline 8 on core 2)
Platform:TGLU_RVP_NOCODEC_MULTICORE
**Screenshots or console output**
[dmesg from CI test]
```
[ 5570.205582] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: FW Poll Status: reg[0x80]=0x20140000 successful
[ 5570.205607] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ipc tx: 0x40010000: GLB_PM_MSG: CTX_SAVE
[ 5570.707207] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: ipc timed out for 0x40010000 size 76
[ 5570.707212] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ctx_save ipc error -110, proceeding with suspend
[ 5570.707255] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: FW Poll Status: reg[0x4]=0xf010f0f successful
[ 5570.707799] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: FW Poll Status: reg[0x4]=0xf0f successful
[ 5570.707803] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: DSP core(s) enabled? 0 : core_mask 1
[ 5570.707893] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: Debug PCIR: 00000010 at 00000044
[ 5570.709008] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: Current DSP power state: D3
[ 5570.779659] kernel: ACPI: EC: interrupt blocked
[ 5575.517086] kernel: ACPI: EC: interrupt unblocked
```
while I tried it manually , I got IPC timeout on COMP_NEW message:
[dmesg]
```
[28998.489520] sof-audio-pci-intel-tgl 0000:00:1f.3: ipc tx: 0x30010000: GLB_TPLG_MSG: COMP_NEW
[28998.585036] usb 3-4: cannot get connectors status: req = 0x81, wValue = 0x700, wIndex = 0xa00, type = 0
[28998.623774] atkbd serio0: Failed to deactivate keyboard on isa0060/serio0
[28998.643800] nvme nvme0: 8/0/0 default/read/poll queues
[28998.989566] sof-audio-pci-intel-tgl 0000:00:1f.3: ipc rx: 0x90020000: GLB_TRACE_MSG
[28998.989605] sof-audio-pci-intel-tgl 0000:00:1f.3: ipc rx done: 0x90020000: GLB_TRACE_MSG
[28998.991807] sof-audio-pci-intel-tgl 0000:00:1f.3: error: ipc timed out for 0x30010000 size 100
[28998.991819] sof-audio-pci-intel-tgl 0000:00:1f.3: preventing DSP entering D3 state to preserve context
[28998.991823] sof-audio-pci-intel-tgl 0000:00:1f.3: ------------[ IPC dump start ]------------
[28998.991838] sof-audio-pci-intel-tgl 0000:00:1f.3: error: hda irq intsts 0x00000000 intlctl 0xc0000001 rirb 00
[28998.991844] sof-audio-pci-intel-tgl 0000:00:1f.3: error: dsp irq ppsts 0x00000000 adspis 0x00000000
[28998.991852] sof-audio-pci-intel-tgl 0000:00:1f.3: error: host status 0x00000000 dsp status 0x00000000 mask 0x00000003
[28998.991856] sof-audio-pci-intel-tgl 0000:00:1f.3: ------------[ IPC dump end ]------------
[28998.991860] sof-audio-pci-intel-tgl 0000:00:1f.3: ------------[ DSP dump start ]------------
[28998.991871] sof-audio-pci-intel-tgl 0000:00:1f.3: status: fw entered - code 00000005
[28998.992240] sof-audio-pci-intel-tgl 0000:00:1f.3: error: unexpected fault 0x00000000 trace 0x00004000
[28998.992245] sof-audio-pci-intel-tgl 0000:00:1f.3: ------------[ DSP dump end ]------------
[28998.992250] sof-audio-pci-intel-tgl 0000:00:1f.3: error: failed to load widget PGA3.0
[28998.992254] sof-audio-pci-intel-tgl 0000:00:1f.3: error: failed to restore pipeline after resume -110
[28998.992259] PM: dpm_run_callback(): pci_pm_resume+0x0/0x80 returns -110
[28998.992275] sof-audio-pci-intel-tgl 0000:00:1f.3: PM: failed to resume async: error -110
[28999.007807] atkbd serio0: Failed to enable keyboard on isa0060/serio0
```
- label: test
- text:
ipc timed out when ctx save on tglu rvp nocodec multicore describe the bug inner daily model tglu rvp nocodec multicore testcase check suspend resume ipc timed out when ctx save to reproduce tplg lib firmware intel sof tplg sof tgl nocodec ci tplg sof test test case check suspend resume sh l environment kernel branch topic sof dev kernel commit sof branch main sof commit topology sof tgl nocodec ci tplg pipeline on core pipeline on core platform tglu rvp nocodec multicore screenshots or console output kernel sof audio pci intel tgl fw poll status reg successful kernel sof audio pci intel tgl ipc tx glb pm msg ctx save kernel sof audio pci intel tgl error ipc timed out for size kernel sof audio pci intel tgl ctx save ipc error proceeding with suspend kernel sof audio pci intel tgl fw poll status reg successful kernel sof audio pci intel tgl fw poll status reg successful kernel sof audio pci intel tgl dsp core s enabled core mask kernel sof audio pci intel tgl debug pcir at kernel sof audio pci intel tgl current dsp power state kernel acpi ec interrupt blocked kernel acpi ec interrupt unblocked while i tried it manually i got ipc timeout on comp new message sof audio pci intel tgl ipc tx glb tplg msg comp new usb cannot get connectors status req wvalue windex type atkbd failed to deactivate keyboard on nvme default read poll queues sof audio pci intel tgl ipc rx glb trace msg sof audio pci intel tgl ipc rx done glb trace msg sof audio pci intel tgl error ipc timed out for size sof audio pci intel tgl preventing dsp entering state to preserve context sof audio pci intel tgl sof audio pci intel tgl error hda irq intsts intlctl rirb sof audio pci intel tgl error dsp irq ppsts adspis sof audio pci intel tgl error host status dsp status mask sof audio pci intel tgl sof audio pci intel tgl sof audio pci intel tgl status fw entered code sof audio pci intel tgl error unexpected fault trace sof audio pci intel tgl sof audio pci intel tgl error failed to load widget sof audio 
pci intel tgl error failed to restore pipeline after resume pm dpm run callback pci pm resume returns sof audio pci intel tgl pm failed to resume async error atkbd failed to enable keyboard on
- binary_label: 1
**Row 239,435** (id 19,897,375,597)
- type: IssuesEvent
- created_at: 2022-01-25 01:39:52
- repo: DnD-Montreal/session-tome
- repo_url: https://api.github.com/repos/DnD-Montreal/session-tome
- action: opened
- title: Create Cypress tests for Manage Entries
- labels: acceptance test
- body:
## Description
Write E2E Cypress tests for #106 Manage Entries.
## Possible Implementation
- Create a character entry
- Edit a character entry
- Delete a character entry
- View character entries
- index: 1.0
- text_combine:
Create Cypress tests for Manage Entries - ## Description
Write E2E Cypress tests for #106 Manage Entries.
## Possible Implementation
- Create a character entry
- Edit a character entry
- Delete a character entry
- View character entries
- label: test
- text:
create cypress tests for manage entries description write cypress tests for manage entries possible implementation create a character entry edit a character entry delete a character entry view character entries
- binary_label: 1
**Row 7,920** (id 2,942,215,758)
- type: IssuesEvent
- created_at: 2015-07-02 13:09:48
- repo: molgenis/molgenis
- repo_url: https://api.github.com/repos/molgenis/molgenis
- action: opened
- title: IE9 dataexplorer shows scrollbar overlaying the headers
- labels: bug dataexplorer InternetExplorer9 release test v15.07
- body:
For a dataset with 0 items of data, the scrollbar in IE9 is laying over the headers in the table, those are therefore unreadable.
- index: 1.0
- text_combine:
IE9 dataexplorer shows scrollbar overlaying the headers - For a dataset with 0 items of data, the scrollbar in IE9 is laying over the headers in the table, those are therefore unreadable.
- label: test
- text:
dataexplorer shows scrollbar overlaying the headers for a dataset with items of data the scrollbar in is laying over the headers in the table those are therefore unreadable
- binary_label: 1
**Row 21,389** (id 10,606,791,384)
- type: IssuesEvent
- created_at: 2019-10-11 00:55:27
- repo: benchmarkdebricked/thimble.mozilla.org
- repo_url: https://api.github.com/repos/benchmarkdebricked/thimble.mozilla.org
- action: opened
- title: CVE-2019-1010266 (Medium) detected in multiple libraries
- labels: security vulnerability
- body:
## CVE-2019-1010266 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-1.3.1.tgz</b>, <b>lodash-2.2.1.tgz</b>, <b>lodash-3.2.0.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-0.9.2.tgz</b>, <b>lodash-2.4.2.tgz</b></p></summary>
<p>
<details><summary><b>lodash-1.3.1.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.3.1.tgz">https://registry.npmjs.org/lodash/-/lodash-1.3.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/sql/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- sequelize-1.7.10.tgz (Root Library)
- sql-0.35.0.tgz
- :x: **lodash-1.3.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-2.2.1.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.2.1.tgz">https://registry.npmjs.org/lodash/-/lodash-2.2.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/node_modules/webmaker-i18n/node_modules/lodash/package.json,/tmp/ws-scm/thimble.mozilla.org/node_modules/webmaker-i18n/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- webmaker-i18n-0.3.32.tgz (Root Library)
- :x: **lodash-2.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.2.0.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.2.0.tgz">https://registry.npmjs.org/lodash/-/lodash-3.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/id.webmaker.org/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/services/id.webmaker.org/node_modules/xmlbuilder/node_modules/lodash/package.json,/tmp/ws-scm/thimble.mozilla.org/services/id.webmaker.org/node_modules/xmlbuilder/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- jscs-1.11.3.tgz (Root Library)
- xmlbuilder-2.5.2.tgz
- :x: **lodash-3.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/publish.webmaker.org/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/eslint/node_modules/lodash/package.json,/tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/eslint/node_modules/lodash/package.json,/thimble.mozilla.org/services/login.webmaker.org/node_modules/eslint/node_modules/lodash/package.json,/tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/eslint/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-0.9.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz">https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/grunt/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-0.4.5.tgz (Root Library)
- :x: **lodash-0.9.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-jsbeautifier-0.2.8.tgz (Root Library)
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/benchmarkdebricked/thimble.mozilla.org/commit/84b7cf7fc74ac0e17e5e4bc599da92283f9cd37f">84b7cf7fc74ac0e17e5e4bc599da92283f9cd37f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11.
<p>Publish Date: 2019-07-17
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266>CVE-2019-1010266</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266</a></p>
<p>Release Date: 2019-07-17</p>
<p>Fix Resolution: 4.17.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- index: True
- text_combine:
CVE-2019-1010266 (Medium) detected in multiple libraries - ## CVE-2019-1010266 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-1.3.1.tgz</b>, <b>lodash-2.2.1.tgz</b>, <b>lodash-3.2.0.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-0.9.2.tgz</b>, <b>lodash-2.4.2.tgz</b></p></summary>
<p>
<details><summary><b>lodash-1.3.1.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.3.1.tgz">https://registry.npmjs.org/lodash/-/lodash-1.3.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/sql/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- sequelize-1.7.10.tgz (Root Library)
- sql-0.35.0.tgz
- :x: **lodash-1.3.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-2.2.1.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.2.1.tgz">https://registry.npmjs.org/lodash/-/lodash-2.2.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/node_modules/webmaker-i18n/node_modules/lodash/package.json,/tmp/ws-scm/thimble.mozilla.org/node_modules/webmaker-i18n/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- webmaker-i18n-0.3.32.tgz (Root Library)
- :x: **lodash-2.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.2.0.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.2.0.tgz">https://registry.npmjs.org/lodash/-/lodash-3.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/id.webmaker.org/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/services/id.webmaker.org/node_modules/xmlbuilder/node_modules/lodash/package.json,/tmp/ws-scm/thimble.mozilla.org/services/id.webmaker.org/node_modules/xmlbuilder/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- jscs-1.11.3.tgz (Root Library)
- xmlbuilder-2.5.2.tgz
- :x: **lodash-3.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/publish.webmaker.org/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/eslint/node_modules/lodash/package.json,/tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/eslint/node_modules/lodash/package.json,/thimble.mozilla.org/services/login.webmaker.org/node_modules/eslint/node_modules/lodash/package.json,/tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/eslint/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-0.9.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz">https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/grunt/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-0.4.5.tgz (Root Library)
- :x: **lodash-0.9.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-jsbeautifier-0.2.8.tgz (Root Library)
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/benchmarkdebricked/thimble.mozilla.org/commit/84b7cf7fc74ac0e17e5e4bc599da92283f9cd37f">84b7cf7fc74ac0e17e5e4bc599da92283f9cd37f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11.
<p>Publish Date: 2019-07-17
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266>CVE-2019-1010266</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266</a></p>
<p>Release Date: 2019-07-17</p>
<p>Fix Resolution: 4.17.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- label: non_test
- text:
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash tgz lodash tgz lodash tgz lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file tmp ws scm thimble mozilla org services login webmaker org package json path to vulnerable library tmp ws scm thimble mozilla org services login webmaker org node modules sql node modules lodash package json dependency hierarchy sequelize tgz root library sql tgz x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance extras library home page a href path to dependency file tmp ws scm thimble mozilla org services login webmaker org package json path to vulnerable library tmp ws scm thimble mozilla org node modules webmaker node modules lodash package json tmp ws scm thimble mozilla org node modules webmaker node modules lodash package json dependency hierarchy webmaker tgz root library x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file tmp ws scm thimble mozilla org services id webmaker org package json path to vulnerable library tmp ws scm thimble mozilla org services id webmaker org node modules xmlbuilder node modules lodash package json tmp ws scm thimble mozilla org services id webmaker org node modules xmlbuilder node modules lodash package json dependency hierarchy jscs tgz root library xmlbuilder tgz x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file tmp ws scm thimble mozilla org services publish webmaker org package json path to vulnerable library tmp ws scm thimble mozilla org services login webmaker org node modules eslint node modules lodash package json tmp ws scm thimble mozilla org services login webmaker org node modules eslint node 
modules lodash package json thimble mozilla org services login webmaker org node modules eslint node modules lodash package json tmp ws scm thimble mozilla org services login webmaker org node modules eslint node modules lodash package json dependency hierarchy x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file tmp ws scm thimble mozilla org services login webmaker org package json path to vulnerable library tmp ws scm thimble mozilla org services login webmaker org node modules grunt node modules lodash package json dependency hierarchy grunt tgz root library x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance extras library home page a href path to dependency file tmp ws scm thimble mozilla org services login webmaker org package json path to vulnerable library tmp ws scm thimble mozilla org services login webmaker org node modules lodash package json dependency hierarchy grunt jsbeautifier tgz root library x lodash tgz vulnerable library found in head commit a href vulnerability details lodash prior to is affected by cwe uncontrolled resource consumption the impact is denial of service the component is date handler the attack vector is attacker provides very long strings which the library attempts to match using a regular expression the fixed version is publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
466,389
| 13,401,251,831
|
IssuesEvent
|
2020-09-03 17:00:53
|
open-telemetry/opentelemetry-java
|
https://api.github.com/repos/open-telemetry/opentelemetry-java
|
closed
|
Probability sampler only respects positive parent decision
|
Bug help wanted priority:p2 release:required-for-ga
|
~According to [the spec](https://github.com/open-telemetry/opentelemetry-specification/blob/18b2752ebe6c7f0cdd8c7b2bcbdceb0ae3f5ad95/specification/trace/sdk.md#built-in-samplers), the probability sampler's "default behavior should be to trust the parent `SampledFlag`"~ (EDIT: See https://github.com/open-telemetry/opentelemetry-java/issues/1225#issuecomment-668224260). However, in Java, the decision is only respected if sampled=true. If sampled=false, the trace is potentially restarted with the sampler's probability. IMHO, notwithstanding any spec-conformance, this behavior is very weird and surprising.
The responsible condition is this:
https://github.com/open-telemetry/opentelemetry-java/blob/d7f6d5a64176b44752bbd4bd403716a0f150b3e6/sdk/src/main/java/io/opentelemetry/sdk/trace/Samplers.java#L195
|
1.0
|
Probability sampler only respects positive parent decision - ~According to [the spec](https://github.com/open-telemetry/opentelemetry-specification/blob/18b2752ebe6c7f0cdd8c7b2bcbdceb0ae3f5ad95/specification/trace/sdk.md#built-in-samplers), the probability sampler's "default behavior should be to trust the parent `SampledFlag`"~ (EDIT: See https://github.com/open-telemetry/opentelemetry-java/issues/1225#issuecomment-668224260). However, in Java, the decision is only respected if sampled=true. If sampled=false, the trace is potentially restarted with the sampler's probability. IMHO, notwithstanding any spec-conformance, this behavior is very weird and surprising.
The responsible condition is this:
https://github.com/open-telemetry/opentelemetry-java/blob/d7f6d5a64176b44752bbd4bd403716a0f150b3e6/sdk/src/main/java/io/opentelemetry/sdk/trace/Samplers.java#L195
|
non_test
|
probability sampler only respects positive parent decision according to the probability sampler s default behavior should be to trust the parent sampledflag edit see however in java the decision is only respected if sampled true if sampled false the trace is potentially restarted with the sampler s probability imho notwithstanding any spec conformance this behavior is very weird and surprising the responsible condition is this
| 0
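The sampler logic discussed in the record above can be sketched in Python. Everything here is hypothetical (the function name and the 63-bit id bound) — it mirrors the behavior described in the issue, not OpenTelemetry Java's actual `Samplers` implementation:

```python
def should_sample(parent_sampled, trace_id_lower_bits, probability,
                  respect_negative_parent=True):
    """Decide whether to sample a span.

    parent_sampled: True/False when a parent decision exists, None for roots.
    trace_id_lower_bits: integer derived from the trace id, in [0, 2**63).
    probability: sampling probability in [0.0, 1.0].
    """
    if parent_sampled is True:
        return True
    if parent_sampled is False and respect_negative_parent:
        return False
    # Root span (or ignored negative parent): apply the probability bound.
    id_upper_bound = int(probability * (2 ** 63))
    return trace_id_lower_bits < id_upper_bound
```

With `respect_negative_parent=False` the function reproduces the reported surprise: a parent's sampled=false is overridden and the trace may be restarted by the probability roll.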
|
85,512
| 7,974,799,772
|
IssuesEvent
|
2018-07-17 07:16:40
|
edenlabllc/ehealth.api
|
https://api.github.com/repos/edenlabllc/ehealth.api
|
closed
|
DBConnection Error 500 /api/declarations
|
kind/bug status/test
|
we have a number of new errors on prod.
Might be caused by the migrations/indexes.
```
"phoenix": {
"request_id": "697e5759-9799-4e34-9ca1-e0741b229e57#24686",
"message": "** (DBConnection.ConnectionError) tcp recv: closed\n (ecto) lib/ecto/adapters/postgres/connection.ex:108: Ecto.Adapters.Postgres.Connection.execute/4\n (ecto) lib/ecto/adapters/sql.ex:256: Ecto.Adapters.SQL.sql_call/6\n (ecto) lib/ecto/adapters/sql.ex:436: Ecto.Adapters.SQL.execute_or_reset/7\n (ecto) lib/ecto/repo/queryable.ex:133: Ecto.Repo.Queryable.execute/5\n (ecto) lib/ecto/repo/queryable.ex:37: Ecto.Repo.Queryable.all/4\n (scrivener_ecto) lib/scrivener/paginater/ecto/query.ex:15: Scrivener.Paginater.Ecto.Query.paginate/2\n (ops) lib/ops/web/controllers/declaration_controller.ex:13: OPS.Web.DeclarationController.index/2\n (ops) lib/ops/web/controllers/declaration_controller.ex:1: OPS.Web.DeclarationController.action/2\n",
"log_type": "error"
},
```
@AlexKovalevych @vsirius
|
1.0
|
DBConnection Error 500 /api/declarations - we have a number of new errors on prod.
Might be caused by the migrations/indexes.
```
"phoenix": {
"request_id": "697e5759-9799-4e34-9ca1-e0741b229e57#24686",
"message": "** (DBConnection.ConnectionError) tcp recv: closed\n (ecto) lib/ecto/adapters/postgres/connection.ex:108: Ecto.Adapters.Postgres.Connection.execute/4\n (ecto) lib/ecto/adapters/sql.ex:256: Ecto.Adapters.SQL.sql_call/6\n (ecto) lib/ecto/adapters/sql.ex:436: Ecto.Adapters.SQL.execute_or_reset/7\n (ecto) lib/ecto/repo/queryable.ex:133: Ecto.Repo.Queryable.execute/5\n (ecto) lib/ecto/repo/queryable.ex:37: Ecto.Repo.Queryable.all/4\n (scrivener_ecto) lib/scrivener/paginater/ecto/query.ex:15: Scrivener.Paginater.Ecto.Query.paginate/2\n (ops) lib/ops/web/controllers/declaration_controller.ex:13: OPS.Web.DeclarationController.index/2\n (ops) lib/ops/web/controllers/declaration_controller.ex:1: OPS.Web.DeclarationController.action/2\n",
"log_type": "error"
},
```
@AlexKovalevych @vsirius
|
test
|
dbconnection error api declarations we have a number of new errors on prod might be caused by the migrations indexes phoenix request id message dbconnection connectionerror tcp recv closed n ecto lib ecto adapters postgres connection ex ecto adapters postgres connection execute n ecto lib ecto adapters sql ex ecto adapters sql sql call n ecto lib ecto adapters sql ex ecto adapters sql execute or reset n ecto lib ecto repo queryable ex ecto repo queryable execute n ecto lib ecto repo queryable ex ecto repo queryable all n scrivener ecto lib scrivener paginater ecto query ex scrivener paginater ecto query paginate n ops lib ops web controllers declaration controller ex ops web declarationcontroller index n ops lib ops web controllers declaration controller ex ops web declarationcontroller action n log type error alexkovalevych vsirius
| 1
|
150,132
| 11,948,731,671
|
IssuesEvent
|
2020-04-03 12:26:14
|
the-canonizer/canonizer.2.0
|
https://api.github.com/repos/the-canonizer/canonizer.2.0
|
closed
|
robots.txt. Needs to disallow all history pages.
|
17th Release Fixed bug ready to test
|
Only live content should show up in Google searches. No history.
|
1.0
|
robots.txt. Needs to disallow all history pages. -
Only live content should show up in Google searches. No history.
|
test
|
robots txt needs to disallow all history pages only live content should show up in google searches no history
| 1
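Keeping history pages out of crawls, as this record requests, is done with `Disallow` prefixes in robots.txt; below is a minimal Python sketch of that prefix matching (the `/history/` path is a hypothetical example — the record names no concrete URLs):

```python
def is_allowed(path, disallow_prefixes):
    """Simplified robots.txt check: a path is blocked when it starts with
    any Disallow prefix (longest-match and Allow rules are ignored here)."""
    return not any(path.startswith(prefix) for prefix in disallow_prefixes)
```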
|
159,610
| 12,483,745,676
|
IssuesEvent
|
2020-05-30 11:06:44
|
OpenArchive/Save-app-android
|
https://api.github.com/repos/OpenArchive/Save-app-android
|
closed
|
Request from potential partner - auto-login for org. account at IA
|
PLEASE TEST
|
Could we release a version of the App. that has the login built in to the IA so that everyone from one org can just auto-upload to the IA and then access it online through the same account?
|
1.0
|
Request from potential partner - auto-login for org. account at IA - Could we release a version of the App. that has the login built in to the IA so that everyone from one org can just auto-upload to the IA and then access it online through the same account?
|
test
|
request from potential partner auto login for org account at ia could we release a version of the app that has the login built in to the ia so that everyone from one org can just auto upload to the ia and then access it online through the same account
| 1
|
104,734
| 16,621,067,874
|
IssuesEvent
|
2021-06-03 01:08:01
|
rvvergara/react-native-blog
|
https://api.github.com/repos/rvvergara/react-native-blog
|
opened
|
CVE-2020-7789 (Medium) detected in node-notifier-5.4.3.tgz
|
security vulnerability
|
## CVE-2020-7789 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-notifier-5.4.3.tgz</b></p></summary>
<p>A Node.js module for sending notifications on native Mac, Windows (post and pre 8) and Linux (or Growl as fallback)</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.3.tgz">https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.3.tgz</a></p>
<p>Path to dependency file: react-native-blog/package.json</p>
<p>Path to vulnerable library: react-native-blog/node_modules/node-notifier/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.61.5.tgz (Root Library)
- cli-3.2.1.tgz
- :x: **node-notifier-5.4.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package node-notifier before 9.0.0. It allows an attacker to run arbitrary commands on Linux machines due to the options params not being sanitised when being passed an array.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7789>CVE-2020-7789</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution: 9.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7789 (Medium) detected in node-notifier-5.4.3.tgz - ## CVE-2020-7789 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-notifier-5.4.3.tgz</b></p></summary>
<p>A Node.js module for sending notifications on native Mac, Windows (post and pre 8) and Linux (or Growl as fallback)</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.3.tgz">https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.3.tgz</a></p>
<p>Path to dependency file: react-native-blog/package.json</p>
<p>Path to vulnerable library: react-native-blog/node_modules/node-notifier/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.61.5.tgz (Root Library)
- cli-3.2.1.tgz
- :x: **node-notifier-5.4.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package node-notifier before 9.0.0. It allows an attacker to run arbitrary commands on Linux machines due to the options params not being sanitised when being passed an array.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7789>CVE-2020-7789</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution: 9.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in node notifier tgz cve medium severity vulnerability vulnerable library node notifier tgz a node js module for sending notifications on native mac windows post and pre and linux or growl as fallback library home page a href path to dependency file react native blog package json path to vulnerable library react native blog node modules node notifier package json dependency hierarchy react native tgz root library cli tgz x node notifier tgz vulnerable library found in base branch master vulnerability details this affects the package node notifier before it allows an attacker to run arbitrary commands on linux machines due to the options params not being sanitised when being passed an array publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
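CVE-2020-7789 above is an argument-injection bug: option params reach a shell without sanitization. The Python sketch below illustrates the general pattern, not node-notifier's actual code — building a shell string from untrusted input versus quoting each argv element:

```python
import shlex

def notify_command_unsafe(title):
    # Vulnerable pattern: untrusted text is spliced into a shell string,
    # so metacharacters like ';' would be interpreted by the shell.
    return "notify-send " + title

def notify_command_safe(title):
    # Safer pattern: quote each argv element so the payload stays one
    # literal argument.
    return " ".join(shlex.quote(arg) for arg in ["notify-send", title])
```

Neither function runs anything; each just returns the command text that would reach a shell.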
|
386,460
| 26,684,302,305
|
IssuesEvent
|
2023-01-26 20:26:45
|
bcgov/flight-path-monitoring
|
https://api.github.com/repos/bcgov/flight-path-monitoring
|
closed
|
Nomenclature of incursion severity
|
documentation
|
How would you name the incursion zones?
Let's say directly in the habitat areas below the altitude tolerance?
The first buffer zone around the original area?
The second buffer zone.
The code currently uses:
https://github.com/bcgov/flight-path-monitoring/blob/eb264baea8204e65c471e8b05045f998fac8a8a4/py/flightPathAnalysis_uwr.py#L399
We will use that for now.
|
1.0
|
Nomenclature of incursion severity - How would you name the incursion zones?
Let's say directly in the habitat areas below the altitude tolerance?
The first buffer zone around the original area?
The second buffer zone.
The code currently uses:
https://github.com/bcgov/flight-path-monitoring/blob/eb264baea8204e65c471e8b05045f998fac8a8a4/py/flightPathAnalysis_uwr.py#L399
We will use that for now.
|
non_test
|
nomenclature of incursion severity how would you name the incursion zones let s say directly in the habitat areas below the altitude tolerance the first buffer zone around the original area the second buffer zone the code currently uses we will use that for now
| 0
|
97,829
| 8,670,894,372
|
IssuesEvent
|
2018-11-29 17:37:15
|
SME-Issues/issues
|
https://api.github.com/repos/SME-Issues/issues
|
closed
|
Query Payment Tests Comprehension None - 29/11/2018 - 5004
|
NLP Api pulse_tests
|
**Query Payment Tests Comprehension None**
- Total: 26
- Passed: 12
- **Full Pass: 12 (50%)**
- Not Understood: 0
- Error (not understood): 2
- Failed but Understood: 12 (50%)
|
1.0
|
Query Payment Tests Comprehension None - 29/11/2018 - 5004 - **Query Payment Tests Comprehension None**
- Total: 26
- Passed: 12
- **Full Pass: 12 (50%)**
- Not Understood: 0
- Error (not understood): 2
- Failed but Understood: 12 (50%)
|
test
|
query payment tests comprehension none query payment tests comprehension none total passed full pass not understood error not understood failed but understood
| 1
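A note on the arithmetic in this record: 12 of 26 is about 46%, so the 50% figures are presumably taken over the 24 understood tests (26 total minus 2 errors). That reading — an assumption — reproduces both percentages:

```python
total, passed, errors, failed_understood = 26, 12, 2, 12

understood = total - errors             # 24 tests the engine understood
full_pass_pct = 100 * passed // understood
failed_pct = 100 * failed_understood // understood
```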
|
350,269
| 31,876,964,974
|
IssuesEvent
|
2023-09-16 00:38:11
|
CollinHeist/TitleCardMaker-Blueprints
|
https://api.github.com/repos/CollinHeist/TitleCardMaker-Blueprints
|
closed
|
[Blueprint] The Good Place
|
blueprint created passed-tests
|
### Series Name
The Good Place
### Series Year
2016
### Creator Username
_No response_
### Blueprint Description
Overline card with the show's yellow as the line color.
### Blueprint
```json
{
"series": {
"card_type": "overline",
"extra_keys": [
"line_color"
],
"extra_values": [
"rgb(250,227,76)"
],
"template_ids": []
},
"episodes": {},
"templates": [],
"fonts": [],
"preview": "preview.jpg"
}
```
### Preview Title Card

### Zip of Font Files
_No response_
|
1.0
|
[Blueprint] The Good Place - ### Series Name
The Good Place
### Series Year
2016
### Creator Username
_No response_
### Blueprint Description
Overline card with the show's yellow as the line color.
### Blueprint
```json
{
"series": {
"card_type": "overline",
"extra_keys": [
"line_color"
],
"extra_values": [
"rgb(250,227,76)"
],
"template_ids": []
},
"episodes": {},
"templates": [],
"fonts": [],
"preview": "preview.jpg"
}
```
### Preview Title Card

### Zip of Font Files
_No response_
|
test
|
the good place series name the good place series year creator username no response blueprint description overline card with the show s yellow as the line color blueprint json series card type overline extra keys line color extra values rgb template ids episodes templates fonts preview preview jpg preview title card zip of font files no response
| 1
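The blueprint in the record above is plain JSON, so its fields can be checked mechanically. Pairing `extra_keys` with `extra_values` positionally is an assumption about how a consumer would read them; TitleCardMaker's actual loader is not shown here:

```python
import json

# Blueprint JSON copied from the record above.
blueprint = json.loads("""
{
  "series": {
    "card_type": "overline",
    "extra_keys": ["line_color"],
    "extra_values": ["rgb(250,227,76)"],
    "template_ids": []
  },
  "episodes": {},
  "templates": [],
  "fonts": [],
  "preview": "preview.jpg"
}
""")

# Zip keys with values to recover the extras as a mapping.
extras = dict(zip(blueprint["series"]["extra_keys"],
                  blueprint["series"]["extra_values"]))
```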
|
22,969
| 20,761,795,085
|
IssuesEvent
|
2022-03-15 16:47:03
|
FStarLang/FStar
|
https://api.github.com/repos/FStarLang/FStar
|
closed
|
Lax type checking more lax than expected
|
status/wont-fix area/error-messages area/usability
|
I'm documenting this behaviour which may come across as surprising:
```F#
module M
type u (n:int) =
(match n with
| 0 -> bool
| _ -> bool)
val f: n:int -> u n
let f _ = ()
```
```
$ fstar.exe --lax --debug_level Extreme --debug M M.fst
...
Computed return type unit; expected type (u uu___)
check_and_ascribe: type is unit<:(u uu___) guard is
{guard_f=(has_type () (u uu___));
deferred={
};
univ_ineqs={Solving for {}; inequalities are {}};
implicits={}}
, {}
...
All verification conditions discharged successfully
```
What's happening is that F* queries Z3 to decide whether `()` has type `u n`, but in `--lax` mode Z3 queries are admitted. Without `--lax`, the query is emitted and fails as expected:
```
Encoding query formula: (l_Forall (fun uu___ -> (has_type () (u uu___@0))))
Encoding binders uu___
Z3 says: unknown
```
|
True
|
Lax type checking more lax than expected - I'm documenting this behaviour which may come across as surprising:
```F#
module M
type u (n:int) =
(match n with
| 0 -> bool
| _ -> bool)
val f: n:int -> u n
let f _ = ()
```
```
$ fstar.exe --lax --debug_level Extreme --debug M M.fst
...
Computed return type unit; expected type (u uu___)
check_and_ascribe: type is unit<:(u uu___) guard is
{guard_f=(has_type () (u uu___));
deferred={
};
univ_ineqs={Solving for {}; inequalities are {}};
implicits={}}
, {}
...
All verification conditions discharged successfully
```
What's happening is that F* queries Z3 to decide whether `()` has type `u n`, but in `--lax` mode Z3 queries are admitted. Without `--lax`, the query is emitted and fails as expected:
```
Encoding query formula: (l_Forall (fun uu___ -> (has_type () (u uu___@0))))
Encoding binders uu___
Z3 says: unknown
```
|
non_test
|
lax type checking more lax than expected i m documenting this behaviour which may come across as surprising f module m type u n int match n with bool bool val f n int u n let f fstar exe lax debug level extreme debug m m fst computed return type unit expected type u uu check and ascribe type is unit u uu guard is guard f has type u uu deferred univ ineqs solving for inequalities are implicits all verification conditions discharged successfully what s happening is that f queries to decide whether has type u n but in lax mode queries are admitted without lax the query is emitted and fails as expected encoding query formula l forall fun uu has type u uu encoding binders uu says unknown
| 0
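The control flow described in the F* record — `--lax` admits SMT guards instead of discharging them — can be sketched with a toy checker in Python (hypothetical names; this is not F*'s implementation):

```python
def solver(query):
    """Stand-in SMT solver: recognizes only a trivially valid query."""
    return "valid" if query == "true" else "unknown"

def discharge_guard(query, lax=False):
    """Report whether a verification condition counts as discharged.

    In lax mode the guard is admitted without consulting the solver,
    which is why the lax run accepts the ill-typed example while a
    normal run emits the query and fails on the solver's 'unknown'.
    """
    if lax:
        return True  # admitted, never sent to the solver
    return solver(query) == "valid"
```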
|
279,041
| 24,194,196,959
|
IssuesEvent
|
2022-09-23 21:08:11
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
closed
|
The QA table displays as broken if the original is an Azure AD attached table without assigning rbac role to its associated azure account
|
:heavy_check_mark: merged 🧪 testing :gear: tables :gear: quick access
|
**Storage Explorer Version**: 1.26.0-dev
**Build Number**: 20220920.2
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 22.04/MacOS Monterey 12.5.1 (Apple M1 Pro)
**Reproduce Languages**: All languages
**Architecture**: ia32/x64
**How Found**: Exploratory testing
**Regression From**: Not a regression
## Steps to Reproduce ##
1. Expand one storage account -> Tables.
2. Create a table -> Copy the URL of the table.
3. Open the connect dialog -> Click 'Table -> Sign in using Azure Active Directory (Azure AD)' -> Click 'Next'.
4. Select one Azure account (Not assign rbac role)-> Click 'Next'.
5. Paste the URL -> Connect the table.
6. The attached table auto opens and there is an error info bar indicating 'AuthorizationPermissionMismatch'.
7. Right click the attached table -> Click 'Pin to Quick Access'.
8. Check the QA table shows well.
## Expected Experience ##
The QA table shows well.
## Actual Experience ##
The QA table shows as broken.
Here is the error details:

## Additional Context ##
1. This issue doesn't reproduce for blob containers/queues.
2. The Azure account associated with the azure ad attached hasn't been assigned the rbac role.
|
1.0
|
The QA table displays as broken if the original is an Azure AD attached table without assigning rbac role to its associated azure account - **Storage Explorer Version**: 1.26.0-dev
**Build Number**: 20220920.2
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 22.04/MacOS Monterey 12.5.1 (Apple M1 Pro)
**Reproduce Languages**: All languages
**Architecture**: ia32/x64
**How Found**: Exploratory testing
**Regression From**: Not a regression
## Steps to Reproduce ##
1. Expand one storage account -> Tables.
2. Create a table -> Copy the URL of the table.
3. Open the connect dialog -> Click 'Table -> Sign in using Azure Active Directory (Azure AD)' -> Click 'Next'.
4. Select one Azure account (Not assign rbac role)-> Click 'Next'.
5. Paste the URL -> Connect the table.
6. The attached table auto opens and there is an error info bar indicating 'AuthorizationPermissionMismatch'.
7. Right click the attached table -> Click 'Pin to Quick Access'.
8. Check the QA table shows well.
## Expected Experience ##
The QA table shows well.
## Actual Experience ##
The QA table shows as broken.
Here is the error details:

## Additional Context ##
1. This issue doesn't reproduce for blob containers/queues.
2. The Azure account associated with the azure ad attached hasn't been assigned the rbac role.
|
test
|
the qa table displays as broken if the original is an azure ad attached table without assigning rbac role to its associated azure account storage explorer version dev build number branch main platform os windows linux ubuntu macos monterey apple pro reproduce languages all languages architecture how found exploratory testing regression from not a regression steps to reproduce expand one storage account tables create a table copy the url of the table open the connect dialog click table sign in using azure active directory azure ad click next select one azure account not assign rbac role click next paste the url connect the table the attached table auto opens and there is an error info bar indicating authorizationpermissionmismatch right click the attached table click pin to quick access check the qa table shows well expected experience the qa table shows well actual experience the qa table shows as broken here is the error details additional context this issue doesn t reproduce for blob containers queues the azure account associated with the azure ad attached hasn t been assigned the rbac role
| 1
|
108,885
| 9,334,891,083
|
IssuesEvent
|
2019-03-28 17:17:43
|
SNLComputation/Albany
|
https://api.github.com/repos/SNLComputation/Albany
|
closed
|
Compilation errors due to libma.a
|
SCOREC Testing question
|
It seems in the past few days, compilations errors due to routines in the libma.a libray have crept in:
```
[ 97%] Built target BifurcationTest
Scanning dependencies of target MeshComponents
[ 97%] Building CXX object src/LCM/CMakeFiles/MeshComponents.dir/test/utils/MeshComponents.cpp.o
/.../test/TrilinosInstall/lib/libma.a(maShape.cc.o): In function `ma::fixLargeAngleTets(ma::Adapt*)':
maShape.cc:(.text+0x1476): undefined reference to `ma::SingleSplitCollapse::SingleSplitCollapse(ma::Adapt*)'
```
Does anyone know what is this library so we can assign this issue to the right person?
|
1.0
|
Compilation errors due to libma.a - It seems in the past few days, compilations errors due to routines in the libma.a libray have crept in:
```
[ 97%] Built target BifurcationTest
Scanning dependencies of target MeshComponents
[ 97%] Building CXX object src/LCM/CMakeFiles/MeshComponents.dir/test/utils/MeshComponents.cpp.o
/.../test/TrilinosInstall/lib/libma.a(maShape.cc.o): In function `ma::fixLargeAngleTets(ma::Adapt*)':
maShape.cc:(.text+0x1476): undefined reference to `ma::SingleSplitCollapse::SingleSplitCollapse(ma::Adapt*)'
```
Does anyone know what is this library so we can assign this issue to the right person?
|
test
|
compilation errors due to libma a it seems in the past few days compilations errors due to routines in the libma a libray have crept in built target bifurcationtest scanning dependencies of target meshcomponents building cxx object src lcm cmakefiles meshcomponents dir test utils meshcomponents cpp o test trilinosinstall lib libma a mashape cc o in function ma fixlargeangletets ma adapt mashape cc text undefined reference to ma singlesplitcollapse singlesplitcollapse ma adapt does anyone know what is this library so we can assign this issue to the right person
| 1
|
79,622
| 9,921,911,995
|
IssuesEvent
|
2019-06-30 22:36:15
|
cowboy8625/WordRPG
|
https://api.github.com/repos/cowboy8625/WordRPG
|
closed
|
Items
|
design
|
- [ ] Design base classes for weapons
- [ ] Design base classes for armor/wearables
- [ ] Design base classes for consumables (food, potions, etc.)
|
1.0
|
Items - - [ ] Design base classes for weapons
- [ ] Design base classes for armor/wearables
- [ ] Design base classes for consumables (food, potions, etc.)
|
non_test
|
items design base classes for weapons design base classes for armor wearables design base classes for consumables food potions etc
| 0
|
407,898
| 11,938,981,412
|
IssuesEvent
|
2020-04-02 14:34:12
|
BayviewComputerClub/smoothie-web
|
https://api.github.com/repos/BayviewComputerClub/smoothie-web
|
opened
|
Rewrite in kotlin
|
low priority
|
Rewrite in kotlin to reduce the code size, as some of the classes currently look disastrous
Spring integration: https://spring.io/blog/2019/04/12/going-reactive-with-spring-coroutines-and-kotlin-flow
|
1.0
|
Rewrite in kotlin - Rewrite in kotlin to reduce the code size, as some of the classes currently look disastrous
Spring integration: https://spring.io/blog/2019/04/12/going-reactive-with-spring-coroutines-and-kotlin-flow
|
non_test
|
rewrite in kotlin rewrite in kotlin to reduce the code size as some of the classes currently look disastrous spring integration
| 0
|
295,459
| 25,477,681,250
|
IssuesEvent
|
2022-11-25 16:05:50
|
ascopes/java-compiler-testing
|
https://api.github.com/repos/ascopes/java-compiler-testing
|
closed
|
Micronaut acceptance tests
|
testing good first issue help needed 👕 medium
|
Micronaut seems to use the annotation processor API internally.
If possible, we should work out exactly how that gets invoked and add an acceptance test for it!
|
1.0
|
Micronaut acceptance tests - Micronaut seems to use the annotation processor API internally.
If possible, we should work out exactly how that gets invoked and add an acceptance test for it!
|
test
|
micronaut acceptance tests micronaut seems to use the annotation processor api internally if possible we should work out exactly how that gets invoked and add an acceptance test for it
| 1
|
100,567
| 4,098,560,646
|
IssuesEvent
|
2016-06-03 08:54:29
|
OCHA-DAP/hdx-ckan
|
https://api.github.com/repos/OCHA-DAP/hdx-ckan
|
closed
|
Faq: Reuters story
|
new feature Priority-Medium
|
under the question 'Has HDX been featured in the media?'
Reuters - From displacement to death, UN data innovation aims to boost aid response
Hyperlink the title to here: http://af.reuters.com/article/commoditiesNews/idAFL5N18N1U1
|
1.0
|
Faq: Reuters story - under the question 'Has HDX been featured in the media?'
Reuters - From displacement to death, UN data innovation aims to boost aid response
Hyperlink the title to here: http://af.reuters.com/article/commoditiesNews/idAFL5N18N1U1
|
non_test
|
faq reuters story under the question has hdx been featured in the media reuters from displacement to death un data innovation aims to boost aid response hyperlink the title to here
| 0
|
5,285
| 3,551,235,934
|
IssuesEvent
|
2016-01-21 02:05:10
|
ChurchCRM/CRM
|
https://api.github.com/repos/ChurchCRM/CRM
|
closed
|
Some resources load over HTTP (not s)
|
bug build
|
Some resources are hard coded to load over HTTP, not HTTPS. This breaks rendering of Google Maps #38, as well as Charts.
|
1.0
|
Some resources load over HTTP (not s) - Some resources are hard coded to load over HTTP, not HTTPS. This breaks rendering of Google Maps #38, as well as Charts.
|
non_test
|
some resources load over http not s some resources are hard coded to load over http not https this breaks rendering of google maps as well as charts
| 0
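The mixed-content fix this record asks for usually amounts to rewriting hard-coded `http://` resource URLs; here is a one-function sketch (hypothetical helper name, not ChurchCRM's actual patch):

```python
def force_https(url):
    """Upgrade a hard-coded http:// resource URL to https://; other
    schemes and already-secure URLs pass through unchanged."""
    prefix = "http://"
    if url.startswith(prefix):
        return "https://" + url[len(prefix):]
    return url
```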
|
9,943
| 7,042,621,510
|
IssuesEvent
|
2017-12-30 15:45:56
|
dannypsnl/redux
|
https://api.github.com/repos/dannypsnl/redux
|
closed
|
[Drop feature]DispatchC
|
performance TODO wontfix
|
As title, the reason is two API cause confusion.
And concurrency code can enjoy the improvement from Go or computer or System.
So I decide to drop the sequential version.
|
True
|
[Drop feature]DispatchC - As title, the reason is two API cause confusion.
And concurrency code can enjoy the improvement from Go or computer or System.
So I decide to drop the sequential version.
|
non_test
|
dispatchc as title the reason is two api cause confusion and concurrency code can enjoy the improvement from go or computer or system so i decide to drop the sequential version
| 0
|
306,193
| 9,382,046,477
|
IssuesEvent
|
2019-04-04 21:13:11
|
wherebyus/general-tasks
|
https://api.github.com/repos/wherebyus/general-tasks
|
closed
|
The sponsor placement editor does not seem editable
|
Priority: High Product: Newsletters Severity: High Type: Bug
|
## Feature or problem
Describe the feature or problem here.
### Reproduction
An issue with this sponsor placement came up in #1084, where clicking buttons in the editor doesn't do anything ~~(no console errors, nothin')~~. I was able to resolve #1084 by editing the database directly. https://theevergrey.com/wp-admin/post.php?post=25437&action=edit
### Actual behavior
The following chrome error appears:
`An invalid form control with name='_wbu_sponsor_url_support' is not focusable.`
Possible explanation of the issue is here: https://tiffanybbrown.com/2015/11/an-invalid-form-control-is-not-focusable/index.html
### Suggested expected behavior
### Suggested priority
priority-high
### Stakeholders
*Submitted:* michael
### Definition of done
How will we know when this feature is complete?
### Subtasks
A detailed list of changes that need to be made or subtasks. One checkbox per.
- [ ] Brew the coffee
## Developer estimate
To help the team accurately estimate the complexity of this task,
take a moment to walk through this list and estimate each item. At the end, you can total
the estimates and round to the nearest prime number.
If any of these are at a `5` or higher, or if the total is above a `5`, consider breaking
this issue into multiple smaller issues.
- [ ] Changes to the database ()
- [ ] Changes to the API ()
- [ ] Testing Changes to the API ()
- [ ] Changes to Application Code ()
- [ ] Adding or updating unit tests ()
- [ ] Local developer testing ()
### Total developer estimate: 0
## Additional estimate
- [ ] Code review ()
- [ ] QA Testing ()
- [ ] Stakeholder Sign-off ()
- [ ] Deploy to Production ()
### Total additional estimate: 0
## QA Notes
Detailed instructions for testing, one checkbox per test to be completed.
### Contextual tests
- [ ] Accessibility check
- [ ] Cross-browser check (Edge, Chrome, Firefox)
- [ ] Responsive check
|
1.0
|
The sponsor placement editor does not seem editable - ## Feature or problem
Describe the feature or problem here.
### Reproduction
An issue with this sponsor placement came up in #1084, where clicking buttons in the editor doesn't do anything ~~(no console errors, nothin')~~. I was able to resolve #1084 by editing the database directly. https://theevergrey.com/wp-admin/post.php?post=25437&action=edit
### Actual behavior
The following chrome error appears:
`An invalid form control with name='_wbu_sponsor_url_support' is not focusable.`
Possible explanation of the issue is here: https://tiffanybbrown.com/2015/11/an-invalid-form-control-is-not-focusable/index.html
### Suggested expected behavior
### Suggested priority
priority-high
### Stakeholders
*Submitted:* michael
### Definition of done
How will we know when this feature is complete?
### Subtasks
A detailed list of changes that need to be made or subtasks. One checkbox per.
- [ ] Brew the coffee
## Developer estimate
To help the team accurately estimate the complexity of this task,
take a moment to walk through this list and estimate each item. At the end, you can total
the estimates and round to the nearest prime number.
If any of these are at a `5` or higher, or if the total is above a `5`, consider breaking
this issue into multiple smaller issues.
- [ ] Changes to the database ()
- [ ] Changes to the API ()
- [ ] Testing Changes to the API ()
- [ ] Changes to Application Code ()
- [ ] Adding or updating unit tests ()
- [ ] Local developer testing ()
### Total developer estimate: 0
## Additional estimate
- [ ] Code review ()
- [ ] QA Testing ()
- [ ] Stakeholder Sign-off ()
- [ ] Deploy to Production ()
### Total additional estimate: 0
## QA Notes
Detailed instructions for testing, one checkbox per test to be completed.
### Contextual tests
- [ ] Accessibility check
- [ ] Cross-browser check (Edge, Chrome, Firefox)
- [ ] Responsive check
|
non_test
|
the sponsor placement editor does not seem editable feature or problem describe the feature or problem here reproduction an issue with this sponsor placement came up in where clicking buttons in the editor doesn t do anything no console errors nothin i was able to resolve by editing the database directly actual behavior the following chrome error appears an invalid form control with name wbu sponsor url support is not focusable possible explanation of the issue is here suggested expected behavior suggested priority priority high stakeholders submitted michael definition of done how will we know when this feature is complete subtasks a detailed list of changes that need to be made or subtasks one checkbox per brew the coffee developer estimate to help the team accurately estimate the complexity of this task take a moment to walk through this list and estimate each item at the end you can total the estimates and round to the nearest prime number if any of these are at a or higher or if the total is above a consider breaking this issue into multiple smaller issues changes to the database changes to the api testing changes to the api changes to application code adding or updating unit tests local developer testing total developer estimate additional estimate code review qa testing stakeholder sign off deploy to production total additional estimate qa notes detailed instructions for testing one checkbox per test to be completed contextual tests accessibility check cross browser check edge chrome firefox responsive check
| 0
|
170,632
| 20,883,789,096
|
IssuesEvent
|
2022-03-23 01:13:04
|
snowdensb/dependabot-core
|
https://api.github.com/repos/snowdensb/dependabot-core
|
reopened
|
CVE-2019-13116 (High) detected in commons-collections-3.2.1.jar
|
security vulnerability
|
## CVE-2019-13116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.1.jar</b></p></summary>
<p>Types that extend and augment the Java Collections Framework.</p>
<p>Path to dependency file: /maven/spec/fixtures/poms/nested_property_url_pom.xml</p>
<p>Path to vulnerable library: /20210723183946_MRSDVV/downloadResource_VGXKRV/20210723190730/commons-collections-3.2.1.jar,/20210723183946_MRSDVV/downloadResource_VGXKRV/20210723190717/commons-collections-3.2.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-collections-3.2.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/dependabot-core/commit/ba8cd9078c8ce0cb202767d627706711237abf71">ba8cd9078c8ce0cb202767d627706711237abf71</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The MuleSoft Mule Community Edition runtime engine before 3.8 allows remote attackers to execute arbitrary code because of Java Deserialization, related to Apache Commons Collections
<p>Publish Date: 2019-10-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-13116>CVE-2019-13116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-13116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-13116</a></p>
<p>Release Date: 2019-10-29</p>
<p>Fix Resolution: commons-collections:commons-collections:3.2.2</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-collections","packageName":"commons-collections","packageVersion":"3.2.1","packageFilePaths":["/maven/spec/fixtures/poms/nested_property_url_pom.xml"],"isTransitiveDependency":false,"dependencyTree":"commons-collections:commons-collections:3.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-collections:commons-collections:3.2.2","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2019-13116","vulnerabilityDetails":"The MuleSoft Mule Community Edition runtime engine before 3.8 allows remote attackers to execute arbitrary code because of Java Deserialization, related to Apache Commons Collections","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-13116","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2019-13116 (High) detected in commons-collections-3.2.1.jar - ## CVE-2019-13116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.1.jar</b></p></summary>
<p>Types that extend and augment the Java Collections Framework.</p>
<p>Path to dependency file: /maven/spec/fixtures/poms/nested_property_url_pom.xml</p>
<p>Path to vulnerable library: /20210723183946_MRSDVV/downloadResource_VGXKRV/20210723190730/commons-collections-3.2.1.jar,/20210723183946_MRSDVV/downloadResource_VGXKRV/20210723190717/commons-collections-3.2.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-collections-3.2.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/dependabot-core/commit/ba8cd9078c8ce0cb202767d627706711237abf71">ba8cd9078c8ce0cb202767d627706711237abf71</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The MuleSoft Mule Community Edition runtime engine before 3.8 allows remote attackers to execute arbitrary code because of Java Deserialization, related to Apache Commons Collections
<p>Publish Date: 2019-10-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-13116>CVE-2019-13116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-13116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-13116</a></p>
<p>Release Date: 2019-10-29</p>
<p>Fix Resolution: commons-collections:commons-collections:3.2.2</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-collections","packageName":"commons-collections","packageVersion":"3.2.1","packageFilePaths":["/maven/spec/fixtures/poms/nested_property_url_pom.xml"],"isTransitiveDependency":false,"dependencyTree":"commons-collections:commons-collections:3.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-collections:commons-collections:3.2.2","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2019-13116","vulnerabilityDetails":"The MuleSoft Mule Community Edition runtime engine before 3.8 allows remote attackers to execute arbitrary code because of Java Deserialization, related to Apache Commons Collections","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-13116","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in commons collections jar cve high severity vulnerability vulnerable library commons collections jar types that extend and augment the java collections framework path to dependency file maven spec fixtures poms nested property url pom xml path to vulnerable library mrsdvv downloadresource vgxkrv commons collections jar mrsdvv downloadresource vgxkrv commons collections jar dependency hierarchy x commons collections jar vulnerable library found in head commit a href found in base branch main vulnerability details the mulesoft mule community edition runtime engine before allows remote attackers to execute arbitrary code because of java deserialization related to apache commons collections publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution commons collections commons collections rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree commons collections commons collections isminimumfixversionavailable true minimumfixversion commons collections commons collections isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the mulesoft mule community edition runtime engine before allows remote attackers to execute arbitrary code because of java deserialization related to apache commons collections vulnerabilityurl
| 0
|
464,278
| 13,309,437,124
|
IssuesEvent
|
2020-08-26 04:01:16
|
CHOMPStation2/CHOMPStation2
|
https://api.github.com/repos/CHOMPStation2/CHOMPStation2
|
reopened
|
Updating the Drone Fabricators used on map
|
Low Priority Map Edit
|
#### Brief overview of the feature
Replacing the old Drone Fabricators with the unify Variants.
/obj/machinery/drone_fabricator/unify
#### What you want to happen
Replacing the old Drone Fabricators with the unify Variants.
/obj/machinery/drone_fabricator/unify
|
1.0
|
Updating the Drone Fabricators used on map - #### Brief overview of the feature
Replacing the old Drone Fabricators with the unify Variants.
/obj/machinery/drone_fabricator/unify
#### What you want to happen
Replacing the old Drone Fabricators with the unify Variants.
/obj/machinery/drone_fabricator/unify
|
non_test
|
updating the drone fabricators used on map brief overview of the feature replacing the old drone fabricators with the unify variants obj machinery drone fabricator unify what you want to happen replacing the old drone fabricators with the unify variants obj machinery drone fabricator unify
| 0
|
150,723
| 11,982,516,125
|
IssuesEvent
|
2020-04-07 13:05:29
|
Coderockr/backstage
|
https://api.github.com/repos/Coderockr/backstage
|
opened
|
Biblioteca de componentes gráficos Nivo.Rocks
|
component frontend not tested yet react
|
Link: https://nivo.rocks/components
<img width="1265" alt="Screen Shot 2020-04-07 at 10 05 07" src="https://user-images.githubusercontent.com/2267327/78672613-49291980-78b7-11ea-8b26-9f43ecf6384c.png">
|
1.0
|
Biblioteca de componentes gráficos Nivo.Rocks - Link: https://nivo.rocks/components
<img width="1265" alt="Screen Shot 2020-04-07 at 10 05 07" src="https://user-images.githubusercontent.com/2267327/78672613-49291980-78b7-11ea-8b26-9f43ecf6384c.png">
|
test
|
biblioteca de componentes gráficos nivo rocks link img width alt screen shot at src
| 1
|
1,758
| 3,102,668,416
|
IssuesEvent
|
2015-08-31 01:55:08
|
ember-cli/ember-cli
|
https://api.github.com/repos/ember-cli/ember-cli
|
opened
|
extra IO during rebuilds
|
pending-dependency-fix performance
|
this manifests as:
- slower rebuilds
- slightly slower initial builds
- some watchman recrawls
upstream issue to track the primary effort https://github.com/broccolijs/broccoli/issues/278
|
True
|
extra IO during rebuilds - this manifests as:
- slower rebuilds
- slightly slower initial builds
- some watchman recrawls
upstream issue to track the primary effort https://github.com/broccolijs/broccoli/issues/278
|
non_test
|
extra io during rebuilds this manifests as slower rebuilds slightly slower initial builds some watchman recrawls upstream issue to track the primary effort
| 0
|
535,833
| 15,699,509,982
|
IssuesEvent
|
2021-03-26 08:35:45
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
opened
|
The stellar check_stellar_services makes an invalid request flooding the tokenservice logs
|
priority_major
|
### Description
An http get request is done on the transaction funding service:
https://github.com/threefoldtech/js-sdk/blob/0203efe381af8da887073c6bdcf0f1450453256f/jumpscale/clients/stellar/__init__.py#L50
The actor method fails because a required argument ( transaction is missing
Please do an options request.
### Traceback/Logs/Alerts
tftservices logs are flooded with:
```
File "/root/sandbox/var/downloaded_packages/threefoldfoundation_tft-stellar_transactionfunding_service_master/tft-stellar/ThreeBotPackages/transactionfunding_service/actors/transactionfunding_service.py", line 55, in fund_transaction
raise j.exceptions.Value(f"missing a required argument: 'transaction'")
│ └ <property object at 0x7fbe03584e50>
└ <jumpscale.loader.J object at 0x7fbe03587250>
jumpscale.core.exceptions.exceptions.Value: missing a required argument: 'transaction'
2021-03-26 09:32:02.437 | INFO | jumpscale.servers.gedis.server:_on_connection:269 - Executing method fund_transaction from actor transactionfunding_service_transactionfunding_service to client ('127.0.0.1', 35722)
127.0.0.1 - - [2021-03-26 09:32:02] "GET /transactionfunding_service/transactionfunding_service/fund_transaction HTTP/1.0" 400 378 0.015945
2021-03-26 09:32:02.442 | ERROR | jumpscale.servers.gedis.server:_execute:231 - error while executing <bound method Transactionfunding_service.fund_transaction of <transactionfunding_service.Transactionfunding_service object at 0x7fbdff103310>>
Traceback (most recent call last):
File "/root/.cache/pypoetry/virtualenvs/js-sdk-N5daMEp6-py3.8/lib/python3.8/site-packages/gevent/baseserver.py", line 26, in _handle_and_close_when_done
return handle(*args_tuple)
│ └ (<gevent._socket3.socket object, fd=12, family=2, type=1, proto=0>, ('127.0.0.1', 35722))
└ <bound method GedisServer._on_connection of GedisServer(
instance_name='threebot',
host='127.0.0.1',
port=16000,
enab...
File "/root/.cache/pypoetry/virtualenvs/js-sdk-N5daMEp6-py3.8/lib/python3.8/site-packages/jumpscale/servers/gedis/server.py", line 279, in _on_connection
result = self._execute(method, args, kwargs)
│ │ │ │ │ └ {}
│ │ │ │ └ []
│ │ │ └ <bound method Transactionfunding_service.fund_transaction of <transactionfunding_service.Transactionfunding_service object at...
│ │ └ <function GedisServer._execute at 0x7fbe0246f040>
│ └ GedisServer(
│ instance_name='threebot',
│ host='127.0.0.1',
│ port=16000,
│ enable_system_actor=True,
│ run_async=True,
│ _a...
└ {'error': "missing a required argument: 'transaction'", 'error_type': 1}
> File "/root/.cache/pypoetry/virtualenvs/js-sdk-N5daMEp6-py3.8/lib/python3.8/site-packages/jumpscale/servers/gedis/server.py", line 228, in _execute
response["result"] = method(*args, **kwargs)
│ │ │ └ {}
│ │ └ []
│ └ <bound method Transactionfunding_service.fund_transaction of <transactionfunding_service.Transactionfunding_service object at...
└ {}
File "/root/.cache/pypoetry/virtualenvs/js-sdk-N5daMEp6-py3.8/lib/python3.8/site-packages/jumpscale/servers/gedis/baseactor.py", line 25, in wrapper
result = func(*bound.args, **bound.kwargs)
│ │ │ │ └ <property object at 0x7fbe03efd9f0>
│ │ │ └ <BoundArguments (self=<transactionfunding_service.Transactionfunding_service object at 0x7fbdff103310>)>
│ │ └ <property object at 0x7fbe03efd9a0>
│ └ <BoundArguments (self=<transactionfunding_service.Transactionfunding_service object at 0x7fbdff103310>)>
└ <function Transactionfunding_service.fund_transaction at 0x7fbdff0ff820>
File "/root/sandbox/var/downloaded_packages/threefoldfoundation_tft-stellar_transactionfunding_service_master/tft-stellar/ThreeBotPackages/transactionfunding_service/actors/transactionfunding_service.py", line 55, in fund_transaction
raise j.exceptions.Value(f"missing a required argument: 'transaction'")
│ └ <property object at 0x7fbe03584e50>
└ <jumpscale.loader.J object at 0x7fbe03587250>
jumpscale.core.exceptions.exceptions.Value: missing a required argument: 'transaction'
127.0.0.1 - - [2021-03-26 09:32:02] "GET /transactionfunding_service/transactionfunding_service/fund_transaction HTTP/1.0" 400 378 0.010578
1
```
|
1.0
|
The stellar check_stellar_services makes an invalid request flooding the tokenservice logs - ### Description
An http get request is done on the transaction funding service:
https://github.com/threefoldtech/js-sdk/blob/0203efe381af8da887073c6bdcf0f1450453256f/jumpscale/clients/stellar/__init__.py#L50
The actor method fails because a required argument ( transaction is missing
Please do an options request.
### Traceback/Logs/Alerts
tftservices logs are flooded with:
```
File "/root/sandbox/var/downloaded_packages/threefoldfoundation_tft-stellar_transactionfunding_service_master/tft-stellar/ThreeBotPackages/transactionfunding_service/actors/transactionfunding_service.py", line 55, in fund_transaction
raise j.exceptions.Value(f"missing a required argument: 'transaction'")
│ └ <property object at 0x7fbe03584e50>
└ <jumpscale.loader.J object at 0x7fbe03587250>
jumpscale.core.exceptions.exceptions.Value: missing a required argument: 'transaction'
2021-03-26 09:32:02.437 | INFO | jumpscale.servers.gedis.server:_on_connection:269 - Executing method fund_transaction from actor transactionfunding_service_transactionfunding_service to client ('127.0.0.1', 35722)
127.0.0.1 - - [2021-03-26 09:32:02] "GET /transactionfunding_service/transactionfunding_service/fund_transaction HTTP/1.0" 400 378 0.015945
2021-03-26 09:32:02.442 | ERROR | jumpscale.servers.gedis.server:_execute:231 - error while executing <bound method Transactionfunding_service.fund_transaction of <transactionfunding_service.Transactionfunding_service object at 0x7fbdff103310>>
Traceback (most recent call last):
File "/root/.cache/pypoetry/virtualenvs/js-sdk-N5daMEp6-py3.8/lib/python3.8/site-packages/gevent/baseserver.py", line 26, in _handle_and_close_when_done
return handle(*args_tuple)
│ └ (<gevent._socket3.socket object, fd=12, family=2, type=1, proto=0>, ('127.0.0.1', 35722))
└ <bound method GedisServer._on_connection of GedisServer(
instance_name='threebot',
host='127.0.0.1',
port=16000,
enab...
File "/root/.cache/pypoetry/virtualenvs/js-sdk-N5daMEp6-py3.8/lib/python3.8/site-packages/jumpscale/servers/gedis/server.py", line 279, in _on_connection
result = self._execute(method, args, kwargs)
│ │ │ │ │ └ {}
│ │ │ │ └ []
│ │ │ └ <bound method Transactionfunding_service.fund_transaction of <transactionfunding_service.Transactionfunding_service object at...
│ │ └ <function GedisServer._execute at 0x7fbe0246f040>
│ └ GedisServer(
│ instance_name='threebot',
│ host='127.0.0.1',
│ port=16000,
│ enable_system_actor=True,
│ run_async=True,
│ _a...
└ {'error': "missing a required argument: 'transaction'", 'error_type': 1}
> File "/root/.cache/pypoetry/virtualenvs/js-sdk-N5daMEp6-py3.8/lib/python3.8/site-packages/jumpscale/servers/gedis/server.py", line 228, in _execute
response["result"] = method(*args, **kwargs)
│ │ │ └ {}
│ │ └ []
│ └ <bound method Transactionfunding_service.fund_transaction of <transactionfunding_service.Transactionfunding_service object at...
└ {}
File "/root/.cache/pypoetry/virtualenvs/js-sdk-N5daMEp6-py3.8/lib/python3.8/site-packages/jumpscale/servers/gedis/baseactor.py", line 25, in wrapper
result = func(*bound.args, **bound.kwargs)
│ │ │ │ └ <property object at 0x7fbe03efd9f0>
│ │ │ └ <BoundArguments (self=<transactionfunding_service.Transactionfunding_service object at 0x7fbdff103310>)>
│ │ └ <property object at 0x7fbe03efd9a0>
│ └ <BoundArguments (self=<transactionfunding_service.Transactionfunding_service object at 0x7fbdff103310>)>
└ <function Transactionfunding_service.fund_transaction at 0x7fbdff0ff820>
File "/root/sandbox/var/downloaded_packages/threefoldfoundation_tft-stellar_transactionfunding_service_master/tft-stellar/ThreeBotPackages/transactionfunding_service/actors/transactionfunding_service.py", line 55, in fund_transaction
raise j.exceptions.Value(f"missing a required argument: 'transaction'")
│ └ <property object at 0x7fbe03584e50>
└ <jumpscale.loader.J object at 0x7fbe03587250>
jumpscale.core.exceptions.exceptions.Value: missing a required argument: 'transaction'
127.0.0.1 - - [2021-03-26 09:32:02] "GET /transactionfunding_service/transactionfunding_service/fund_transaction HTTP/1.0" 400 378 0.010578
1
```
|
non_test
|
the stellar check stellar services makes an invalid request flooding the tokenservice logs description an http get request is done on the transaction funding service the actor method fails because a required argument transaction is missing please do an options request traceback logs alerts tftservices logs are flooded with file root sandbox var downloaded packages threefoldfoundation tft stellar transactionfunding service master tft stellar threebotpackages transactionfunding service actors transactionfunding service py line in fund transaction raise j exceptions value f missing a required argument transaction │ └ └ jumpscale core exceptions exceptions value missing a required argument transaction info jumpscale servers gedis server on connection executing method fund transaction from actor transactionfunding service transactionfunding service to client get transactionfunding service transactionfunding service fund transaction http error jumpscale servers gedis server execute error while executing traceback most recent call last file root cache pypoetry virtualenvs js sdk lib site packages gevent baseserver py line in handle and close when done return handle args tuple │ └ └ bound method gedisserver on connection of gedisserver instance name threebot host port enab file root cache pypoetry virtualenvs js sdk lib site packages jumpscale servers gedis server py line in on connection result self execute method args kwargs │ │ │ │ │ └ │ │ │ │ └ │ │ │ └ bound method transactionfunding service fund transaction of transactionfunding service transactionfunding service object at │ │ └ │ └ gedisserver │ instance name threebot │ host │ port │ enable system actor true │ run async true │ a └ error missing a required argument transaction error type file root cache pypoetry virtualenvs js sdk lib site packages jumpscale servers gedis server py line in execute response method args kwargs │ │ │ └ │ │ └ │ └ bound method transactionfunding service fund transaction of transactionfunding service transactionfunding service object at └ file root cache pypoetry virtualenvs js sdk lib site packages jumpscale servers gedis baseactor py line in wrapper result func bound args bound kwargs │ │ │ │ └ │ │ │ └ │ │ └ │ └ └ file root sandbox var downloaded packages threefoldfoundation tft stellar transactionfunding service master tft stellar threebotpackages transactionfunding service actors transactionfunding service py line in fund transaction raise j exceptions value f missing a required argument transaction │ └ └ jumpscale core exceptions exceptions value missing a required argument transaction get transactionfunding service transactionfunding service fund transaction http
| 0
|
293
| 2,732,218,935
|
IssuesEvent
|
2015-04-17 03:00:25
|
mitchellh/packer
|
https://api.github.com/repos/mitchellh/packer
|
closed
|
Atlas provisioner fails when using vmware and virtualbox as provisioners
|
bug post-processor/atlas post-processor/vagrant
|
```shell
virtualbox-iso (vagrant-cloud): Box accessible and matches tag
==> virtualbox-iso (vagrant-cloud): Creating version: 0.1
==> virtualbox-iso (vagrant-cloud): Creating provider: virtualbox
==> virtualbox-iso (vagrant-cloud): Preparing upload of box: centos7_64virtualbox.box
virtualbox-iso (vagrant-cloud): Box upload prepared with token 31564a9c-b1fc-4b6e-a3f2-ea6fdcf2e56b
==> virtualbox-iso (vagrant-cloud): Uploading box: centos7_64virtualbox.box
virtualbox-iso (vagrant-cloud): Depending on your internet connection and the size of the box, this may take some time
vmware-iso (vagrant): Compressing: disk-s007.vmdk
vmware-iso (vagrant): Compressing: disk-s008.vmdk
virtualbox-iso (vagrant-cloud): Box succesfully uploaded
==> virtualbox-iso (vagrant-cloud): Verifying provider upload: virtualbox
virtualbox-iso (vagrant-cloud): Waiting for upload token match
virtualbox-iso (vagrant-cloud): Upload succesfully verified with token 31564a9c-b1fc-4b6e-a3f2-ea6fdcf2e56b
==> virtualbox-iso (vagrant-cloud): Releasing version: 0.1
virtualbox-iso (vagrant-cloud): Version successfully released and available
Build 'virtualbox-iso' finished.
vmware-iso (vagrant): Compressing: disk-s009.vmdk
vmware-iso (vagrant): Compressing: disk-s010.vmdk
vmware-iso (vagrant): Compressing: disk-s011.vmdk
vmware-iso (vagrant): Compressing: disk.vmdk
vmware-iso (vagrant): Compressing: metadata.json
==> vmware-iso: Running post-processor: vagrant-cloud
==> vmware-iso (vagrant-cloud): Verifying box is accessible: lmayorga1980/centos7-puppet
vmware-iso (vagrant-cloud): Box accessible and matches tag
==> vmware-iso (vagrant-cloud): Creating version: 0.1
vmware-iso (vagrant-cloud): Version exists, skipping creation
==> vmware-iso (vagrant-cloud): Creating provider: vmware_desktop
==> vmware-iso (vagrant-cloud): Preparing upload of box: centos7_64vmware.box
vmware-iso (vagrant-cloud): Box upload prepared with token 0739db2a-390a-4c27-a57e-0f57a03f1366
==> vmware-iso (vagrant-cloud): Uploading box: centos7_64vmware.box
vmware-iso (vagrant-cloud): Depending on your internet connection and the size of the box, this may take some time
vmware-iso (vagrant-cloud): Box succesfully uploaded
==> vmware-iso (vagrant-cloud): Verifying provider upload: vmware_desktop
vmware-iso (vagrant-cloud): Waiting for upload token match
vmware-iso (vagrant-cloud): Upload succesfully verified with token 0739db2a-390a-4c27-a57e-0f57a03f1366
==> vmware-iso (vagrant-cloud): Releasing version: 0.1
==> vmware-iso (vagrant-cloud): Cleaning up provider
vmware-iso (vagrant-cloud): Deleting provider: vmware_desktop
vmware-iso (vagrant-cloud): Version was not created or previously existed, not deleting
Build 'vmware-iso' errored: 1 error(s) occurred:
* Post-processor failed: Error releasing version: base Version has already been released
```
|
2.0
|
Atlas provisioner fails when using vmware and virtualbox as provisioners - ```shell
virtualbox-iso (vagrant-cloud): Box accessible and matches tag
==> virtualbox-iso (vagrant-cloud): Creating version: 0.1
==> virtualbox-iso (vagrant-cloud): Creating provider: virtualbox
==> virtualbox-iso (vagrant-cloud): Preparing upload of box: centos7_64virtualbox.box
virtualbox-iso (vagrant-cloud): Box upload prepared with token 31564a9c-b1fc-4b6e-a3f2-ea6fdcf2e56b
==> virtualbox-iso (vagrant-cloud): Uploading box: centos7_64virtualbox.box
virtualbox-iso (vagrant-cloud): Depending on your internet connection and the size of the box, this may take some time
vmware-iso (vagrant): Compressing: disk-s007.vmdk
vmware-iso (vagrant): Compressing: disk-s008.vmdk
virtualbox-iso (vagrant-cloud): Box succesfully uploaded
==> virtualbox-iso (vagrant-cloud): Verifying provider upload: virtualbox
virtualbox-iso (vagrant-cloud): Waiting for upload token match
virtualbox-iso (vagrant-cloud): Upload succesfully verified with token 31564a9c-b1fc-4b6e-a3f2-ea6fdcf2e56b
==> virtualbox-iso (vagrant-cloud): Releasing version: 0.1
virtualbox-iso (vagrant-cloud): Version successfully released and available
Build 'virtualbox-iso' finished.
vmware-iso (vagrant): Compressing: disk-s009.vmdk
vmware-iso (vagrant): Compressing: disk-s010.vmdk
vmware-iso (vagrant): Compressing: disk-s011.vmdk
vmware-iso (vagrant): Compressing: disk.vmdk
vmware-iso (vagrant): Compressing: metadata.json
==> vmware-iso: Running post-processor: vagrant-cloud
==> vmware-iso (vagrant-cloud): Verifying box is accessible: lmayorga1980/centos7-puppet
vmware-iso (vagrant-cloud): Box accessible and matches tag
==> vmware-iso (vagrant-cloud): Creating version: 0.1
vmware-iso (vagrant-cloud): Version exists, skipping creation
==> vmware-iso (vagrant-cloud): Creating provider: vmware_desktop
==> vmware-iso (vagrant-cloud): Preparing upload of box: centos7_64vmware.box
vmware-iso (vagrant-cloud): Box upload prepared with token 0739db2a-390a-4c27-a57e-0f57a03f1366
==> vmware-iso (vagrant-cloud): Uploading box: centos7_64vmware.box
vmware-iso (vagrant-cloud): Depending on your internet connection and the size of the box, this may take some time
vmware-iso (vagrant-cloud): Box succesfully uploaded
==> vmware-iso (vagrant-cloud): Verifying provider upload: vmware_desktop
vmware-iso (vagrant-cloud): Waiting for upload token match
vmware-iso (vagrant-cloud): Upload succesfully verified with token 0739db2a-390a-4c27-a57e-0f57a03f1366
==> vmware-iso (vagrant-cloud): Releasing version: 0.1
==> vmware-iso (vagrant-cloud): Cleaning up provider
vmware-iso (vagrant-cloud): Deleting provider: vmware_desktop
vmware-iso (vagrant-cloud): Version was not created or previously existed, not deleting
Build 'vmware-iso' errored: 1 error(s) occurred:
* Post-processor failed: Error releasing version: base Version has already been released
```
|
non_test
|
atlas provisioner fails when using vmware and virtualbox as provisioners shell virtualbox iso vagrant cloud box accessible and matches tag virtualbox iso vagrant cloud creating version virtualbox iso vagrant cloud creating provider virtualbox virtualbox iso vagrant cloud preparing upload of box box virtualbox iso vagrant cloud box upload prepared with token virtualbox iso vagrant cloud uploading box box virtualbox iso vagrant cloud depending on your internet connection and the size of the box this may take some time vmware iso vagrant compressing disk vmdk vmware iso vagrant compressing disk vmdk virtualbox iso vagrant cloud box succesfully uploaded virtualbox iso vagrant cloud verifying provider upload virtualbox virtualbox iso vagrant cloud waiting for upload token match virtualbox iso vagrant cloud upload succesfully verified with token virtualbox iso vagrant cloud releasing version virtualbox iso vagrant cloud version successfully released and available build virtualbox iso finished vmware iso vagrant compressing disk vmdk vmware iso vagrant compressing disk vmdk vmware iso vagrant compressing disk vmdk vmware iso vagrant compressing disk vmdk vmware iso vagrant compressing metadata json vmware iso running post processor vagrant cloud vmware iso vagrant cloud verifying box is accessible puppet vmware iso vagrant cloud box accessible and matches tag vmware iso vagrant cloud creating version vmware iso vagrant cloud version exists skipping creation vmware iso vagrant cloud creating provider vmware desktop vmware iso vagrant cloud preparing upload of box box vmware iso vagrant cloud box upload prepared with token vmware iso vagrant cloud uploading box box vmware iso vagrant cloud depending on your internet connection and the size of the box this may take some time vmware iso vagrant cloud box succesfully uploaded vmware iso vagrant cloud verifying provider upload vmware desktop vmware iso vagrant cloud waiting for upload token match vmware iso vagrant cloud upload 
succesfully verified with token vmware iso vagrant cloud releasing version vmware iso vagrant cloud cleaning up provider vmware iso vagrant cloud deleting provider vmware desktop vmware iso vagrant cloud version was not created or previously existed not deleting build vmware iso errored error s occurred post processor failed error releasing version base version has already been released
| 0
|
127,915
| 10,499,996,926
|
IssuesEvent
|
2019-09-26 09:33:08
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
closed
|
[CI] org.elasticsearch.xpack.ml.integration.RunDataFrameAnalyticsIT.testStopOutlierDetectionWithEnoughDocumentsToScroll fails on 7.4
|
:ml >test-failure
|
org.elasticsearch.xpack.ml.integration.RunDataFrameAnalyticsIT.testStopOutlierDetectionWithEnoughDocumentsToScroll failed on 7.4:
Logs: https://gradle-enterprise.elastic.co/s/wcgoihe5e5cnw/console-log?task=:x-pack:plugin:ml:qa:native-multi-node-tests:integTestRunner
Cannot reproduce locally with:
```
./gradlew ':x-pack:plugin:ml:qa:native-multi-node-tests:integTestRunner' --tests "org.elasticsearch.xpack.ml.integration.RunDataFrameAnalyticsIT.testStopOutlierDetectionWithEnoughDocumentsToScroll" \
-Dtests.seed=2E7756CD3B416807 \
-Dtests.security.manager=true \
-Dtests.locale=sah \
-Dtests.timezone=Atlantic/Faroe \
-Dcompiler.java=12 \
-Druntime.java=11
```
Stacktrace:
```
org.elasticsearch.xpack.ml.integration.RunDataFrameAnalyticsIT > testStopOutlierDetectionWithEnoughDocumentsToScroll FAILED
--
java.lang.AssertionError:
Expected: <0L>
but: was <1L>
at __randomizedtesting.SeedInfo.seed([2E7756CD3B416807:BE86A3E9BC1CEF3D]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at org.elasticsearch.xpack.ml.integration.MlNativeDataFrameAnalyticsIntegTestCase.cleanUpAnalytics(MlNativeDataFrameAnalyticsIntegTestCase.java:54)
at org.elasticsearch.xpack.ml.integration.MlNativeDataFrameAnalyticsIntegTestCase.cleanUpResources(MlNativeDataFrameAnalyticsIntegTestCase.java:45)
at org.elasticsearch.xpack.ml.integration.MlNativeIntegTestCase.cleanUp(MlNativeIntegTestCase.java:98)
at org.elasticsearch.xpack.ml.integration.RunDataFrameAnalyticsIT.cleanup(RunDataFrameAnalyticsIT.java:46)
```
Seems the same as: #46705 but the fix was already backported to 7.4
|
1.0
|
[CI] org.elasticsearch.xpack.ml.integration.RunDataFrameAnalyticsIT.testStopOutlierDetectionWithEnoughDocumentsToScroll fails on 7.4 - org.elasticsearch.xpack.ml.integration.RunDataFrameAnalyticsIT.testStopOutlierDetectionWithEnoughDocumentsToScroll failed on 7.4:
Logs: https://gradle-enterprise.elastic.co/s/wcgoihe5e5cnw/console-log?task=:x-pack:plugin:ml:qa:native-multi-node-tests:integTestRunner
Cannot reproduce locally with:
```
./gradlew ':x-pack:plugin:ml:qa:native-multi-node-tests:integTestRunner' --tests "org.elasticsearch.xpack.ml.integration.RunDataFrameAnalyticsIT.testStopOutlierDetectionWithEnoughDocumentsToScroll" \
-Dtests.seed=2E7756CD3B416807 \
-Dtests.security.manager=true \
-Dtests.locale=sah \
-Dtests.timezone=Atlantic/Faroe \
-Dcompiler.java=12 \
-Druntime.java=11
```
Stacktrace:
```
org.elasticsearch.xpack.ml.integration.RunDataFrameAnalyticsIT > testStopOutlierDetectionWithEnoughDocumentsToScroll FAILED
--
java.lang.AssertionError:
Expected: <0L>
but: was <1L>
at __randomizedtesting.SeedInfo.seed([2E7756CD3B416807:BE86A3E9BC1CEF3D]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at org.elasticsearch.xpack.ml.integration.MlNativeDataFrameAnalyticsIntegTestCase.cleanUpAnalytics(MlNativeDataFrameAnalyticsIntegTestCase.java:54)
at org.elasticsearch.xpack.ml.integration.MlNativeDataFrameAnalyticsIntegTestCase.cleanUpResources(MlNativeDataFrameAnalyticsIntegTestCase.java:45)
at org.elasticsearch.xpack.ml.integration.MlNativeIntegTestCase.cleanUp(MlNativeIntegTestCase.java:98)
at org.elasticsearch.xpack.ml.integration.RunDataFrameAnalyticsIT.cleanup(RunDataFrameAnalyticsIT.java:46)
```
Seems the same as: #46705 but the fix was already backported to 7.4
|
test
|
org elasticsearch xpack ml integration rundataframeanalyticsit teststopoutlierdetectionwithenoughdocumentstoscroll fails on org elasticsearch xpack ml integration rundataframeanalyticsit teststopoutlierdetectionwithenoughdocumentstoscroll failed on logs cannot reproduce locally with gradlew x pack plugin ml qa native multi node tests integtestrunner tests org elasticsearch xpack ml integration rundataframeanalyticsit teststopoutlierdetectionwithenoughdocumentstoscroll dtests seed dtests security manager true dtests locale sah dtests timezone atlantic faroe dcompiler java druntime java stacktrace org elasticsearch xpack ml integration rundataframeanalyticsit teststopoutlierdetectionwithenoughdocumentstoscroll failed java lang assertionerror expected but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org junit assert assertthat assert java at org elasticsearch xpack ml integration mlnativedataframeanalyticsintegtestcase cleanupanalytics mlnativedataframeanalyticsintegtestcase java at org elasticsearch xpack ml integration mlnativedataframeanalyticsintegtestcase cleanupresources mlnativedataframeanalyticsintegtestcase java at org elasticsearch xpack ml integration mlnativeintegtestcase cleanup mlnativeintegtestcase java at org elasticsearch xpack ml integration rundataframeanalyticsit cleanup rundataframeanalyticsit java seems the same as but the fix was already backported to
| 1
|
7,887
| 2,938,965,069
|
IssuesEvent
|
2015-07-01 14:02:18
|
bem-incubator/bem-deps
|
https://api.github.com/repos/bem-incubator/bem-deps
|
closed
|
Resolve spec -> unordered dependencies recommended ordering
|
proj:deps ready test
|
Section describes recommended ordering for dependencies which ordering was not specified.
|
1.0
|
Resolve spec -> unordered dependencies recommended ordering - Section describes recommended ordering for dependencies which ordering was not specified.
|
test
|
resolve spec unordered dependencies recommended ordering section describes recommended ordering for dependencies which ordering was not specified
| 1
|
37,361
| 5,114,215,536
|
IssuesEvent
|
2017-01-06 17:44:28
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
opened
|
Fail to resolve kubernetes.default [kvm]
|
driver/kvm tests/integration
|
```
Temporary Error: Temporary Error: Error running command. Error exit status 1. Output: nslookup: can't resolve 'kubernetes.default'
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Temporary Error: Error running command. Error exit status 1. Output: nslookup: can't resolve 'kubernetes.default'
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Temporary Error: Error running command. Error exit status 1. Output: nslookup: can't resolve 'kubernetes.default'
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
```
https://storage.googleapis.com/minikube-builds/logs/965/Linux-KVM.txt
|
1.0
|
Fail to resolve kubernetes.default [kvm] - ```
Temporary Error: Temporary Error: Error running command. Error exit status 1. Output: nslookup: can't resolve 'kubernetes.default'
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Temporary Error: Error running command. Error exit status 1. Output: nslookup: can't resolve 'kubernetes.default'
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Temporary Error: Error running command. Error exit status 1. Output: nslookup: can't resolve 'kubernetes.default'
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
```
https://storage.googleapis.com/minikube-builds/logs/965/Linux-KVM.txt
|
test
|
fail to resolve kubernetes default temporary error temporary error error running command error exit status output nslookup can t resolve kubernetes default server address kube dns kube system svc cluster local temporary error error running command error exit status output nslookup can t resolve kubernetes default server address kube dns kube system svc cluster local temporary error error running command error exit status output nslookup can t resolve kubernetes default server address kube dns kube system svc cluster local
| 1
|
25,697
| 4,166,000,448
|
IssuesEvent
|
2016-06-19 21:35:07
|
medic/medic-webapp
|
https://api.github.com/repos/medic/medic-webapp
|
closed
|
Initial replication gets stuck on Tecno
|
4 - Acceptance Testing Bug
|
@garethbowen here is a summary of my findings. Let me know if I can investigate anything more thoroughly.
Everything looks great on Chrome, all data loads within a minute or two. On Tecno, here is where I get stuck:

On the phone, initial replication status is in progress and I have a spinner on the History page, the + sign is not illuminated, even though I know the forms downloaded because I saw the requests come through on dev tools.
I tested using one of the Iganga users so that there would be some data in there to download. Could it be that my phone is slow?
Eventually it finished up:


The + sign illuminated after this, but I get "No reports found" on the History page. The Contacts tab has a spinner as well - I gave up after about 6 minutes of waiting for it to load.
Next, I reloaded. It still took over a minute for initial replication to complete:

Contacts loaded in about 20 seconds, History in 5-10.
Compare this to logging in as the same user on Chrome - initial sync completed in 12.58 seconds, History page shows reports, and Contacts tab loads within 5 seconds. On reload, initial sync completes in 3.5 seconds.
We've got an order of magnitude difference between Chrome and Tecno, which is worth investigating, IMHO.
|
1.0
|
Initial replication gets stuck on Tecno - @garethbowen here is a summary of my findings. Let me know if I can investigate anything more thoroughly.
Everything looks great on Chrome, all data loads within a minute or two. On Tecno, here is where I get stuck:

On the phone, initial replication status is in progress and I have a spinner on the History page, the + sign is not illuminated, even though I know the forms downloaded because I saw the requests come through on dev tools.
I tested using one of the Iganga users so that there would be some data in there to download. Could it be that my phone is slow?
Eventually it finished up:


The + sign illuminated after this, but I get "No reports found" on the History page. The Contacts tab has a spinner as well - I gave up after about 6 minutes of waiting for it to load.
Next, I reloaded. It still took over a minute for initial replication to complete:

Contacts loaded in about 20 seconds, History in 5-10.
Compare this to logging in as the same user on Chrome - initial sync completed in 12.58 seconds, History page shows reports, and Contacts tab loads within 5 seconds. On reload, initial sync completes in 3.5 seconds.
We've got an order of magnitude difference between Chrome and Tecno, which is worth investigating, IMHO.
|
test
|
initial replication gets stuck on tecno garethbowen here is a summary of my findings let me know if i can investigate anything more thoroughly everything looks great on chrome all data loads within a minute or two on tecno here is where i get stuck on the phone initial replication status is in progress and i have a spinner on the history page the sign is not illuminated even though i know the forms downloaded because i saw the requests come through on dev tools i tested using one of the iganga users so that there would be some data in there to download could it be that my phone is slow eventually it finished up the sign illuminated after this but i get no reports found on the history page the contacts tab has a spinner as well i gave up after about minutes of waiting for it to load next i reloaded it still took over a minute for initial replication to complete contacts loaded in about seconds history in compare this to logging in as the same user on chrome initial sync completed in seconds history page shows reports and contacts tab loads within seconds on reload initial sync completes in seconds we ve got an order of magnitude difference between chrome and tecno which is worth investigating imho
| 1
|
30,233
| 11,801,276,456
|
IssuesEvent
|
2020-03-18 19:07:49
|
jgeraigery/blueocean-environments
|
https://api.github.com/repos/jgeraigery/blueocean-environments
|
opened
|
WS-2019-0063 (High) detected in js-yaml-3.12.0.tgz
|
security vulnerability
|
## WS-2019-0063 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>js-yaml-3.12.0.tgz</b></p></summary>
<p>YAML 1.2 parser and serializer</p>
<p>Library home page: <a href="https://registry.npmjs.org/js-yaml/-/js-yaml-3.12.0.tgz">https://registry.npmjs.org/js-yaml/-/js-yaml-3.12.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/blueocean-environments/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/blueocean-environments/node_modules/js-yaml/package.json</p>
<p>
Dependency Hierarchy:
- js-extensions-0.0.44.tgz (Root Library)
- :x: **js-yaml-3.12.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/blueocean-environments/commit/906df1e2c2b1353a7f809ef51960f8980d6cec13">906df1e2c2b1353a7f809ef51960f8980d6cec13</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Js-yaml prior to 3.13.1 are vulnerable to Code Injection. The load() function may execute arbitrary code injected through a malicious YAML file.
<p>Publish Date: 2019-04-30
<p>URL: <a href=https://github.com/nodeca/js-yaml/pull/480>WS-2019-0063</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>8.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/813">https://www.npmjs.com/advisories/813</a></p>
<p>Release Date: 2019-04-30</p>
<p>Fix Resolution: 3.13.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"js-yaml","packageVersion":"3.12.0","isTransitiveDependency":true,"dependencyTree":"@jenkins-cd/js-extensions:0.0.44;js-yaml:3.12.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.13.1"}],"vulnerabilityIdentifier":"WS-2019-0063","vulnerabilityDetails":"Js-yaml prior to 3.13.1 are vulnerable to Code Injection. The load() function may execute arbitrary code injected through a malicious YAML file.","vulnerabilityUrl":"https://github.com/nodeca/js-yaml/pull/480","cvss2Severity":"high","cvss2Score":"8.0","extraData":{}}</REMEDIATE> -->
|
True
|
WS-2019-0063 (High) detected in js-yaml-3.12.0.tgz - ## WS-2019-0063 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>js-yaml-3.12.0.tgz</b></p></summary>
<p>YAML 1.2 parser and serializer</p>
<p>Library home page: <a href="https://registry.npmjs.org/js-yaml/-/js-yaml-3.12.0.tgz">https://registry.npmjs.org/js-yaml/-/js-yaml-3.12.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/blueocean-environments/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/blueocean-environments/node_modules/js-yaml/package.json</p>
<p>
Dependency Hierarchy:
- js-extensions-0.0.44.tgz (Root Library)
- :x: **js-yaml-3.12.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/blueocean-environments/commit/906df1e2c2b1353a7f809ef51960f8980d6cec13">906df1e2c2b1353a7f809ef51960f8980d6cec13</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Js-yaml prior to 3.13.1 are vulnerable to Code Injection. The load() function may execute arbitrary code injected through a malicious YAML file.
<p>Publish Date: 2019-04-30
<p>URL: <a href=https://github.com/nodeca/js-yaml/pull/480>WS-2019-0063</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>8.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/813">https://www.npmjs.com/advisories/813</a></p>
<p>Release Date: 2019-04-30</p>
<p>Fix Resolution: 3.13.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"js-yaml","packageVersion":"3.12.0","isTransitiveDependency":true,"dependencyTree":"@jenkins-cd/js-extensions:0.0.44;js-yaml:3.12.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.13.1"}],"vulnerabilityIdentifier":"WS-2019-0063","vulnerabilityDetails":"Js-yaml prior to 3.13.1 are vulnerable to Code Injection. The load() function may execute arbitrary code injected through a malicious YAML file.","vulnerabilityUrl":"https://github.com/nodeca/js-yaml/pull/480","cvss2Severity":"high","cvss2Score":"8.0","extraData":{}}</REMEDIATE> -->
|
non_test
|
ws high detected in js yaml tgz ws high severity vulnerability vulnerable library js yaml tgz yaml parser and serializer library home page a href path to dependency file tmp ws scm blueocean environments package json path to vulnerable library tmp ws scm blueocean environments node modules js yaml package json dependency hierarchy js extensions tgz root library x js yaml tgz vulnerable library found in head commit a href vulnerability details js yaml prior to are vulnerable to code injection the load function may execute arbitrary code injected through a malicious yaml file publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails js yaml prior to are vulnerable to code injection the load function may execute arbitrary code injected through a malicious yaml file vulnerabilityurl
| 0
|
43,331
| 11,204,733,344
|
IssuesEvent
|
2020-01-05 08:45:02
|
qorelanguage/qore
|
https://api.github.com/repos/qorelanguage/qore
|
closed
|
fix doxygen doc build errors
|
bug build documentation fixed module
|
newer versions of doxygen (1.8.16+) require the following changes:
* if any files appear in `INPUT`, then directories are not scanned; all source files must appear in `INPUT`
* namespaces must be nested instead of given like `namespace Parent::Child { ... }`
* groups must be delineated with group begin and end markers (`@{` and `@}`) in block comments; line comments prefixing these markers are now ignored by doxygen
the last one should be considered a bug in doxygen and could be fixed, but to ensure compatibility with previous, current, and future versions of doxygen, `qpp` and any build scripts need to be updated to deal with the above now
|
1.0
|
fix doxygen doc build errors - newer versions of doxygen (1.8.16+) require the following changes:
* if any files appear in `INPUT`, then directories are not scanned; all source files must appear in `INPUT`
* namespaces must be nested instead of given like `namespace Parent::Child { ... }`
* groups must be delineated with group begin and end markers (`@{` and `@}`) in block comments; line comments prefixing these markers are now ignored by doxygen
the last one should be considered a bug in doxygen and could be fixed, but to ensure compatibility with previous, current, and future versions of doxygen, `qpp` and any build scripts need to be updated to deal with the above now
|
non_test
|
fix doxygen doc build errors newer versions of doxygen require the following changes if any files appear in input then directories are not scanned all source files must appear in input namespaces must be nested instead of given like namespace parent child groups must be delineated with group begin and end markers and in block comments line comments prefixing these markers are now ignored by doxygen the last one should be considered a bug in doxygen and could be fixed but to ensure compatibility with previous current and future versions of doxygen qpp and any build scripts need to be updated to deal with the above now
| 0
|
134,860
| 18,512,722,571
|
IssuesEvent
|
2021-10-20 06:27:55
|
mgh3326/making_page
|
https://api.github.com/repos/mgh3326/making_page
|
opened
|
CVE-2021-3664 (Medium) detected in url-parse-1.4.7.tgz
|
security vulnerability
|
## CVE-2021-3664 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: making_page/package.json</p>
<p>Path to vulnerable library: making_page/node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.9.0.tgz (Root Library)
- sockjs-client-1.4.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mgh3326/making_page/commit/737d40c10caf82472527a77d6871d0843a51a649">737d40c10caf82472527a77d6871d0843a51a649</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2021-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3664>CVE-2021-3664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664</a></p>
<p>Release Date: 2021-07-26</p>
<p>Fix Resolution: url-parse - 1.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-3664 (Medium) detected in url-parse-1.4.7.tgz - ## CVE-2021-3664 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: making_page/package.json</p>
<p>Path to vulnerable library: making_page/node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.9.0.tgz (Root Library)
- sockjs-client-1.4.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mgh3326/making_page/commit/737d40c10caf82472527a77d6871d0843a51a649">737d40c10caf82472527a77d6871d0843a51a649</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2021-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3664>CVE-2021-3664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664</a></p>
<p>Release Date: 2021-07-26</p>
<p>Fix Resolution: url-parse - 1.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in url parse tgz cve medium severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file making page package json path to vulnerable library making page node modules url parse package json dependency hierarchy webpack dev server tgz root library sockjs client tgz x url parse tgz vulnerable library found in head commit a href vulnerability details url parse is vulnerable to url redirection to untrusted site publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse step up your open source security game with whitesource
| 0
|
830,101
| 31,989,016,471
|
IssuesEvent
|
2023-09-21 03:20:46
|
priyankarpal/ProjectsHut
|
https://api.github.com/repos/priyankarpal/ProjectsHut
|
closed
|
chore: project addition by surya-mu
|
good first issue 🟩 priority: low 🏁 status: ready for dev projects addition
|
### Add a new project to the list
I'd like to add my Project which is a Personal Portfolio Website showcasing all my talents and other projects, and also serves as a template for others.
### Record
- [X] I have checked the existing [issues](https://github.com/priyankarpal/ProjectsHut/issues)
- [X] I have read the [Contributing Guidelines](https://github.com/priyankarpal/ProjectsHut/blob/main/contributing.md)
- [X] I agree to follow this project's [Code of Conduct](https://github.com/priyankarpal/ProjectsHut/blob/main/CODE_OF_CONDUCT.md)
- [X] I want to work on this issue
|
1.0
|
chore: project addition by surya-mu - ### Add a new project to the list
I'd like to add my Project which is a Personal Portfolio Website showcasing all my talents and other projects, and also serves as a template for others.
### Record
- [X] I have checked the existing [issues](https://github.com/priyankarpal/ProjectsHut/issues)
- [X] I have read the [Contributing Guidelines](https://github.com/priyankarpal/ProjectsHut/blob/main/contributing.md)
- [X] I agree to follow this project's [Code of Conduct](https://github.com/priyankarpal/ProjectsHut/blob/main/CODE_OF_CONDUCT.md)
- [X] I want to work on this issue
|
non_test
|
chore project addition by surya mu add a new project to the list i d like to add my project which is a personal portfolio website showcasing all my talents and other projects and also serves as a template for others record i have checked the existing i have read the i agree to follow this project s i want to work on this issue
| 0
|
56,522
| 8,080,881,935
|
IssuesEvent
|
2018-08-08 00:05:15
|
neuroscout/neuroscout
|
https://api.github.com/repos/neuroscout/neuroscout
|
opened
|
Gather feedback on predictor annotation names
|
documentation enhancement help wanted
|
I've annotated most Extracted Features and Predictors. It would be great to get feedback on how useful the annotated descriptions are, and what other annotations would be useful to have.
|
1.0
|
Gather feedback on predictor annotation names - I've annotated most Extracted Features and Predictors. It would be great to get feedback on how useful the annotated descriptions are, and what other annotations would be useful to have.
|
non_test
|
gather feedback on predictor annotation names i ve annotated most extracted features and predictors it would be great to get feedback on how useful the annotated descriptions are and what other annotations would be useful to have
| 0
|
90,209
| 26,007,538,105
|
IssuesEvent
|
2022-12-20 21:00:24
|
vipm-io/vipm-desktop-issues
|
https://api.github.com/repos/vipm-io/vipm-desktop-issues
|
closed
|
VI Package build fails if it includes an LVLIB or LVCLASS with a missing Friend VI
|
bug Package Builder
|
## Overview
When attempting to build a VI package with an lvlib or lvlcass that declares a friend that is not preseent on disk in the expected location, thee build will fail with an Error 7 stating that there is a missing VI.
```
VIPM API_vipm_api.lvlib:Parse Build Return Message_vipm_api.vi<ERR>
Code:: 7
Source:: 7882C6B20F4F1734FAE5DD4402F323AA<ERR>
The following source VIs or Libraries are missing.
Please correct this problem before rebuilding.
SSL.vi
The following source VIs or Libraries are the callers of missing files
Lonely.lvlib
```
VIPM Versions Affected: 2021, 2022
OS'es Affected: All
Status: Not yet fixed
Test Case: [tests/Issue\_000009\_build\_package\_with\_missing\_friend](https://github.com/vipm-io/vipm-desktop-issues/tree/main/tests/Issue_000009_build_package_with_missing_friend)
Workaround: See below
## Error Symptoms
### Building with VI Package Builder UI
A dialog is shown stating that there is a missing dependency. The missing Friend VI.


### Building with VIPM API
An Error 7 occurs stating that there is a missing VI.

## Root Issue
The issue is that the VI Package source includes an lvlib or lvclass that declares a friend that does not actually exist on disk. This does not cause problems for the lvlib, since it's more of a rule that applies to allowing the friend to call community-scoped members of the lvlib that declares the friend. However, VIPM is considering this friend a dependency, when it should not.

## Possible Workaround
Prior to building the VI package, put a dummy VI (blank VI) in the location on disk where the Friend VI is expected to be found. This can be done programmatically using using pre-build and post-build custom actions (before and after the package build) in the VI Package Build spec.
|
1.0
|
VI Package build fails if it includes an LVLIB or LVCLASS with a missing Friend VI - ## Overview
When attempting to build a VI package with an lvlib or lvlcass that declares a friend that is not preseent on disk in the expected location, thee build will fail with an Error 7 stating that there is a missing VI.
```
VIPM API_vipm_api.lvlib:Parse Build Return Message_vipm_api.vi<ERR>
Code:: 7
Source:: 7882C6B20F4F1734FAE5DD4402F323AA<ERR>
The following source VIs or Libraries are missing.
Please correct this problem before rebuilding.
SSL.vi
The following source VIs or Libraries are the callers of missing files
Lonely.lvlib
```
VIPM Versions Affected: 2021, 2022
OS'es Affected: All
Status: Not yet fixed
Test Case: [tests/Issue\_000009\_build\_package\_with\_missing\_friend](https://github.com/vipm-io/vipm-desktop-issues/tree/main/tests/Issue_000009_build_package_with_missing_friend)
Workaround: See below
## Error Symptoms
### Building with VI Package Builder UI
A dialog is shown stating that there is a missing dependency. The missing Friend VI.


### Building with VIPM API
An Error 7 occurs stating that there is a missing VI.

## Root Issue
The issue is that the VI Package source includes an lvlib or lvclass that declares a friend that does not actually exist on disk. This does not cause problems for the lvlib, since it's more of a rule that applies to allowing the friend to call community-scoped members of the lvlib that declares the friend. However, VIPM is considering this friend a dependency, when it should not.

## Possible Workaround
Prior to building the VI package, put a dummy VI (blank VI) in the location on disk where the Friend VI is expected to be found. This can be done programmatically using using pre-build and post-build custom actions (before and after the package build) in the VI Package Build spec.
|
non_test
|
vi package build fails if it includes an lvlib or lvclass with a missing friend vi overview when attempting to build a vi package with an lvlib or lvlcass that declares a friend that is not preseent on disk in the expected location thee build will fail with an error stating that there is a missing vi vipm api vipm api lvlib parse build return message vipm api vi code source the following source vis or libraries are missing please correct this problem before rebuilding ssl vi the following source vis or libraries are the callers of missing files lonely lvlib vipm versions affected os es affected all status not yet fixed test case workaround see below error symptoms building with vi package builder ui a dialog is shown stating that there is a missing dependency the missing friend vi building with vipm api an error occurs stating that there is a missing vi root issue the issue is that the vi package source includes an lvlib or lvclass that declares a friend that does not actually exist on disk this does not cause problems for the lvlib since it s more of a rule that applies to allowing the friend to call community scoped members of the lvlib that declares the friend however vipm is considering this friend a dependency when it should not possible workaround prior to building the vi package put a dummy vi blank vi in the location on disk where the friend vi is expected to be found this can be done programmatically using using pre build and post build custom actions before and after the package build in the vi package build spec
| 0
|
200,332
| 15,099,815,570
|
IssuesEvent
|
2021-02-08 03:44:12
|
the-canonizer/canonizer.2.0
|
https://api.github.com/repos/the-canonizer/canonizer.2.0
|
closed
|
Hyperlink on Browsing as: Guest
|
20th Release Fixed ready to test
|
Do we need hyperlink on Browsing as: Guest, because as of now I can see the hyperlink but it is not redirecting to anywhere
|
1.0
|
Hyperlink on Browsing as: Guest - Do we need hyperlink on Browsing as: Guest, because as of now I can see the hyperlink but it is not redirecting to anywhere
|
test
|
hyperlink on browsing as guest do we need hyperlink on browsing as guest because as of now i can see the hyperlink but it is not redirecting to anywhere
| 1
|
124,292
| 17,772,522,931
|
IssuesEvent
|
2021-08-30 15:09:36
|
kapseliboi/Vue2Leaflet
|
https://api.github.com/repos/kapseliboi/Vue2Leaflet
|
opened
|
CVE-2021-3664 (Medium) detected in url-parse-1.4.7.tgz
|
security vulnerability
|
## CVE-2021-3664 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: Vue2Leaflet/package.json</p>
<p>Path to vulnerable library: Vue2Leaflet/node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- vuepress-1.7.1.tgz (Root Library)
- core-1.7.1.tgz
- webpack-dev-server-3.11.0.tgz
- sockjs-client-1.4.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/Vue2Leaflet/commit/53817e18041a05f9f6ac4b02e9520262cf910bcf">53817e18041a05f9f6ac4b02e9520262cf910bcf</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2021-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3664>CVE-2021-3664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664</a></p>
<p>Release Date: 2021-07-26</p>
<p>Fix Resolution: url-parse - 1.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-3664 (Medium) detected in url-parse-1.4.7.tgz - ## CVE-2021-3664 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: Vue2Leaflet/package.json</p>
<p>Path to vulnerable library: Vue2Leaflet/node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- vuepress-1.7.1.tgz (Root Library)
- core-1.7.1.tgz
- webpack-dev-server-3.11.0.tgz
- sockjs-client-1.4.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/Vue2Leaflet/commit/53817e18041a05f9f6ac4b02e9520262cf910bcf">53817e18041a05f9f6ac4b02e9520262cf910bcf</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2021-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3664>CVE-2021-3664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664</a></p>
<p>Release Date: 2021-07-26</p>
<p>Fix Resolution: url-parse - 1.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in url parse tgz cve medium severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file package json path to vulnerable library node modules url parse package json dependency hierarchy vuepress tgz root library core tgz webpack dev server tgz sockjs client tgz x url parse tgz vulnerable library found in head commit a href found in base branch master vulnerability details url parse is vulnerable to url redirection to untrusted site publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse step up your open source security game with whitesource
| 0
|
59,279
| 6,643,517,644
|
IssuesEvent
|
2017-09-27 11:39:52
|
konstantin-ciklum/test-waffle-repo-testing
|
https://api.github.com/repos/konstantin-ciklum/test-waffle-repo-testing
|
closed
|
Testing Task for "Story 2 - Continuing"
|
Continuing #28 In Test Testing
|
**Reference to Story:**
konstantin-ciklum/test-waffle-repo-stories#28
**Reference to Test cases:**
_Link to TestRail_
**Required environmental need for start testing:**
- Readiness of the dependent stories (e.g. to
start testing of the Story #1 we should already
have Story #2 and Story #5 been
implemented)
- Required devices
- Environment
- Tools
- etc.
**Notes:**
If some important or FYI notes should be stated
|
2.0
|
Testing Task for "Story 2 - Continuing" - **Reference to Story:**
konstantin-ciklum/test-waffle-repo-stories#28
**Reference to Test cases:**
_Link to TestRail_
**Required environmental need for start testing:**
- Readiness of the dependent stories (e.g. to
start testing of the Story #1 we should already
have Story #2 and Story #5 been
implemented)
- Required devices
- Environment
- Tools
- etc.
**Notes:**
If some important or FYI notes should be stated
|
test
|
testing task for story continuing reference to story konstantin ciklum test waffle repo stories reference to test cases link to testrail required environmental need for start testing readiness of the dependent stories e g to start testing of the story we should already have story and story been implemented required devices environment tools etc notes if some important or fyi notes should be stated
| 1
|
148,979
| 11,873,758,769
|
IssuesEvent
|
2020-03-26 17:49:08
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
none: TestVersionUpgrade 'exit 0' : driver does not support ssh command
|
kind/failing-test priority/important-soon
|
as seen in Two PRs:
https://storage.googleapis.com/minikube-builds/logs/7182/3c37556/none_Linux.html#fail_TestVersionUpgrade
and
https://storage.googleapis.com/minikube-builds/logs/7173/aec6fdc/none_Linux.html#fail_TestVersionUpgrade
|
1.0
|
none: TestVersionUpgrade 'exit 0' : driver does not support ssh command - as seen in Two PRs:
https://storage.googleapis.com/minikube-builds/logs/7182/3c37556/none_Linux.html#fail_TestVersionUpgrade
and
https://storage.googleapis.com/minikube-builds/logs/7173/aec6fdc/none_Linux.html#fail_TestVersionUpgrade
|
test
|
none testversionupgrade exit driver does not support ssh command as seen in two prs and
| 1
|
65,982
| 6,980,909,797
|
IssuesEvent
|
2017-12-13 04:49:05
|
bwsw/cs-entities
|
https://api.github.com/repos/bwsw/cs-entities
|
closed
|
Write integration tests to test DAO for user
|
integration test
|
1. Use a specific subclass of Request to create a request to get a non-existing entity.
2. Request the non-existing entity from 1st step through DAO (the empty list should be returned).
3. Use a specific subclass of Request to create a user.
4. Execute the request from the previous step through DAO.
5. Request the user through DAO (need to check that fields specified with the parameters are equal to fields of received entity).
6. Create one more user (repeat steps 3, 4).
7. Request a list of users through DAO (need to check that fields specified with the parameters are equal to fields of received entities).
|
1.0
|
Write integration tests to test DAO for user - 1. Use a specific subclass of Request to create a request to get a non-existing entity.
2. Request the non-existing entity from 1st step through DAO (the empty list should be returned).
3. Use a specific subclass of Request to create a user.
4. Execute the request from the previous step through DAO.
5. Request the user through DAO (need to check that fields specified with the parameters are equal to fields of received entity).
6. Create one more user (repeat steps 3, 4).
7. Request a list of users through DAO (need to check that fields specified with the parameters are equal to fields of received entities).
|
test
|
write integration tests to test dao for user use a specific subclass of request to create a request to get a non existing entity request the non existing entity from step through dao the empty list should be returned use a specific subclass of request to create a user execute the request from the previous step through dao request the user through dao need to check that fields specified with the parameters are equal to fields of received entity create one more user repeat steps request a list of users through dao need to check that fields specified with the parameters are equal to fields of received entities
| 1
|
73,197
| 31,930,109,060
|
IssuesEvent
|
2023-09-19 06:46:05
|
GovernIB/dir3caib
|
https://api.github.com/repos/GovernIB/dir3caib
|
closed
|
Descarga y sincronización completa
|
Tipus:BBDD Tipus:Refactoritzacio Lloc:General Lloc:Core Lloc:EJB Lloc:WebServices Estimació: EPIC
|
La descarga y sincronización completa del directorio presenta varios problemas, entre ellos los timeout de descarga de ficheros por parte de los WS a tal efecto y los continuos problemas de datos incompletos.
Es por ello, que se propone dividir la descarga por niveles de administración y realizar la sincronización por unidades, siguiendo con el resto de elementos.
También se propone rehacer el método "restaurarUnidadesOficinas" para realizar una descarga correcta primero, y después eliminar todo el contenido, para finalmente sincronizar los nuevos datos obtenidos.
|
1.0
|
Descarga y sincronización completa - La descarga y sincronización completa del directorio presenta varios problemas, entre ellos los timeout de descarga de ficheros por parte de los WS a tal efecto y los continuos problemas de datos incompletos.
Es por ello, que se propone dividir la descarga por niveles de administración y realizar la sincronización por unidades, siguiendo con el resto de elementos.
También se propone rehacer el método "restaurarUnidadesOficinas" para realizar una descarga correcta primero, y después eliminar todo el contenido, para finalmente sincronizar los nuevos datos obtenidos.
|
non_test
|
descarga y sincronización completa la descarga y sincronización completa del directorio presenta varios problemas entre ellos los timeout de descarga de ficheros por parte de los ws a tal efecto y los continuos problemas de datos incompletos es por ello que se propone dividir la descarga por niveles de administración y realizar la sincronización por unidades siguiendo con el resto de elementos también se propone rehacer el método restaurarunidadesoficinas para realizar una descarga correcta primero y después eliminar todo el contenido para finalmente sincronizar los nuevos datos obtenidos
| 0
|
173,594
| 6,528,254,839
|
IssuesEvent
|
2017-08-30 06:37:42
|
coloredcow/WP-RSVP
|
https://api.github.com/repos/coloredcow/WP-RSVP
|
closed
|
RSVP update available.
|
priority : high status : ready for review type : bug
|
Looks like the name's clashing with another existing plugin. Also, if we're going to submit this to WP, we'll definitely have to fix this name clash.
|
1.0
|
RSVP update available. - Looks like the name's clashing with another existing plugin. Also, if we're going to submit this to WP, we'll definitely have to fix this name clash.
|
non_test
|
rsvp update available looks like the name s clashing with another existing plugin also if we re going to submit this to wp we ll definitely have to fix this name clash
| 0
|
117,402
| 15,096,421,678
|
IssuesEvent
|
2021-02-07 15:01:36
|
ant-design/pro-components
|
https://api.github.com/repos/ant-design/pro-components
|
closed
|
期望 protable的 "Search 搜索表单" 可以在新的一行👑 [需求]
|
table 🎖️ featrue 🎨 By Design
|
### 🥰 需求描述
期望 protable的 "Search 搜索表单" 可以在新的一行
<!--
详细地描述需求,让大家都能理解
-->
### 🧐 解决方案
searchButtonInline = true
<!--
如果你有解决方案,在这里清晰地阐述
-->
### 🚑 其他信息
<!--
如截图等其他信息可以贴在这里
-->
|
1.0
|
期望 protable的 "Search 搜索表单" 可以在新的一行👑 [需求] - ### 🥰 需求描述
期望 protable的 "Search 搜索表单" 可以在新的一行
<!--
详细地描述需求,让大家都能理解
-->
### 🧐 解决方案
searchButtonInline = true
<!--
如果你有解决方案,在这里清晰地阐述
-->
### 🚑 其他信息
<!--
如截图等其他信息可以贴在这里
-->
|
non_test
|
期望 protable的 search 搜索表单 可以在新的一行👑 🥰 需求描述 期望 protable的 search 搜索表单 可以在新的一行 详细地描述需求,让大家都能理解 🧐 解决方案 searchbuttoninline true 如果你有解决方案,在这里清晰地阐述 🚑 其他信息 如截图等其他信息可以贴在这里
| 0
|
238,573
| 19,725,765,174
|
IssuesEvent
|
2022-01-13 19:47:53
|
dryicejayr/project1
|
https://api.github.com/repos/dryicejayr/project1
|
opened
|
Nucleus - [High] - 440039
|
test
|
Source: QUALYS
Finding Description: CentOS has released security update for kernel to fix the vulnerabilities. Affected Products: centos 6
Impact: This vulnerability could be exploited to gain complete access to sensitive information. Malicious users could also use this vulnerability to change all the contents or configuration on the system. Additionally this vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable.
Target(s): Asset name: 192.168.56.103
Asset name: 192.168.56.104
Solution: To resolve this issue, upgrade to the latest packages which contain a patch. Refer to CentOS advisory centos 6 (https://lists.centos.org/pipermail/centos-cr-announce/2018-June/005268.html) for updates and patch information.
Patch:
Following are links for downloading patches to fix the vulnerabilities:
CESA-2018:1854: centos 6 (https://lists.centos.org/pipermail/centos-cr-announce/2018-June/005268.html)
References:
QID:440039
CVE:CVE-2018-3639, CVE-2017-7308, CVE-2012-6701, CVE-2015-8830, CVE-2016-8650, CVE-2017-2671, CVE-2017-6001, CVE-2017-7616, CVE-2017-7889, CVE-2017-8890, CVE-2017-9075, CVE-2017-9076, CVE-2017-9077, CVE-2017-12190, CVE-2017-15121, CVE-2017-18203, CVE-2018-1130, CVE-2018-5803
Category:CentOS
PCI Flagged:yes
Vendor References:CESA-2018:1854 centos 6
Bugtraq IDs:104232, 101911, 102128, 103184, 97407, 96264, 97234, 97527, 97690, 98562, 98597, 98586, 98583, 94532
Severity: High
Date Discovered: 2021-06-01 10:43:00
Nucleus Notification Rules Triggered: test
Project Name: 3767
Please see Nucleus for more information on these vulnerabilities:https://192.168.56.101/nucleus/public/app/index.html#vuln/15000074/NDQwMDM5/UVVBTFlT/VnVsbg--/false/MTUwMDAwNzQ-/c3VtbWFyeQ--/false
|
1.0
|
Nucleus - [High] - 440039 - Source: QUALYS
Finding Description: CentOS has released security update for kernel to fix the vulnerabilities. Affected Products: centos 6
Impact: This vulnerability could be exploited to gain complete access to sensitive information. Malicious users could also use this vulnerability to change all the contents or configuration on the system. Additionally this vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable.
Target(s): Asset name: 192.168.56.103
Asset name: 192.168.56.104
Solution: To resolve this issue, upgrade to the latest packages which contain a patch. Refer to CentOS advisory centos 6 (https://lists.centos.org/pipermail/centos-cr-announce/2018-June/005268.html) for updates and patch information.
Patch:
Following are links for downloading patches to fix the vulnerabilities:
CESA-2018:1854: centos 6 (https://lists.centos.org/pipermail/centos-cr-announce/2018-June/005268.html)
References:
QID:440039
CVE:CVE-2018-3639, CVE-2017-7308, CVE-2012-6701, CVE-2015-8830, CVE-2016-8650, CVE-2017-2671, CVE-2017-6001, CVE-2017-7616, CVE-2017-7889, CVE-2017-8890, CVE-2017-9075, CVE-2017-9076, CVE-2017-9077, CVE-2017-12190, CVE-2017-15121, CVE-2017-18203, CVE-2018-1130, CVE-2018-5803
Category:CentOS
PCI Flagged:yes
Vendor References:CESA-2018:1854 centos 6
Bugtraq IDs:104232, 101911, 102128, 103184, 97407, 96264, 97234, 97527, 97690, 98562, 98597, 98586, 98583, 94532
Severity: High
Date Discovered: 2021-06-01 10:43:00
Nucleus Notification Rules Triggered: test
Project Name: 3767
Please see Nucleus for more information on these vulnerabilities:https://192.168.56.101/nucleus/public/app/index.html#vuln/15000074/NDQwMDM5/UVVBTFlT/VnVsbg--/false/MTUwMDAwNzQ-/c3VtbWFyeQ--/false
|
test
|
nucleus source qualys finding description centos has released security update for kernel to fix the vulnerabilities affected products centos impact this vulnerability could be exploited to gain complete access to sensitive information malicious users could also use this vulnerability to change all the contents or configuration on the system additionally this vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable target s asset name asset name solution to resolve this issue upgrade to the latest packages which contain a patch refer to centos advisory centos for updates and patch information patch following are links for downloading patches to fix the vulnerabilities cesa centos references qid cve cve cve cve cve cve cve cve cve cve cve cve cve cve cve cve cve cve cve category centos pci flagged yes vendor references cesa centos bugtraq ids severity high date discovered nucleus notification rules triggered test project name please see nucleus for more information on these vulnerabilities
| 1
|
339,654
| 30,463,362,537
|
IssuesEvent
|
2023-07-17 08:38:44
|
openobserve/openobserve
|
https://api.github.com/repos/openobserve/openobserve
|
closed
|
progress indicator for query execution
|
testing In Progress
|
### Which OpenObserve functionalities are relevant/related to the feature request?
log search
### Description
Currently when you start to run a query there is a pop up that appears which says `Waiting for response` and goes away. There is no indication on the screen if the query is running until you get the response.
### Proposed solution
Displaying an hourglass or something similar as progress indicator will be greatly useful for the user.
### Alternatives considered
na
|
1.0
|
progress indicator for query execution - ### Which OpenObserve functionalities are relevant/related to the feature request?
log search
### Description
Currently when you start to run a query there is a pop up that appears which says `Waiting for response` and goes away. There is no indication on the screen if the query is running until you get the response.
### Proposed solution
Displaying an hourglass or something similar as progress indicator will be greatly useful for the user.
### Alternatives considered
na
|
test
|
progress indicator for query execution which openobserve functionalities are relevant related to the feature request log search description currently when you start to run a query there is a pop up that appears which says waiting for response and goes away there is no indication on the screen if the query is running until you get the response proposed solution displaying an hourglass or something similar as progress indicator will be greatly useful for the user alternatives considered na
| 1
|
505,691
| 14,643,829,287
|
IssuesEvent
|
2020-12-25 19:00:51
|
bounswe/bounswe2020group7
|
https://api.github.com/repos/bounswe/bounswe2020group7
|
closed
|
Backend- User Search Sorting Functionality
|
Priority: Medium Subteam: Backend Type: Improvement
|
Implement the sorting functionality for user search according to following requirement:
1.1.6.5 Users and guests should be able to sort search results of users according to the sorting criteria as alphabetical order.
|
1.0
|
Backend- User Search Sorting Functionality - Implement the sorting functionality for user search according to following requirement:
1.1.6.5 Users and guests should be able to sort search results of users according to the sorting criteria as alphabetical order.
|
non_test
|
backend user search sorting functionality implement the sorting functionality for user search according to following requirement users and guests should be able to sort search results of users according to the sorting criteria as alphabetical order
| 0
|
165,783
| 12,879,872,077
|
IssuesEvent
|
2020-07-12 01:27:28
|
osquery/osquery
|
https://api.github.com/repos/osquery/osquery
|
closed
|
Create tests for table `block_devices`
|
Linux good-first-issue macOS test
|
## Create tests for table `block_devices`
- Create header file for the table implementation, if one is not exists.
- In test, query the table and check if retrieved columns (name and types) match the columns from table spec.
- If there is any guarantee to number of rows (e.g. only 1 record in every query result, more than 3 records or something else) check it.
- Test the implementation details of the table, if it possible.
Table spec: `specs/posix/block_devices.table`
Source files:
- `osquery/tables/system/linux/block_devices.cpp`
- `osquery/tables/system/darwin/block_devices.cpp`
Table generating function: `genBlockDevs()`
|
1.0
|
Create tests for table `block_devices` - ## Create tests for table `block_devices`
- Create a header file for the table implementation, if one does not exist.
- In the test, query the table and check that the retrieved columns (names and types) match the columns from the table spec.
- If there is any guarantee about the number of rows (e.g. exactly 1 record in every query result, more than 3 records, or something else), check it.
- Test the implementation details of the table, if possible.
Table spec: `specs/posix/block_devices.table`
Source files:
- `osquery/tables/system/linux/block_devices.cpp`
- `osquery/tables/system/darwin/block_devices.cpp`
Table generating function: `genBlockDevs()`
|
test
|
create tests for table block devices create tests for table block devices create header file for the table implementation if one is not exists in test query the table and check if retrieved columns name and types match the columns from table spec if there is any guarantee to number of rows e g only record in every query result more than records or something else check it test the implementation details of the table if it possible table spec specs posix block devices table source files osquery tables system linux block devices cpp osquery tables system darwin block devices cpp table generating function genblockdevs
| 1
|
55,973
| 8,037,862,592
|
IssuesEvent
|
2018-07-30 13:54:55
|
terraform-providers/terraform-provider-azurerm
|
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
|
closed
|
Rerun of terraform plan without any config change will cause force new of AKS resource
|
documentation service/kubernetes-cluster upstream-microsoft
|
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
v0.11.7
### Affected Resource(s)
* azurerm_kubernetes_cluster
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_resource_group" "test" {
name = "acctestRG-2022"
location = "westeurope"
}
resource "azurerm_virtual_network" "test" {
name = "acctestvirtnet2022"
address_space = ["10.1.0.0/16"]
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
tags {
environment = "Testing"
}
}
resource "azurerm_subnet" "test" {
name = "acctestsubnet2022"
resource_group_name = "${azurerm_resource_group.test.name}"
virtual_network_name = "${azurerm_virtual_network.test.name}"
address_prefix = "10.1.0.0/24"
}
resource "azurerm_kubernetes_cluster" "test" {
name = "acctestaks2022"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
dns_prefix = "acctestaks2022"
kubernetes_version = "1.7.7"
linux_profile {
admin_username = "acctestuser2022"
ssh_key {
key_data = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqaZoyiz1qbdOQ8xEf6uEu1cCwYowo5FHtsBhqLoDnnp7KUTEBN+L2NxRIfQ781rxV6Iq5jSav6b2Q8z5KiseOlvKA/RF2wqU0UPYqQviQhLmW6THTpmrv/YkUCuzxDpsH7DUDhZcwySLKVVe0Qm3+5N2Ta6UYH3lsDf9R9wTP2K/+vAnflKebuypNlmocIvakFWoZda18FOmsOoIVXQ8HWFNCuw9ZCunMSN62QGamCe3dL5cXlkgHYv7ekJE15IA9aOJcM7e90oeTqo+7HTcWfdu0qQqPWY5ujyMw/llas8tsXY85LFqRnr3gJ02bAscjc477+X+j/gkpFoN1QEmt terraform@demo.tld"
}
}
agent_pool_profile {
name = "default"
count = "2"
vm_size = "Standard_DS2_v2"
vnet_subnet_id = "${azurerm_subnet.test.id}"
}
service_principal {
client_id = "################"
client_secret = "################"
}
network_profile {
network_plugin = "azure"
pod_cidr = "10.244.0.0/24"
}
}
```
### Expected Behavior
Rerun of `terraform plan` should not report any changes.
### Actual Behavior
```sh
network_profile.0.pod_cidr: "" => "10.244.0.0/24" (forces new resource)
```
<!--- What actually happened? --->
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform init`
2. `terraform plan`
3. `terraform apply`
4. `terraform plan`
|
1.0
|
Rerun of terraform plan without any config change will cause force new of AKS resource - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
v0.11.7
### Affected Resource(s)
* azurerm_kubernetes_cluster
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_resource_group" "test" {
name = "acctestRG-2022"
location = "westeurope"
}
resource "azurerm_virtual_network" "test" {
name = "acctestvirtnet2022"
address_space = ["10.1.0.0/16"]
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
tags {
environment = "Testing"
}
}
resource "azurerm_subnet" "test" {
name = "acctestsubnet2022"
resource_group_name = "${azurerm_resource_group.test.name}"
virtual_network_name = "${azurerm_virtual_network.test.name}"
address_prefix = "10.1.0.0/24"
}
resource "azurerm_kubernetes_cluster" "test" {
name = "acctestaks2022"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
dns_prefix = "acctestaks2022"
kubernetes_version = "1.7.7"
linux_profile {
admin_username = "acctestuser2022"
ssh_key {
key_data = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqaZoyiz1qbdOQ8xEf6uEu1cCwYowo5FHtsBhqLoDnnp7KUTEBN+L2NxRIfQ781rxV6Iq5jSav6b2Q8z5KiseOlvKA/RF2wqU0UPYqQviQhLmW6THTpmrv/YkUCuzxDpsH7DUDhZcwySLKVVe0Qm3+5N2Ta6UYH3lsDf9R9wTP2K/+vAnflKebuypNlmocIvakFWoZda18FOmsOoIVXQ8HWFNCuw9ZCunMSN62QGamCe3dL5cXlkgHYv7ekJE15IA9aOJcM7e90oeTqo+7HTcWfdu0qQqPWY5ujyMw/llas8tsXY85LFqRnr3gJ02bAscjc477+X+j/gkpFoN1QEmt terraform@demo.tld"
}
}
agent_pool_profile {
name = "default"
count = "2"
vm_size = "Standard_DS2_v2"
vnet_subnet_id = "${azurerm_subnet.test.id}"
}
service_principal {
client_id = "################"
client_secret = "################"
}
network_profile {
network_plugin = "azure"
pod_cidr = "10.244.0.0/24"
}
}
```
### Expected Behavior
Rerun of `terraform plan` should not report any changes.
### Actual Behavior
```sh
network_profile.0.pod_cidr: "" => "10.244.0.0/24" (forces new resource)
```
<!--- What actually happened? --->
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform init`
2. `terraform plan`
3. `terraform apply`
4. `terraform plan`
|
non_test
|
rerun of terraform plan without any config change will cause force new of aks resource community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform version affected resource s azurerm kubernetes cluster terraform configuration files hcl resource azurerm resource group test name acctestrg location westeurope resource azurerm virtual network test name address space location azurerm resource group test location resource group name azurerm resource group test name tags environment testing resource azurerm subnet test name resource group name azurerm resource group test name virtual network name azurerm virtual network test name address prefix resource azurerm kubernetes cluster test name location azurerm resource group test location resource group name azurerm resource group test name dns prefix kubernetes version linux profile admin username ssh key key data ssh rsa x j terraform demo tld agent pool profile name default count vm size standard vnet subnet id azurerm subnet test id service principal client id client secret network profile network plugin azure pod cidr expected behavior rerun of terraform plan should be not have any change actual behavior sh network profile pod cidr forces new resource steps to reproduce terraform init terraform plan terraform apply terraform plan
| 0
|
105,681
| 9,100,196,401
|
IssuesEvent
|
2019-02-20 07:47:07
|
humera987/FXLabs-Test-Automation
|
https://api.github.com/repos/humera987/FXLabs-Test-Automation
|
opened
|
Test : ApiV1TestSuitesGetQueryParamPageSla
|
test
|
Project : Test
Job : Default
Env : Default
Category : null
Tags : null
Severity : null
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=MDM1MjdlODEtYzZhOS00NjYzLWEyOGQtNzFlMmZhMGRhZmMy; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 20 Feb 2019 07:47:02 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/test-suites?page=1001&pageSize=1001
Request :
Response :
{
"timestamp" : "2019-02-20T07:47:02.754+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/test-suites"
}
Logs :
Assertion [@StatusCode == 200 AND @ResponseTime < 1000] resolved-to [404 == 200 AND 567 < 1000] result [Failed]
--- FX Bot ---
|
1.0
|
Test : ApiV1TestSuitesGetQueryParamPageSla - Project : Test
Job : Default
Env : Default
Category : null
Tags : null
Severity : null
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=MDM1MjdlODEtYzZhOS00NjYzLWEyOGQtNzFlMmZhMGRhZmMy; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 20 Feb 2019 07:47:02 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/test-suites?page=1001&pageSize=1001
Request :
Response :
{
"timestamp" : "2019-02-20T07:47:02.754+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/test-suites"
}
Logs :
Assertion [@StatusCode == 200 AND @ResponseTime < 1000] resolved-to [404 == 200 AND 567 < 1000] result [Failed]
--- FX Bot ---
|
test
|
test project test job default env default category null tags null severity null region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api test suites logs assertion resolved to result fx bot
| 1
|
269,554
| 28,960,217,100
|
IssuesEvent
|
2023-05-10 01:24:14
|
dpteam/RK3188_TABLET
|
https://api.github.com/repos/dpteam/RK3188_TABLET
|
reopened
|
CVE-2021-28712 (Medium) detected in linuxv3.0
|
Mend: dependency security vulnerability
|
## CVE-2021-28712 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.0</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/verygreen/linux.git>https://github.com/verygreen/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/dpteam/RK3188_TABLET/commit/0c501f5a0fd72c7b2ac82904235363bd44fd8f9e">0c501f5a0fd72c7b2ac82904235363bd44fd8f9e</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/xen-netfront.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Rogue backends can cause DoS of guests via high frequency events T[his CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] Xen offers the ability to run PV backends in regular unprivileged guests, typically referred to as "driver domains". Running PV backends in driver domains has one primary security advantage: if a driver domain gets compromised, it doesn't have the privileges to take over the system. However, a malicious driver domain could try to attack other guests via sending events at a high frequency leading to a Denial of Service in the guest due to trying to service interrupts for elongated amounts of time. There are three affected backends: * blkfront patch 1, CVE-2021-28711 * netfront patch 2, CVE-2021-28712 * hvc_xen (console) patch 3, CVE-2021-28713
<p>Publish Date: 2022-01-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-28712>CVE-2021-28712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-28712">https://www.linuxkernelcves.com/cves/CVE-2021-28712</a></p>
<p>Release Date: 2022-01-05</p>
<p>Fix Resolution: v4.4.296,v4.9.294,v4.14.259,v4.19.222,v5.4.168,v5.10.88,v5.15.11,v5.16-rc7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-28712 (Medium) detected in linuxv3.0 - ## CVE-2021-28712 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.0</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/verygreen/linux.git>https://github.com/verygreen/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/dpteam/RK3188_TABLET/commit/0c501f5a0fd72c7b2ac82904235363bd44fd8f9e">0c501f5a0fd72c7b2ac82904235363bd44fd8f9e</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/xen-netfront.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Rogue backends can cause DoS of guests via high frequency events T[his CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] Xen offers the ability to run PV backends in regular unprivileged guests, typically referred to as "driver domains". Running PV backends in driver domains has one primary security advantage: if a driver domain gets compromised, it doesn't have the privileges to take over the system. However, a malicious driver domain could try to attack other guests via sending events at a high frequency leading to a Denial of Service in the guest due to trying to service interrupts for elongated amounts of time. There are three affected backends: * blkfront patch 1, CVE-2021-28711 * netfront patch 2, CVE-2021-28712 * hvc_xen (console) patch 3, CVE-2021-28713
<p>Publish Date: 2022-01-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-28712>CVE-2021-28712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-28712">https://www.linuxkernelcves.com/cves/CVE-2021-28712</a></p>
<p>Release Date: 2022-01-05</p>
<p>Fix Resolution: v4.4.296,v4.9.294,v4.14.259,v4.19.222,v5.4.168,v5.10.88,v5.15.11,v5.16-rc7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch master vulnerable source files drivers net xen netfront c vulnerability details rogue backends can cause dos of guests via high frequency events t xen offers the ability to run pv backends in regular unprivileged guests typically referred to as driver domains running pv backends in driver domains has one primary security advantage if a driver domain gets compromised it doesn t have the privileges to take over the system however a malicious driver domain could try to attack other guests via sending events at a high frequency leading to a denial of service in the guest due to trying to service interrupts for elongated amounts of time there are three affected backends blkfront patch cve netfront patch cve hvc xen console patch cve publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope changed impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
12,187
| 4,385,708,712
|
IssuesEvent
|
2016-08-08 09:56:26
|
mosbth/cimage
|
https://api.github.com/repos/mosbth/cimage
|
closed
|
Make CImage less hardcoded on what subclasses are used by injecting class names.
|
code quality feature
|
Enable in the config file to set what class names to use as subclasses for CImage. To ease up on the dependencies and introduce dependency injection.
It looks like this in the config file for now.
```
/**
* Class names to use, to ease dependency injection. You can change Class
* name if you want to use your own class instead. This is a way to extend
* the codebase.
*
* Default values:
* CImage: CImage
* CCache: CCache
* CFastTrackCache: CFastTrackCache
*/
//'CImage' => 'CImage',
//'CCache' => 'CCache',
//'CFastTrackCache' => 'CFastTrackCache',
```
Change the class name in the config file and `img.php` will inject it into CImage, which will use it.
Partly implemented for some classes, for the time being.
|
1.0
|
Make CImage less hardcoded on what subclasses are used by injecting class names. - Enable in the config file to set what class names to use as subclasses for CImage. To ease up on the dependencies and introduce dependency injection.
It looks like this in the config file for now.
```
/**
* Class names to use, to ease dependency injection. You can change Class
* name if you want to use your own class instead. This is a way to extend
* the codebase.
*
* Default values:
* CImage: CImage
* CCache: CCache
* CFastTrackCache: CFastTrackCache
*/
//'CImage' => 'CImage',
//'CCache' => 'CCache',
//'CFastTrackCache' => 'CFastTrackCache',
```
Change the class name in the config file and `img.php` will inject it into CImage, which will use it.
Partly implemented for some classes, for the time being.
|
non_test
|
make cimage less hardcoded on what subclasses are used by injecting class names enable in the config file to set what class names to use as subclasses for cimage to ease up on the dependencies and introduce dependency injection it looks like this in the config file for now class names to use to ease dependency injection you can change class name if you want to use your own class instead this is a way to extend the codebase default values cimage cimage ccache ccache cfasttrackcache cfasttrackcache cimage cimage ccache ccache cfasttrackcache cfasttrackcache change the classname in the config file and img php will inject it into cimage that will use it partly implemented for some classes for the time being
| 0
|
417,987
| 12,191,423,431
|
IssuesEvent
|
2020-04-29 11:06:30
|
netdata/netdata
|
https://api.github.com/repos/netdata/netdata
|
closed
|
Copy old backends configuration to a new Exporting engine configuration file
|
area/backends feature request priority/high
|
<!---
When creating a feature request please:
- Verify first that your issue is not already reported on GitHub
- Explain new feature briefly in "Feature idea summary" section
- Provide a clear and concise description of what you expect to happen.
--->
##### Feature idea summary
##### Expected behavior
|
1.0
|
Copy old backends configuration to a new Exporting engine configuration file - <!---
When creating a feature request please:
- Verify first that your issue is not already reported on GitHub
- Explain new feature briefly in "Feature idea summary" section
- Provide a clear and concise description of what you expect to happen.
--->
##### Feature idea summary
##### Expected behavior
|
non_test
|
copy old backends configuration to a new exporting engine configuration file when creating a feature request please verify first that your issue is not already reported on github explain new feature briefly in feature idea summary section provide a clear and concise description of what you expect to happen feature idea summary expected behavior
| 0
|
406,122
| 11,887,106,834
|
IssuesEvent
|
2020-03-28 00:10:54
|
eclipse-ee4j/glassfish
|
https://api.github.com/repos/eclipse-ee4j/glassfish
|
closed
|
cluster glassfish instance
|
Component: OSGi Priority: Major Stale Type: Task
|
How can I solve the following problem? What does it mean?
[2016-02-05T16:12:27.897+0100] [glassfish 4.1] [SEVERE] [] [] [tid: _ThreadID=122 _ThreadName=Thread-9] [timeMillis: 1454685147897] [levelValue: 1000] [[
gogo: FileNotFoundException: /home/glassfish/glassfish4/glassfish/nodes/localhost-domain1/Coo1/config/noop=true (File o directory non esistente)]]
zito.2016
[2016-02-05T16:12:27.925+0100] [glassfish 4.1] [SEVERE] [] [] [tid: _ThreadID=122 _ThreadName=Thread-9] [timeMillis: 1454685147925] [levelValue: 1000] [[
java.io.FileNotFoundException: /home/glassfish/glassfish4/glassfish/nodes/localhost-domain1/Coo1/config/noop=true (File o directory non esistente)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at org.apache.felix.gogo.shell.Shell.readScript(Shell.java:218)
at org.apache.felix.gogo.shell.Shell.gosh(Shell.java:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.felix.gogo.runtime.Reflective.method(Reflective.java:136)
at org.apache.felix.gogo.runtime.CommandProxy.execute(CommandProxy.java:82)
at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:469)
at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:395)
at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:183)
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:120)
at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:89)
at org.apache.felix.gogo.shell.Activator.run(Activator.java:75)
at java.lang.Thread.run(Thread.java:745)]]
#### Affected Versions
[4.1]
|
1.0
|
cluster glassfish instance - How can I solve the following problem? What does it mean?
[2016-02-05T16:12:27.897+0100] [glassfish 4.1] [SEVERE] [] [] [tid: _ThreadID=122 _ThreadName=Thread-9] [timeMillis: 1454685147897] [levelValue: 1000] [[
gogo: FileNotFoundException: /home/glassfish/glassfish4/glassfish/nodes/localhost-domain1/Coo1/config/noop=true (File o directory non esistente)]]
zito.2016
[2016-02-05T16:12:27.925+0100] [glassfish 4.1] [SEVERE] [] [] [tid: _ThreadID=122 _ThreadName=Thread-9] [timeMillis: 1454685147925] [levelValue: 1000] [[
java.io.FileNotFoundException: /home/glassfish/glassfish4/glassfish/nodes/localhost-domain1/Coo1/config/noop=true (File o directory non esistente)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at org.apache.felix.gogo.shell.Shell.readScript(Shell.java:218)
at org.apache.felix.gogo.shell.Shell.gosh(Shell.java:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.felix.gogo.runtime.Reflective.method(Reflective.java:136)
at org.apache.felix.gogo.runtime.CommandProxy.execute(CommandProxy.java:82)
at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:469)
at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:395)
at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:183)
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:120)
at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:89)
at org.apache.felix.gogo.shell.Activator.run(Activator.java:75)
at java.lang.Thread.run(Thread.java:745)]]
#### Affected Versions
[4.1]
|
non_test
|
cluster glassfish instance how can i solve the following problem what does it mean gogo filenotfoundexception home glassfish glassfish nodes localhost config noop true file o directory non esistente zito java io filenotfoundexception home glassfish glassfish nodes localhost config noop true file o directory non esistente at java io fileinputstream native method at java io fileinputstream open fileinputstream java at java io fileinputstream fileinputstream java at java io fileinputstream fileinputstream java at sun net at sun net at org apache felix gogo shell shell readscript shell java at org apache felix gogo shell shell gosh shell java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache felix gogo runtime reflective method reflective java at org apache felix gogo runtime commandproxy execute commandproxy java at org apache felix gogo runtime closure executecmd closure java at org apache felix gogo runtime closure executestatement closure java at org apache felix gogo runtime pipe run pipe java at org apache felix gogo runtime closure execute closure java at org apache felix gogo runtime closure execute closure java at org apache felix gogo runtime commandsessionimpl execute commandsessionimpl java at org apache felix gogo shell activator run activator java at java lang thread run thread java affected versions
| 0
|
25,196
| 4,148,880,404
|
IssuesEvent
|
2016-06-15 12:45:59
|
Gapminder/ddf-validation
|
https://api.github.com/repos/Gapminder/ddf-validation
|
closed
|
INCORRECT_IDENTIFIER rule providing
|
effort1: medium (half-day) priority1: urgent status: done (tested) type: document type: enhancement type: feature
|
* **Rule name:** Should be filled if this issue depends on particular rule especially during new rule creation. This name should be a valid JS variable name and should be used for connecting with code.
`INCORRECT_IDENTIFIER`
* **Rule test folder:** Should be filled if type of Request is `rule`. Needed for documentation auto generation.
`test/fixtures/rules-cases/incorrect-identifier`
* **Rule description:** Should be filled if type of Request is `rule`. Needed for documentation auto generation.
Entity identifiers and concept identifiers can only contain lowercase alphanumeric characters and underscores.
* **Examples of correct data:**
`ddf--concepts.csv`
```
concept,concept_type,domain,name,drill_up
name,string,,,
geo,entity_domain,,,
domain,string,,Domain,
country,entity_set,geo,Country,
capital,entity_set,geo,Capital,
year,time,,year,
```
`ddf--entities--geo--country.csv`
```
geo,name,lat,lng,is--region,is--country,is--capital
and,Andorra,,,0,1,0
afg,Afghanistan,,,0,1,0
```
* **Examples of incorrect data:**
`ddf--concepts.csv`
```
concept,concept_type,domain,name,drill_up
name,string,,,
geo,entity_domain,,,
dom*ain,string,,Domain,
Country,entity_set,geo,Country,
capital,entity_set,geo,Capital,
year-%,time,,year,
```
`ddf--entities--geo--country.csv`
```
geo,name,lat,lng,is--region,is--country,is--capital
An-d,Andorra,,,0,1,0
$&*,Afghanistan,,,0,1,0
```
* **Scenarios** Should be filled if type of Request is `rule`.
```
when dataset is correct
any issue should NOT be found for this rule
```
```
when dataset is NOT correct
issues in accordance with wrong datapoint records quantity should be detected for this rule
output data for any issue should be expected
```
* **Output data format** Additional data that depends on particular issue type. Should be filled if type of Request is `rule`.
wrong identifier value
|
1.0
|
INCORRECT_IDENTIFIER rule providing - * **Rule name:** Should be filled if this issue depends on particular rule especially during new rule creation. This name should be a valid JS variable name and should be used for connecting with code.
`INCORRECT_IDENTIFIER`
* **Rule test folder:** Should be filled if type of Request is `rule`. Needed for documentation auto generation.
`test/fixtures/rules-cases/incorrect-identifier`
* **Rule description:** Should be filled if type of Request is `rule`. Needed for documentation auto generation.
Entity identifiers and concept identifiers can only contain lowercase alphanumeric characters and underscores.
* **Examples of correct data:**
`ddf--concepts.csv`
```
concept,concept_type,domain,name,drill_up
name,string,,,
geo,entity_domain,,,
domain,string,,Domain,
country,entity_set,geo,Country,
capital,entity_set,geo,Capital,
year,time,,year,
```
`ddf--entities--geo--country.csv`
```
geo,name,lat,lng,is--region,is--country,is--capital
and,Andorra,,,0,1,0
afg,Afghanistan,,,0,1,0
```
* **Examples of incorrect data:**
`ddf--concepts.csv`
```
concept,concept_type,domain,name,drill_up
name,string,,,
geo,entity_domain,,,
dom*ain,string,,Domain,
Country,entity_set,geo,Country,
capital,entity_set,geo,Capital,
year-%,time,,year,
```
`ddf--entities--geo--country.csv`
```
geo,name,lat,lng,is--region,is--country,is--capital
An-d,Andorra,,,0,1,0
$&*,Afghanistan,,,0,1,0
```
* **Scenarios** Should be filled if type of Request is `rule`.
```
when dataset is correct
any issue should NOT be found for this rule
```
```
when dataset is NOT correct
issues in accordance with wrong datapoint records quantity should be detected for this rule
output data for any issue should be expected
```
* **Output data format** Additional data that depends on particular issue type. Should be filled if type of Request is `rule`.
wrong identifier value
|
test
|
incorrect identifier rule providing rule name should be filled if this issue depends on particular rule especially during new rule creation this name should be a valid js variable name and should be used for connecting with code incorrect identifier rule test folder should be filled if type of request is rule needed for documentation auto generation test fixtures rules cases incorrect identifier rule description should be filled if type of request is rule needed for documentation auto generation entity identifiers and concept identifiers can only contain lowercase alphanumeric characters and underscores examples of correct data ddf concepts csv concept concept type domain name drill up name string geo entity domain domain string domain country entity set geo country capital entity set geo capital year time year ddf entities geo country csv geo name lat lng is region is country is capital and andorra afg afghanistan examples of incorrect data ddf concepts csv concept concept type domain name drill up name string geo entity domain dom ain string domain country entity set geo country capital entity set geo capital year time year ddf entities geo country csv geo name lat lng is region is country is capital an d andorra afghanistan scenarios should be filled if type of request is rule when dataset is correct any issue should not be found for this rule when dataset is not correct issues in accordance with wrong datapoint records quantity should be detected for this rule output data for any issue should be expected output data format additional data that depends on particular issue type should be filled if type of request is rule wrong identifier value
| 1
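The rule body above defines valid identifiers as lowercase alphanumeric characters and underscores only. As a sketch of that check (the function name and regex below are illustrative, not taken from the validator's actual code), it reduces to a single regular expression:

```python
import re

# Valid DDF entity/concept identifiers per the rule text above:
# lowercase letters, digits, and underscores only. Illustrative, not the
# validator's real implementation.
IDENTIFIER_RE = re.compile(r"^[a-z0-9_]+$")

def is_valid_identifier(value: str) -> bool:
    """Return True if `value` satisfies the INCORRECT_IDENTIFIER rule."""
    return IDENTIFIER_RE.match(value) is not None
```

Against the examples above, `geo`, `country`, `and`, and `afg` pass, while `dom*ain`, `Country`, `year-%`, `An-d`, and `$&*` would each be flagged.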
|
253,176
| 21,659,445,313
|
IssuesEvent
|
2022-05-06 17:31:11
|
dagster-io/dagster
|
https://api.github.com/repos/dagster-io/dagster
|
closed
|
Hard to visually align config keys in Dagit Playground
|
config-editor usability page:job-launchpad flow:dev+test dagit
|
### Problem to solve
The config editor in the Dagit Playground shows spaces as bullets. For long configs, I find the bullets distracting, and not useful for lining up config keys. I often make indentation errors because it is really hard to see where things line up.
### Intended users
* Pipeline Developer
### User experience goal
It should be visually clear how config keys are indented and line up with each other. This is especially important for YAML where space based alignment matters.
### Proposal
I think the Playground should show vertical indentation guides for alignment as a default. Visualizing spaces as bullets can be an option.
**Current State (spaces visualized as bullets)**:

I was trying to add the `normalize_column_names` key on line 35, but had a really hard time determining if it was on the same level as the `read` key on line 18.
**Proposed State (indentation guides)**:
Here is a screenshot of Nova's editor with rainbow indentation guides, as an example.

The indentation guides make it visually clear that the `normalize_column_names` key is aligned with `read`.
|
1.0
|
Hard to visually align config keys in Dagit Playground - ### Problem to solve
The config editor in the Dagit Playground shows spaces as bullets. For long configs, I find the bullets distracting, and not useful for lining up config keys. I often make indentation errors because it is really hard to see where things line up.
### Intended users
* Pipeline Developer
### User experience goal
It should be visually clear how config keys are indented and line up with each other. This is especially important for YAML where space based alignment matters.
### Proposal
I think the Playground should show vertical indentation guides for alignment as a default. Visualizing spaces as bullets can be an option.
**Current State (spaces visualized as bullets)**:

I was trying to add the `normalize_column_names` key on line 35, but had a really hard time determining if it was on the same level as the `read` key on line 18.
**Proposed State (indentation guides)**:
Here is a screenshot of Nova's editor with rainbow indentation guides, as an example.

The indentation guides make it visually clear that the `normalize_column_names` key is aligned with `read`.
|
test
|
hard to visually align config keys in dagit playground problem to solve the config editor in the dagit playground shows spaces as bullets for long configs i find the bullets distracting and not useful for lining up config keys i often make indentation errors because it is really hard to see where things line up intended users pipeline developer user experience goal it should be visually clear how config keys are indented and line up with each other this is especially important for yaml where space based alignment matters proposal i think the playground should show vertical indentation guides for alignment as a default visualizing spaces as bullets can be an option current state spaces visualized as bullets i was trying to add the normalize column names key on line but had a really hard time determining if it was on the same level as the read key on line proposed state indentation guides here is a screenshot of nova s editor with rainbow indentation guides as an example the indentation guides make it visually clear that the normalize column names key is aligned with read
| 1
|
231,599
| 18,780,790,352
|
IssuesEvent
|
2021-11-08 06:14:49
|
pingcap/ticdc
|
https://api.github.com/repos/pingcap/ticdc
|
opened
|
testRelaySuite.TestHandleEvent
|
component/test
|
### Which jobs are flaking?
unit-test
### Which test(s) are flaking?
```
[2021-11-08T06:04:15.013Z] FAIL: relay_test.go:404: testRelaySuite.TestHandleEvent
[2021-11-08T06:04:15.013Z]
[2021-11-08T06:04:15.013Z] relay_test.go:501:
[2021-11-08T06:04:15.013Z] c.Assert(pos, DeepEquals, binlogPos)
[2021-11-08T06:04:15.013Z] ... obtained mysql.Position = mysql.Position{Name:"mysql-bin.666888", Pos:0x4d2} ("(mysql-bin.666888, 1234)")
[2021-11-08T06:04:15.013Z] ... expected mysql.Position = mysql.Position{Name:"mysql-bin.666888", Pos:0x4} ("(mysql-bin.666888, 4)")
[2021-11-08T06:04:15.013Z]
```
### Jenkins logs or GitHub Actions link
https://ci2.pingcap.net/blue/organizations/jenkins/cdc_ghpr_test/detail/cdc_ghpr_test/11001/pipeline
### Anything else we need to know
- Does this test exist for other branches as well?
- Has there been a high frequency of failure lately?
|
1.0
|
testRelaySuite.TestHandleEvent - ### Which jobs are flaking?
unit-test
### Which test(s) are flaking?
```
[2021-11-08T06:04:15.013Z] FAIL: relay_test.go:404: testRelaySuite.TestHandleEvent
[2021-11-08T06:04:15.013Z]
[2021-11-08T06:04:15.013Z] relay_test.go:501:
[2021-11-08T06:04:15.013Z] c.Assert(pos, DeepEquals, binlogPos)
[2021-11-08T06:04:15.013Z] ... obtained mysql.Position = mysql.Position{Name:"mysql-bin.666888", Pos:0x4d2} ("(mysql-bin.666888, 1234)")
[2021-11-08T06:04:15.013Z] ... expected mysql.Position = mysql.Position{Name:"mysql-bin.666888", Pos:0x4} ("(mysql-bin.666888, 4)")
[2021-11-08T06:04:15.013Z]
```
### Jenkins logs or GitHub Actions link
https://ci2.pingcap.net/blue/organizations/jenkins/cdc_ghpr_test/detail/cdc_ghpr_test/11001/pipeline
### Anything else we need to know
- Does this test exist for other branches as well?
- Has there been a high frequency of failure lately?
|
test
|
testrelaysuite testhandleevent which jobs are flaking unit test which test s are flaking fail relay test go testrelaysuite testhandleevent relay test go c assert pos deepequals binlogpos obtained mysql position mysql position name mysql bin pos mysql bin expected mysql position mysql position name mysql bin pos mysql bin jenkins logs or github actions link anything else we need to know does this test exist for other branches as well has there been a high frequency of failure lately
| 1
|
108,340
| 9,302,024,701
|
IssuesEvent
|
2019-03-24 04:54:19
|
celery/celery
|
https://api.github.com/repos/celery/celery
|
closed
|
Unable to create SSL broker/backend connection to redis
|
Component: Redis Broker Component: Redis Results Backend Priority: Critical Severity: Major Status: Confirmed ✔ Status: Has Testcase ✔
|
I'm having some issues making an SSL/TLS connection to redis for the Celery broker and backend.
I'm using:
Celery: 4.3.0rc2 (master)
Python: 3.6
OS: Mac OS X.
Related dependency versions: kombu (4.4.0), redis-py (3.2.0)
Celery works fine using a non-SSL connection to redis. My redis/SSL setup uses redis behind stunnel and I can successfully connect to and use this deployment via py-redis directly and via other libraries.
It looks like this is related to #3830, however that issue seemed to have been resolved by adding the `broker_use_ssl` option and adding this option doesn't appear to be having any effect here.
To setup SSL, I'm adding the `broker_use_ssl` attribute to configuration parameters as described in the [docs](http://docs.celeryproject.org/en/master/userguide/configuration.html#broker-use-ssl). I'm also adding the same set of parameters for the `redis_backend_use_ssl` [option](http://docs.celeryproject.org/en/master/userguide/configuration.html#broker-use-ssl). I notice there is some information here about adding the parameters in the query string but I've opted to stick with providing the dict of parameters.
```python
broker_url = 'rediss://localhost:6380'
key_file = '/path/to/client.key'
cert_file = '/path/to/client.crt'
ca_file = '/path/to/CAcert.pem'
app = Celery('app', broker=broker_url, backend=broker_url,
broker_use_ssl = {
'keyfile': key_file, 'certfile': cert_file,
'ca_certs': ca_file,
'cert_reqs': ssl.CERT_REQUIRED
},
redis_backend_use_ssl = {
'keyfile': key_file, 'certfile': cert_file,
'ca_certs': ca_file,
'cert_reqs': ssl.CERT_REQUIRED
})
app.connection().connect()
app.send_task('a_task')
```
When I attempt to make and use the connection as shown above, I get an error:
```python
ValueError: A rediss:// URL must have parameter ssl_cert_reqs be CERT_REQUIRED, CERT_OPTIONAL, or CERT_NONE
```
It looks like the `rediss` URL scheme is being picked up but the other parameters are not. Apologies if I'm missing something in the documentation or elsewhere.
Some investigation suggests that the `broker_use_ssl` and `redis_backend_use_ssl` options are not being carried through to the connection settings and, indeed, looking at the [\_\_init\_\_ function](https://github.com/celery/celery/blob/53f82db46f1a110c61a758334ff4388772e91781/celery/app/base.py#L228) for the Celery object, I can't see that these are being used anywhere. Stepping through the code, they don't seem to be present by the time the [`connection` function](https://github.com/celery/celery/blob/53f82db46f1a110c61a758334ff4388772e91781/celery/app/base.py#L784) is reached and `ssl` in the input parameters to the `connection` function is set to `None`.
As a test, I've tried making a couple of additions to the Celery `__init__` function to add the `broker_use_ssl` and `redis_backend_use_ssl` to the configuration:
```python
self.__autoset('broker_use_ssl',
(kwargs['broker_use_ssl'] if 'broker_use_ssl'
in kwargs else None))
self.__autoset('redis_backend_use_ssl',
(kwargs['redis_backend_use_ssl'] if
'redis_backend_use_ssl' in kwargs else None))
```
This adds the provided options to the Celery `conf`. It flags up another issue which is that the SSL parameters used for the backend connection use different names to those used for the broker connection - the documentation says the values are the same as `broker_use_ssl` (although the correct values are shown in the example of using query string parameters). I assume this is one of the things being handled under issue #4812.
A further point that I'm unclear about is that in the `_connection` function, there is a [call](https://github.com/celery/celery/blob/53f82db46f1a110c61a758334ff4388772e91781/celery/app/base.py#L840) to get the `broker_use_ssl` parameters `ssl=self.either('broker_use_ssl', ssl)` and this returns `None`. However, if I manually create an app object and then call `either`, it returns the `broker_use_ssl` parameters correctly (parameters as per example above):
```python
app = Celery('app', broker=broker_url, backend=broker_url,
broker_use_ssl = {
'keyfile': key_file, 'certfile': cert_file,
'ca_certs': ca_file,
'cert_reqs': ssl.CERT_REQUIRED
},
redis_backend_use_ssl = {
'keyfile': key_file, 'certfile': cert_file,
'ca_certs': ca_file,
'cert_reqs': ssl.CERT_REQUIRED
})
app.either('broker_use_ssl', None)
# Returns:
# {'keyfile': '/path/to/client.key',
# 'certfile': '/path/to/client.crt',
# 'ca_certs': '/path/to/CAcert.pem',
# 'cert_reqs': <VerifyMode.CERT_REQUIRED: 2>}
```
So, I'm unclear if I'm missing something and taking completely the wrong approach here or whether there are some issues with the SSL implementation in 4.3.0rc2.
Thanks.
|
1.0
|
Unable to create SSL broker/backend connection to redis - I'm having some issues making an SSL/TLS connection to redis for the Celery broker and backend.
I'm using:
Celery: 4.3.0rc2 (master)
Python: 3.6
OS: Mac OS X.
Related dependency versions: kombu (4.4.0), redis-py (3.2.0)
Celery works fine using a non-SSL connection to redis. My redis/SSL setup uses redis behind stunnel and I can successfully connect to and use this deployment via py-redis directly and via other libraries.
It looks like this is related to #3830, however that issue seemed to have been resolved by adding the `broker_use_ssl` option and adding this option doesn't appear to be having any effect here.
To setup SSL, I'm adding the `broker_use_ssl` attribute to configuration parameters as described in the [docs](http://docs.celeryproject.org/en/master/userguide/configuration.html#broker-use-ssl). I'm also adding the same set of parameters for the `redis_backend_use_ssl` [option](http://docs.celeryproject.org/en/master/userguide/configuration.html#broker-use-ssl). I notice there is some information here about adding the parameters in the query string but I've opted to stick with providing the dict of parameters.
```python
broker_url = 'rediss://localhost:6380'
key_file = '/path/to/client.key'
cert_file = '/path/to/client.crt'
ca_file = '/path/to/CAcert.pem'
app = Celery('app', broker=broker_url, backend=broker_url,
broker_use_ssl = {
'keyfile': key_file, 'certfile': cert_file,
'ca_certs': ca_file,
'cert_reqs': ssl.CERT_REQUIRED
},
redis_backend_use_ssl = {
'keyfile': key_file, 'certfile': cert_file,
'ca_certs': ca_file,
'cert_reqs': ssl.CERT_REQUIRED
})
app.connection().connect()
app.send_task('a_task')
```
When I attempt to make and use the connection as shown above, I get an error:
```python
ValueError: A rediss:// URL must have parameter ssl_cert_reqs be CERT_REQUIRED, CERT_OPTIONAL, or CERT_NONE
```
It looks like the `rediss` URL scheme is being picked up but the other parameters are not. Apologies if I'm missing something in the documentation or elsewhere.
Some investigation suggests that the `broker_use_ssl` and `redis_backend_use_ssl` options are not being carried through to the connection settings and, indeed, looking at the [\_\_init\_\_ function](https://github.com/celery/celery/blob/53f82db46f1a110c61a758334ff4388772e91781/celery/app/base.py#L228) for the Celery object, I can't see that these are being used anywhere. Stepping through the code, they don't seem to be present by the time the [`connection` function](https://github.com/celery/celery/blob/53f82db46f1a110c61a758334ff4388772e91781/celery/app/base.py#L784) is reached and `ssl` in the input parameters to the `connection` function is set to `None`.
As a test, I've tried making a couple of additions to the Celery `__init__` function to add the `broker_use_ssl` and `redis_backend_use_ssl` to the configuration:
```python
self.__autoset('broker_use_ssl',
(kwargs['broker_use_ssl'] if 'broker_use_ssl'
in kwargs else None))
self.__autoset('redis_backend_use_ssl',
(kwargs['redis_backend_use_ssl'] if
'redis_backend_use_ssl' in kwargs else None))
```
This adds the provided options to the Celery `conf`. It flags up another issue which is that the SSL parameters used for the backend connection use different names to those used for the broker connection - the documentation says the values are the same as `broker_use_ssl` (although the correct values are shown in the example of using query string parameters). I assume this is one of the things being handled under issue #4812.
A further point that I'm unclear about is that in the `_connection` function, there is a [call](https://github.com/celery/celery/blob/53f82db46f1a110c61a758334ff4388772e91781/celery/app/base.py#L840) to get the `broker_use_ssl` parameters `ssl=self.either('broker_use_ssl', ssl)` and this returns `None`. However, if I manually create an app object and then call `either`, it returns the `broker_use_ssl` parameters correctly (parameters as per example above):
```python
app = Celery('app', broker=broker_url, backend=broker_url,
broker_use_ssl = {
'keyfile': key_file, 'certfile': cert_file,
'ca_certs': ca_file,
'cert_reqs': ssl.CERT_REQUIRED
},
redis_backend_use_ssl = {
'keyfile': key_file, 'certfile': cert_file,
'ca_certs': ca_file,
'cert_reqs': ssl.CERT_REQUIRED
})
app.either('broker_use_ssl', None)
# Returns:
# {'keyfile': '/path/to/client.key',
# 'certfile': '/path/to/client.crt',
# 'ca_certs': '/path/to/CAcert.pem',
# 'cert_reqs': <VerifyMode.CERT_REQUIRED: 2>}
```
So, I'm unclear if I'm missing something and taking completely the wrong approach here or whether there are some issues with the SSL implementation in 4.3.0rc2.
Thanks.
|
test
|
unable to create ssl broker backend connection to redis i m having some issues making an ssl tls connection to redis for the celery broker and backend i m using celery master python os mac os x related dependency versions kombu redis py celery works fine using a non ssl connection to redis my redis ssl setup uses redis behind stunnel and i can successfully connect to and use this deployment via py redis directly and via other libraries it looks like this is related to however that issue seemed to have been resolved by adding the broker use ssl option and adding this option doesn t appear to be having any effect here to setup ssl i m adding the broker use ssl attribute to configuration parameters as described in the i m also adding the same set of parameters for the redis backend use ssl i notice there is some information here about adding the parameters in the query string but i ve opted to stick with providing the dict of parameters python broker url rediss localhost key file path to client key cert file path to client crt ca file path to cacert pem app celery app broker broker url backend broker url broker use ssl keyfile key file certfile cert file ca certs ca file cert reqs ssl cert required redis backend use ssl keyfile key file certfile cert file ca certs ca file cert reqs ssl cert required app connection connect app send task a task when i attempt to make and use the connection as shown above i get an error python valueerror a rediss url must have parameter ssl cert reqs be cert required cert optional or cert none it looks like the rediss url scheme is being picked up but the other parameters are not apologies if i m missing something in the documentation or elsewhere some investigation suggests that the broker use ssl and redis backend use ssl options are not being carried through to the connection settings and indeed looking at the for the celery object i can t see that these are being used anywhere stepping through the code they don t seem to be present 
by the time the is reached and ssl in the input parameters to the connection function is set to none as a test i ve tried making a couple of additions to the celery init function to add the broker use ssl and redis backed use ssl to the configuration python self autoset broker use ssl kwargs if broker use ssl in kwargs else none self autoset redis backend use ssl kwargs if redis backend use ssl in kwargs else none this adds the provided options to the celery conf it flags up another issue which is that the ssl parameters used for the backend connection use different names to those used for the broker connection the documentation says the values are the same as broker use ssl although the correct values are shown in the example of using query string parameters i assume this is one of the things being handled under issue a further point that i m unclear about is that in the connection function there is a to get the broker use ssl parameters ssl self either broker use ssl ssl and this returns none however if i manually create an app object and then call either it returns the broker use ssl parameters correctly parameters as per example above python app celery app broker broker url backend broker url broker use ssl keyfile key file certfile cert file ca certs ca file cert reqs ssl cert required redis backend use ssl keyfile key file certfile cert file ca certs ca file cert reqs ssl cert required app either broker use ssl none returns keyfile path to client key certfile path to client crt ca certs path to cacert pem cert reqs so i m unclear if i m missing something and taking completely the wrong approach here or whether there are some issues with the ssl implementation in thanks
| 1
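The error message quoted in the issue (`ssl_cert_reqs` must be `CERT_REQUIRED`, `CERT_OPTIONAL`, or `CERT_NONE`) points at the workaround the docs mention: passing the SSL options as query-string parameters on the `rediss://` URL rather than via `broker_use_ssl`. The sketch below only builds such a URL; the parameter names are the ones named in the docs and error message quoted above, the helper function is hypothetical, and the file paths are placeholders.

```python
from urllib.parse import urlencode

def build_rediss_url(host: str, port: int, keyfile: str, certfile: str,
                     ca_certs: str, cert_reqs: str = "CERT_REQUIRED") -> str:
    """Build a rediss:// broker/backend URL carrying SSL options as
    query parameters (an assumed workaround for the bug described above)."""
    params = urlencode({
        "ssl_cert_reqs": cert_reqs,
        "ssl_keyfile": keyfile,
        "ssl_certfile": certfile,
        "ssl_ca_certs": ca_certs,
    })
    return f"rediss://{host}:{port}?{params}"
```

The resulting URL can then be passed as both `broker` and `backend`, sidestepping the dropped `broker_use_ssl`/`redis_backend_use_ssl` keyword arguments.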
|
191,355
| 14,594,037,049
|
IssuesEvent
|
2020-12-20 02:51:18
|
github-vet/rangeloop-pointer-findings
|
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
|
closed
|
rootfs/node-fencing: vendor/k8s.io/kubernetes/pkg/controller/node/nodecontroller_test.go; 3 LoC
|
fresh test tiny vendored
|
Found a possible issue in [rootfs/node-fencing](https://www.github.com/rootfs/node-fencing) at [vendor/k8s.io/kubernetes/pkg/controller/node/nodecontroller_test.go](https://github.com/rootfs/node-fencing/blob/b78deb66758bdffcf65efe25d2894b6a6343543c/vendor/k8s.io/kubernetes/pkg/controller/node/nodecontroller_test.go#L573-L575)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to ds at line 574 may start a goroutine
[Click here to see the code in its original context.](https://github.com/rootfs/node-fencing/blob/b78deb66758bdffcf65efe25d2894b6a6343543c/vendor/k8s.io/kubernetes/pkg/controller/node/nodecontroller_test.go#L573-L575)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, ds := range item.daemonSets {
	nodeController.daemonSetInformer.Informer().GetStore().Add(&ds)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: b78deb66758bdffcf65efe25d2894b6a6343543c
|
1.0
|
rootfs/node-fencing: vendor/k8s.io/kubernetes/pkg/controller/node/nodecontroller_test.go; 3 LoC -
Found a possible issue in [rootfs/node-fencing](https://www.github.com/rootfs/node-fencing) at [vendor/k8s.io/kubernetes/pkg/controller/node/nodecontroller_test.go](https://github.com/rootfs/node-fencing/blob/b78deb66758bdffcf65efe25d2894b6a6343543c/vendor/k8s.io/kubernetes/pkg/controller/node/nodecontroller_test.go#L573-L575)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to ds at line 574 may start a goroutine
[Click here to see the code in its original context.](https://github.com/rootfs/node-fencing/blob/b78deb66758bdffcf65efe25d2894b6a6343543c/vendor/k8s.io/kubernetes/pkg/controller/node/nodecontroller_test.go#L573-L575)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, ds := range item.daemonSets {
	nodeController.daemonSetInformer.Informer().GetStore().Add(&ds)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: b78deb66758bdffcf65efe25d2894b6a6343543c
|
test
|
rootfs node fencing vendor io kubernetes pkg controller node nodecontroller test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to ds at line may start a goroutine click here to show the line s of go which triggered the analyzer go for ds range item daemonsets nodecontroller daemonsetinformer informer getstore add ds leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 1
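The analyzer message above concerns taking the address of the range variable `ds`, so every stored pointer may alias a single variable. Python has a closely related pitfall when closures capture a loop variable; the sketch below is a Python analogue of the same bug class (not the Go code above), and the fix of binding the value per iteration mirrors the usual Go fix of copying the loop variable inside the loop.

```python
def make_getters_buggy(items):
    # Each lambda closes over the *variable* i, not its value at that
    # iteration -- analogous to storing &ds for a Go range variable.
    return [lambda: i for i in items]

def make_getters_fixed(items):
    # Binding i as a default argument snapshots the value per iteration,
    # much like copying the loop variable (`ds := ds`) inside a Go loop.
    return [lambda i=i: i for i in items]
```

Calling the buggy getters after the loop finishes yields the last value for every element, while the fixed variant preserves one value per iteration.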
|
163,751
| 25,866,937,801
|
IssuesEvent
|
2022-12-13 21:48:00
|
EscolaDeSaudePublica/DesignLab
|
https://api.github.com/repos/EscolaDeSaudePublica/DesignLab
|
closed
|
Content Update | Felicilab site | Manifesto
|
Site PROJ: Felicilab Prioridade Design: Alta
|
## **Goal**
**As a** designer
**I want** to update the new Felicilab site with content developed by the narratives team
**So that** it can be launched before the end of the electoral blackout period
## **Context**
The new Felicilab site is in its final phase of development. New sections have been planned, and the content for each of them is being developed by the narratives team. As each piece of content is delivered, the corresponding section of the site needs to be updated and published.
## **Scope**
Does the Felicilab Manifesto already exist? Does it make sense to produce it for this first version? Which text would take on this role of presenting Felicilab, and could it perhaps be repeated on the Home page, next to the Felicilab video?
|
1.0
|
Content Update | Felicilab site | Manifesto - ## **Goal**
**As a** designer
**I want** to update the new Felicilab site with content developed by the narratives team
**So that** it can be launched before the end of the electoral blackout period
## **Context**
The new Felicilab site is in its final phase of development. New sections have been planned, and the content for each of them is being developed by the narratives team. As each piece of content is delivered, the corresponding section of the site needs to be updated and published.
## **Scope**
Does the Felicilab Manifesto already exist? Does it make sense to produce it for this first version? Which text would take on this role of presenting Felicilab, and could it perhaps be repeated on the Home page, next to the Felicilab video?
|
non_test
|
content update felicilab site manifesto goal as a designer i want to update the new felicilab site with content developed by the narratives team so that it can be launched before the end of the electoral blackout period context the new felicilab site is in its final phase of development new sections have been planned and the content for each of them is being developed by the narratives team as each piece of content is delivered the corresponding section of the site needs to be updated and published scope does the felicilab manifesto already exist does it make sense to produce it for this first version which text would take on this role of presenting felicilab and could it perhaps be repeated on the home page next to the felicilab video
| 0
|
242,481
| 20,251,133,479
|
IssuesEvent
|
2022-02-14 18:00:08
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
sql/pgwire: TestCancelRequest failed
|
C-test-failure O-robot branch-master T-sql-experience
|
sql/pgwire.TestCancelRequest [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4369940&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4369940&tab=artifacts#/) on master @ [bbb473c8f304ac20fec51ff0a0d04e128383bcf6](https://github.com/cockroachdb/cockroach/commits/bbb473c8f304ac20fec51ff0a0d04e128383bcf6):
```
=== RUN TestCancelRequest
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/84da2dc75b6a4f34c16c3c89ac1b1c42/logTestCancelRequest1747917159
test_log_scope.go:80: use -show-logs to present logs inline
=== CONT TestCancelRequest
pgwire_test.go:1957: -- test log scope end --
--- FAIL: TestCancelRequest (0.58s)
=== RUN TestCancelRequest/insecure=true
pgwire_test.go:1954: expected 1 cancel request, got 0
--- FAIL: TestCancelRequest/insecure=true (0.21s)
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestCancelRequest.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
1.0
|
sql/pgwire: TestCancelRequest failed - sql/pgwire.TestCancelRequest [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4369940&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4369940&tab=artifacts#/) on master @ [bbb473c8f304ac20fec51ff0a0d04e128383bcf6](https://github.com/cockroachdb/cockroach/commits/bbb473c8f304ac20fec51ff0a0d04e128383bcf6):
```
=== RUN TestCancelRequest
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/84da2dc75b6a4f34c16c3c89ac1b1c42/logTestCancelRequest1747917159
test_log_scope.go:80: use -show-logs to present logs inline
=== CONT TestCancelRequest
pgwire_test.go:1957: -- test log scope end --
--- FAIL: TestCancelRequest (0.58s)
=== RUN TestCancelRequest/insecure=true
pgwire_test.go:1954: expected 1 cancel request, got 0
--- FAIL: TestCancelRequest/insecure=true (0.21s)
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestCancelRequest.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
sql pgwire testcancelrequest failed sql pgwire testcancelrequest with on master run testcancelrequest test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline cont testcancelrequest pgwire test go test log scope end fail testcancelrequest run testcancelrequest insecure true pgwire test go expected cancel request got fail testcancelrequest insecure true help see also parameters in this failure tags bazel gss cc cockroachdb sql experience
| 1
|
3,578
| 4,417,039,966
|
IssuesEvent
|
2016-08-15 01:36:54
|
arm-hpc/ohpc
|
https://api.github.com/repos/arm-hpc/ohpc
|
closed
|
Script to adjust _service files
|
infrastructure
|
_service files have hardcoded branch and GitHub URLs in them for each component. We need an easy mechanism to update these to point at our GitHub fork and appropriate branch files so we can make changes.
|
1.0
|
Script to adjust _service files - _service files have hardcoded branch and GitHub URLs in them for each component. We need an easy mechanism to update these to point at our GitHub fork and appropriate branch files so we can make changes.
|
non_test
|
script to adjust service files service files have hardcoded branch and github urls in them for each component we need a easy mechanism to update these to point at our github fork and appropriate branch files so we can make changes
| 0
|
443,958
| 12,804,122,092
|
IssuesEvent
|
2020-07-03 03:20:25
|
kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines
|
closed
|
Bigquery component should support only export to table
|
area/components kind/bug lifecycle/stale priority/p2
|
Today it requires output to a GCS path. We need to allow output to only a table.
|
1.0
|
Bigquery component should support only export to table - Today it requires output to a GCS path. We need to allow output to only a table.
|
non_test
|
bigquery component should support only export to table it requires to output to gcs path today we need to allow only output to a table
| 0
|
576,760
| 17,093,938,161
|
IssuesEvent
|
2021-07-08 21:45:28
|
Automattic/woocommerce-payments
|
https://api.github.com/repos/Automattic/woocommerce-payments
|
closed
|
Rename "digital wallets" for consistency
|
component: grouped-settings priority: low
|
## Description
The "Digital wallets" section has been renamed to "Express checkouts" in the copy, but not on the codebase.
Rename its references to "payment request" where possible.
To recap, as examples:
- Component names like `DigitalWallets` should be renamed to `PaymentRequest`
- Directory names like `digital-wallets` should be renamed to `payment-request`
- URL/attribute names like `digital_wallets` should be renamed to `payment_request`
- Action types like `updateDigitalWallets*` should be renamed to `updatePaymentRequest*`
- Setting names like `digital_wallets_button_type`/`digital_wallets_button_size`/`digital_wallets_button_theme`/`is_digital_wallets_enabled`/`digital_wallets_enabled_locations` should be renamed to be closer to the backend (`payment_request_button_locations`/`payment_request_button_type`/`payment_request_button_theme`/etc.)
There should be no need for a migration, since all the settings are already saved in the backend with names like `payment_request*`.
The team has settled on using the same lingo as Stripe's "payment request", rather than the copy's naming.
|
1.0
|
Rename "digital wallets" for consistency - ## Description
The "Digital wallets" section has been renamed to "Express checkouts" in the copy, but not on the codebase.
Rename its references to "payment request" where possible.
To recap, as examples:
- Component names like `DigitalWallets` should be renamed to `PaymentRequest`
- Directory names like `digital-wallets` should be renamed to `payment-request`
- URL/attribute names like `digital_wallets` should be renamed to `payment_request`
- Action types like `updateDigitalWallets*` should be renamed to `updatePaymentRequest*`
- Setting names like `digital_wallets_button_type`/`digital_wallets_button_size`/`digital_wallets_button_theme`/`is_digital_wallets_enabled`/`digital_wallets_enabled_locations` should be renamed to be closer to the backend (`payment_request_button_locations`/`payment_request_button_type`/`payment_request_button_theme`/etc.)
There should be no need for a migration, since all the settings are already saved in the backend with names like `payment_request*`.
The team has settled on using the same lingo as Stripe's "payment request", rather than the copy's naming.
|
non_test
|
rename digital wallets for consistency description the digital wallets section has been renamed to express checkouts in the copy but not on the codebase rename its references to payment request where possible to recap as examples component names like digitalwallets should be renamed to paymentrequest directory names like digital wallets should be renamed to payment request url attribute names like digital wallets should be renamed to payment request action types like updatedigitalwallets should be renamed to updatepaymentrequest setting names like digital wallets button type digital wallets button size digital wallets button theme is digital wallets enabled digital wallets enabled locations should be renamed to be closer to the backend payment request button locations payment request button type payment request button theme etc there should be no need for a migration since all the settings are already saved in the backend with names like payment request the team has settled to leverage the same lingo as stripe s payment request rather than using the copy s naming
| 0
|
287,249
| 8,806,086,766
|
IssuesEvent
|
2018-12-27 01:00:30
|
HealthRex/CDSS
|
https://api.github.com/repos/HealthRex/CDSS
|
closed
|
IRB for identified data-mining (particularly patient notes, etc.)
|
Priority - 2 Medium
|
We currently mainly work with deidentified inpatient data. Expanding to other contexts that include things like patient notes requires IRB approval for access to identified data
|
1.0
|
IRB for identified data-mining (particularly patient notes, etc.) - We currently mainly work with deidentified inpatient data. Expanding to other contexts that include things like patient notes requires IRB approval for access to identified data
|
non_test
|
irb for identified data mining particularly patient notes etc we currently mainly work with deidentified inpatient data expanding to other contexts that include things like patient notes requires irb approval for access to identified data
| 0
|
200,769
| 15,148,401,314
|
IssuesEvent
|
2021-02-11 10:33:34
|
eclipse/openj9
|
https://api.github.com/repos/eclipse/openj9
|
opened
|
LangLoadTest_5m_0 crash vmState=0x00000000 Compiled_method=java/lang/invoke/FilterReturnHandle.invokeExact_thunkArchetype_X(I)I
|
test failure
|
LangLoadTest_5m_0
variation: Mode150
JVM_OPTIONS: -XX:+UseCompressedOops
```
07:20:49 ****************************** MACHINE INFO ******************************
07:20:49 uname : AIX p159a02 1 7 00CCB1C24C00 powerpc AIX
07:20:49 cpuCores :
07:20:49 cat_64: /proc/cpuinfo: A file or directory in the path name does not exist.
07:20:49 0
07:20:49 sysArch : 00CCB1C24C00
07:20:49 procArch : powerpc
07:20:49 sysOS : AIX
```
Failing build:
```
08:02:06 openjdk version "1.8.0_292-internal"
08:02:06 OpenJDK Runtime Environment (build 1.8.0_292-internal-202102102307-b01)
08:02:06 Eclipse OpenJ9 VM (build master-4f0a66b51, JRE 1.8.0 AIX ppc64-64-Bit Compressed References 20210210_969 (JIT enabled, AOT enabled)
08:02:06 OpenJ9 - 4f0a66b51
08:02:06 OMR - 0ef62b1a9
08:02:06 JCL - c94b45c5 based on jdk8u292-b01)
```
Failing job link: https://ci.adoptopenjdk.net/job/Test_openjdk8_j9_extended.system_ppc64_aix/144/consoleFull
Test output at point of failure:
```
08:01:23 LLT 03:01:20.338 - Completed 60.0%. Number of tests started=84029 (+8911)
08:01:41 LLT stderr Unhandled exception
08:01:41 LLT stderr Type=Segmentation error vmState=0x00000000
08:01:41 LLT stderr J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000033
08:01:41 LLT stderr Handler1=09001000A0D1F348 Handler2=09001000A0CF80D8
08:01:41 LLT stderr R0=FFFFFFFFFFFFFFFF R1=0000010023527BF0 R2=0000000000000000 R3=00000000801FD1D0
08:01:41 LLT stderr R4=0000000000000000 R5=00000000FFB91320 R6=0000000000000010 R7=0000000000000002
08:01:41 LLT stderr R8=0000000000000002 R9=000000008037F068 R10=0000000000000002 R11=0000000030488738
08:01:41 LLT stderr R12=00000000802A90D8 R13=0000010023530800 R14=000000003048C920 R15=0000000030472900
08:01:41 LLT stderr R16=0000010021B3F978 R17=FFFFFFFFFFFFFFFF R18=0000010023527E40 R19=0000000080000007
08:01:41 LLT stderr R20=09000000072467AC R21=09001000A0C736D8 R22=0000010023527ED0 R23=0000000080072C18
08:01:41 LLT stderr R24=00000000FF943DA0 R25=00000000FFB91320 R26=00000000FFA4BBB0 R27=0000000000007FFF
08:01:41 LLT stderr R28=00000000FF943DA0 R29=00000000FFA4BBB0 R30=00000000802A9EE8 R31=00000000801FD1D0
08:01:41 LLT stderr IAR=0000010010A0D954 LR=0000010011583F5C MSR=A00000000200D032 CTR=0000010010A0D900
08:01:41 LLT stderr CR=440000844000C400 FPSCR=BA00210000000000 XER=4000C400BA002100
08:01:41 LLT stderr FPR0 40dfffc000000000 (f: 0.000000, d: 3.276700e+04)
08:01:41 LLT stderr FPR1 40dfffc000000000 (f: 0.000000, d: 3.276700e+04)
08:01:41 LLT stderr FPR2 40dfffc000000000 (f: 0.000000, d: 3.276700e+04)
08:01:41 LLT stderr FPR3 43300000000002d0 (f: 720.000000, d: 4.503600e+15)
08:01:41 LLT stderr FPR4 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR5 4330000000000000 (f: 0.000000, d: 4.503600e+15)
08:01:41 LLT stderr FPR6 4086800000000000 (f: 0.000000, d: 7.200000e+02)
08:01:41 LLT stderr FPR7 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR8 4330000000000138 (f: 312.000000, d: 4.503600e+15)
08:01:41 LLT stderr FPR9 402da0ba1bf945c0 (f: 469321152.000000, d: 1.481392e+01)
08:01:41 LLT stderr FPR10 412e848000000000 (f: 0.000000, d: 1.000000e+06)
08:01:41 LLT stderr FPR11 43300000000f4240 (f: 1000000.000000, d: 4.503600e+15)
08:01:41 LLT stderr FPR12 4530000000000000 (f: 0.000000, d: 1.934281e+25)
08:01:41 LLT stderr FPR13 40dfffc000000000 (f: 0.000000, d: 3.276700e+04)
08:01:41 LLT stderr FPR14 3ff0000000000000 (f: 0.000000, d: 1.000000e+00)
08:01:41 LLT stderr FPR15 4010000000000000 (f: 0.000000, d: 4.000000e+00)
08:01:41 LLT stderr FPR16 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR17 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR18 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR19 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR20 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR21 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR22 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR23 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR24 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR25 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR26 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR27 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR28 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR29 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR30 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR31 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr
08:01:41 LLT stderr Compiled_method=java/lang/invoke/FilterReturnHandle.invokeExact_thunkArchetype_X(I)I
08:01:41 LLT stderr Target=2_90_20210210_969 (AIX 7.1)
08:01:41 LLT stderr CPU=ppc64 (16 logical CPUs) (0x200000000 RAM)
08:01:41 LLT stderr ----------- Stack Backtrace -----------
08:01:41 LLT stderr runJavaThread+0x1d4 (0x0900000006FFD638 [libj9vm29.so+0x6e638])
08:01:41 LLT stderr javaProtectedThreadProc+0x11c (0x0900000006F91DA0 [libj9vm29.so+0x2da0])
08:01:41 LLT stderr omrsig_protect+0x488 (0x0900000007278D4C [libj9prt29.so+0x59d4c])
08:01:41 LLT stderr javaThreadProc+0x68 (0x0900000006F91B8C [libj9vm29.so+0x2b8c])
08:01:41 LLT stderr thread_wrapper+0x33c (0x0900000005370760 [libj9thr29.so+0x4760])
08:01:41 LLT stderr _pthread_body+0xf0 (0x0900000000570E14 [libpthread.a+0x3e14])
```
|
1.0
|
LangLoadTest_5m_0 crash vmState=0x00000000 Compiled_method=java/lang/invoke/FilterReturnHandle.invokeExact_thunkArchetype_X(I)I - LangLoadTest_5m_0
variation: Mode150
JVM_OPTIONS: -XX:+UseCompressedOops
```
07:20:49 ****************************** MACHINE INFO ******************************
07:20:49 uname : AIX p159a02 1 7 00CCB1C24C00 powerpc AIX
07:20:49 cpuCores :
07:20:49 cat_64: /proc/cpuinfo: A file or directory in the path name does not exist.
07:20:49 0
07:20:49 sysArch : 00CCB1C24C00
07:20:49 procArch : powerpc
07:20:49 sysOS : AIX
```
Failing build:
```
08:02:06 openjdk version "1.8.0_292-internal"
08:02:06 OpenJDK Runtime Environment (build 1.8.0_292-internal-202102102307-b01)
08:02:06 Eclipse OpenJ9 VM (build master-4f0a66b51, JRE 1.8.0 AIX ppc64-64-Bit Compressed References 20210210_969 (JIT enabled, AOT enabled)
08:02:06 OpenJ9 - 4f0a66b51
08:02:06 OMR - 0ef62b1a9
08:02:06 JCL - c94b45c5 based on jdk8u292-b01)
```
Failing job link: https://ci.adoptopenjdk.net/job/Test_openjdk8_j9_extended.system_ppc64_aix/144/consoleFull
Test output at point of failure:
```
08:01:23 LLT 03:01:20.338 - Completed 60.0%. Number of tests started=84029 (+8911)
08:01:41 LLT stderr Unhandled exception
08:01:41 LLT stderr Type=Segmentation error vmState=0x00000000
08:01:41 LLT stderr J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000033
08:01:41 LLT stderr Handler1=09001000A0D1F348 Handler2=09001000A0CF80D8
08:01:41 LLT stderr R0=FFFFFFFFFFFFFFFF R1=0000010023527BF0 R2=0000000000000000 R3=00000000801FD1D0
08:01:41 LLT stderr R4=0000000000000000 R5=00000000FFB91320 R6=0000000000000010 R7=0000000000000002
08:01:41 LLT stderr R8=0000000000000002 R9=000000008037F068 R10=0000000000000002 R11=0000000030488738
08:01:41 LLT stderr R12=00000000802A90D8 R13=0000010023530800 R14=000000003048C920 R15=0000000030472900
08:01:41 LLT stderr R16=0000010021B3F978 R17=FFFFFFFFFFFFFFFF R18=0000010023527E40 R19=0000000080000007
08:01:41 LLT stderr R20=09000000072467AC R21=09001000A0C736D8 R22=0000010023527ED0 R23=0000000080072C18
08:01:41 LLT stderr R24=00000000FF943DA0 R25=00000000FFB91320 R26=00000000FFA4BBB0 R27=0000000000007FFF
08:01:41 LLT stderr R28=00000000FF943DA0 R29=00000000FFA4BBB0 R30=00000000802A9EE8 R31=00000000801FD1D0
08:01:41 LLT stderr IAR=0000010010A0D954 LR=0000010011583F5C MSR=A00000000200D032 CTR=0000010010A0D900
08:01:41 LLT stderr CR=440000844000C400 FPSCR=BA00210000000000 XER=4000C400BA002100
08:01:41 LLT stderr FPR0 40dfffc000000000 (f: 0.000000, d: 3.276700e+04)
08:01:41 LLT stderr FPR1 40dfffc000000000 (f: 0.000000, d: 3.276700e+04)
08:01:41 LLT stderr FPR2 40dfffc000000000 (f: 0.000000, d: 3.276700e+04)
08:01:41 LLT stderr FPR3 43300000000002d0 (f: 720.000000, d: 4.503600e+15)
08:01:41 LLT stderr FPR4 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR5 4330000000000000 (f: 0.000000, d: 4.503600e+15)
08:01:41 LLT stderr FPR6 4086800000000000 (f: 0.000000, d: 7.200000e+02)
08:01:41 LLT stderr FPR7 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR8 4330000000000138 (f: 312.000000, d: 4.503600e+15)
08:01:41 LLT stderr FPR9 402da0ba1bf945c0 (f: 469321152.000000, d: 1.481392e+01)
08:01:41 LLT stderr FPR10 412e848000000000 (f: 0.000000, d: 1.000000e+06)
08:01:41 LLT stderr FPR11 43300000000f4240 (f: 1000000.000000, d: 4.503600e+15)
08:01:41 LLT stderr FPR12 4530000000000000 (f: 0.000000, d: 1.934281e+25)
08:01:41 LLT stderr FPR13 40dfffc000000000 (f: 0.000000, d: 3.276700e+04)
08:01:41 LLT stderr FPR14 3ff0000000000000 (f: 0.000000, d: 1.000000e+00)
08:01:41 LLT stderr FPR15 4010000000000000 (f: 0.000000, d: 4.000000e+00)
08:01:41 LLT stderr FPR16 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR17 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR18 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR19 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR20 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR21 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR22 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR23 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR24 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR25 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR26 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR27 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR28 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR29 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR30 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr FPR31 0000000000000000 (f: 0.000000, d: 0.000000e+00)
08:01:41 LLT stderr
08:01:41 LLT stderr Compiled_method=java/lang/invoke/FilterReturnHandle.invokeExact_thunkArchetype_X(I)I
08:01:41 LLT stderr Target=2_90_20210210_969 (AIX 7.1)
08:01:41 LLT stderr CPU=ppc64 (16 logical CPUs) (0x200000000 RAM)
08:01:41 LLT stderr ----------- Stack Backtrace -----------
08:01:41 LLT stderr runJavaThread+0x1d4 (0x0900000006FFD638 [libj9vm29.so+0x6e638])
08:01:41 LLT stderr javaProtectedThreadProc+0x11c (0x0900000006F91DA0 [libj9vm29.so+0x2da0])
08:01:41 LLT stderr omrsig_protect+0x488 (0x0900000007278D4C [libj9prt29.so+0x59d4c])
08:01:41 LLT stderr javaThreadProc+0x68 (0x0900000006F91B8C [libj9vm29.so+0x2b8c])
08:01:41 LLT stderr thread_wrapper+0x33c (0x0900000005370760 [libj9thr29.so+0x4760])
08:01:41 LLT stderr _pthread_body+0xf0 (0x0900000000570E14 [libpthread.a+0x3e14])
```
|
test
|
langloadtest crash vmstate compiled method java lang invoke filterreturnhandle invokeexact thunkarchetype x i i langloadtest variation jvm options xx usecompressedoops machine info uname aix powerpc aix cpucores cat proc cpuinfo a file or directory in the path name does not exist sysarch procarch powerpc sysos aix failing build openjdk version internal openjdk runtime environment build internal eclipse vm build master jre aix bit compressed references jit enabled aot enabled omr jcl based on failing job link test output at point of failure llt completed number of tests started llt stderr unhandled exception llt stderr type segmentation error vmstate llt stderr signal number signal number error value signal code llt stderr llt stderr ffffffffffffffff llt stderr llt stderr llt stderr llt stderr ffffffffffffffff llt stderr llt stderr llt stderr llt stderr iar lr msr ctr llt stderr cr fpscr xer llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr f d llt stderr llt stderr compiled method java lang invoke filterreturnhandle invokeexact thunkarchetype x i i llt stderr target aix llt stderr cpu logical cpus ram llt stderr stack backtrace llt stderr runjavathread llt stderr javaprotectedthreadproc llt stderr omrsig protect llt stderr javathreadproc llt stderr thread wrapper llt stderr pthread body
| 1
|
207,391
| 15,813,409,576
|
IssuesEvent
|
2021-04-05 07:37:33
|
IgniteUI/igniteui-angular
|
https://api.github.com/repos/IgniteUI/igniteui-angular
|
closed
|
Theme generates unnecessary image class
|
bug status: awaiting-test themes version: 11.1.x
|
## Description
I have an image with class `image`. When including `igx-theme` with `$default-palette` it sets the max height of the image to 120px.
* igniteui-angular version: 11.1.4
* browser: all
## Steps to reproduce
Open this [Stackblitz ](https://stackblitz.com/edit/image-theme-issue?file=src%2Fstyles.scss) sample. Observe the image
## Result
Image height is limited to 120px because of the `image` class added through the theme.
## Expected result
Image height should not be affected.
## Notes
In root `style.scss` if you comment out this line
```
@include igx-theme($default-palette);
```
the issue is gone.
|
1.0
|
Theme generates unnecessary image class - ## Description
I have an image with class `image`. When including `igx-theme` with `$default-palette` it sets the max height of the image to 120px.
* igniteui-angular version: 11.1.4
* browser: all
## Steps to reproduce
Open this [Stackblitz](https://stackblitz.com/edit/image-theme-issue?file=src%2Fstyles.scss) sample. Observe the image.
## Result
Image height is limited to 120px because of the `image` class added through the theme.
## Expected result
Image height should not be affected.
## Notes
In root `style.scss` if you comment out this line
```
@include igx-theme($default-palette);
```
the issue is gone.
|
test
|
theme generates unnecessary image class description i have an image with class image when including igx theme with default palette it sets the max height of the image to igniteui angular version browser all steps to reproduce open this sample observe the image result image height is limited to because of the added trough theme image class expected result image height should not affected notes in root style scss if you comment out this line include igx theme default palette the issue is gone
| 1
|
44,419
| 11,440,520,171
|
IssuesEvent
|
2020-02-05 09:49:30
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Integration Test: flaky debug breakpoints
|
broken-build integration-test
|
From https://monacotools.visualstudio.com/DefaultCollection/Monaco/_build/results?buildId=66506&view=logs&j=a5e52b91-c83f-5429-4a68-c246fc63a4f7&t=a83823bd-5dbd-5313-d0d6-a425f162cca0
```
1) Debug breakpoints:
Error: Timeout of 60000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves.
at listOnTimeout (internal/timers.js:531:17)
at processTimers (internal/timers.js:475:7)
```
|
1.0
|
Integration Test: flaky debug breakpoints - From https://monacotools.visualstudio.com/DefaultCollection/Monaco/_build/results?buildId=66506&view=logs&j=a5e52b91-c83f-5429-4a68-c246fc63a4f7&t=a83823bd-5dbd-5313-d0d6-a425f162cca0
```
1) Debug breakpoints:
Error: Timeout of 60000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves.
at listOnTimeout (internal/timers.js:531:17)
at processTimers (internal/timers.js:475:7)
```
|
non_test
|
integration test flaky debug breakpoints from debug breakpoints error timeout of exceeded for async tests and hooks ensure done is called if returning a promise ensure it resolves at listontimeout internal timers js at processtimers internal timers js
| 0
|
156,774
| 12,337,659,555
|
IssuesEvent
|
2020-05-14 15:18:10
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: acceptance/gossip/peerings failed
|
C-test-failure O-roachtest O-robot branch-provisional_202005131648_v19.2.7 release-blocker
|
[(roachtest).acceptance/gossip/peerings failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1941009&tab=buildLog) on [provisional_202005131648_v19.2.7@2e19ff0576ff21e243f00f2e2acdaeea57aee6f3](https://github.com/cockroachdb/cockroach/commits/2e19ff0576ff21e243f00f2e2acdaeea57aee6f3):
```
The test failed on branch=provisional_202005131648_v19.2.7, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/gossip/peerings/run_1
gossip.go:259,acceptance.go:94,test_runner.go:753: failed to get gossip status from node 1: status: 403 Forbidden, content-type: application/json, body: {
(1) attached stack trace
| main.(*gossipUtil).check.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/gossip.go:158
| [...repeated from below...]
Wraps: (2) 2 safe details enclosed
Wraps: (3) failed to get gossip status from node 1
Wraps: (4) attached stack trace
| github.com/cockroachdb/cockroach/pkg/util/httputil.doJSONRequest
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/util/httputil/http.go:116
| github.com/cockroachdb/cockroach/pkg/util/httputil.GetJSON
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/util/httputil/http.go:55
| main.(*gossipUtil).check.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/gossip.go:157
| github.com/cockroachdb/cockroach/pkg/util/retry.ForDuration
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/util/retry/retry.go:188
| main.(*gossipUtil).check
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/gossip.go:153
| main.runGossipPeerings
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/gossip.go:258
| main.registerAcceptance.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/acceptance.go:94
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:753
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1357
Wraps: (5) 5 safe details enclosed
Wraps: (6) status: 403 Forbidden, content-type: application/json, body: {
| "error": "not allowed (due to the 'server.remote_debugging.mode' setting)",
| "message": "not allowed (due to the 'server.remote_debugging.mode' setting)",
| "code": 7,
| "details": [
| ]
| }, error: <nil>
Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *withstack.withStack (5) *safedetails.withSafeDetails (6) *errors.errorString
```
<details><summary>More</summary><p>
Artifacts: [/acceptance/gossip/peerings](https://teamcity.cockroachdb.com/viewLog.html?buildId=1941009&tab=artifacts#/acceptance/gossip/peerings)
Related:
- #48005 roachtest: acceptance/gossip/peerings failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aacceptance%2Fgossip%2Fpeerings.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: acceptance/gossip/peerings failed - [(roachtest).acceptance/gossip/peerings failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1941009&tab=buildLog) on [provisional_202005131648_v19.2.7@2e19ff0576ff21e243f00f2e2acdaeea57aee6f3](https://github.com/cockroachdb/cockroach/commits/2e19ff0576ff21e243f00f2e2acdaeea57aee6f3):
```
The test failed on branch=provisional_202005131648_v19.2.7, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/gossip/peerings/run_1
gossip.go:259,acceptance.go:94,test_runner.go:753: failed to get gossip status from node 1: status: 403 Forbidden, content-type: application/json, body: {
(1) attached stack trace
| main.(*gossipUtil).check.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/gossip.go:158
| [...repeated from below...]
Wraps: (2) 2 safe details enclosed
Wraps: (3) failed to get gossip status from node 1
Wraps: (4) attached stack trace
| github.com/cockroachdb/cockroach/pkg/util/httputil.doJSONRequest
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/util/httputil/http.go:116
| github.com/cockroachdb/cockroach/pkg/util/httputil.GetJSON
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/util/httputil/http.go:55
| main.(*gossipUtil).check.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/gossip.go:157
| github.com/cockroachdb/cockroach/pkg/util/retry.ForDuration
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/util/retry/retry.go:188
| main.(*gossipUtil).check
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/gossip.go:153
| main.runGossipPeerings
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/gossip.go:258
| main.registerAcceptance.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/acceptance.go:94
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:753
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1357
Wraps: (5) 5 safe details enclosed
Wraps: (6) status: 403 Forbidden, content-type: application/json, body: {
| "error": "not allowed (due to the 'server.remote_debugging.mode' setting)",
| "message": "not allowed (due to the 'server.remote_debugging.mode' setting)",
| "code": 7,
| "details": [
| ]
| }, error: <nil>
Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *withstack.withStack (5) *safedetails.withSafeDetails (6) *errors.errorString
```
<details><summary>More</summary><p>
Artifacts: [/acceptance/gossip/peerings](https://teamcity.cockroachdb.com/viewLog.html?buildId=1941009&tab=artifacts#/acceptance/gossip/peerings)
Related:
- #48005 roachtest: acceptance/gossip/peerings failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aacceptance%2Fgossip%2Fpeerings.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
test
|
roachtest acceptance gossip peerings failed on the test failed on branch provisional cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts acceptance gossip peerings run gossip go acceptance go test runner go failed to get gossip status from node status forbidden content type application json body attached stack trace main gossiputil check home agent work go src github com cockroachdb cockroach pkg cmd roachtest gossip go wraps safe details enclosed wraps failed to get gossip status from node wraps attached stack trace github com cockroachdb cockroach pkg util httputil dojsonrequest home agent work go src github com cockroachdb cockroach pkg util httputil http go github com cockroachdb cockroach pkg util httputil getjson home agent work go src github com cockroachdb cockroach pkg util httputil http go main gossiputil check home agent work go src github com cockroachdb cockroach pkg cmd roachtest gossip go github com cockroachdb cockroach pkg util retry forduration home agent work go src github com cockroachdb cockroach pkg util retry retry go main gossiputil check home agent work go src github com cockroachdb cockroach pkg cmd roachtest gossip go main rungossippeerings home agent work go src github com cockroachdb cockroach pkg cmd roachtest gossip go main registeracceptance home agent work go src github com cockroachdb cockroach pkg cmd roachtest acceptance go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go runtime goexit usr local go src runtime asm s wraps safe details enclosed wraps status forbidden content type application json body error not allowed due to the server remote debugging mode setting message not allowed due to the server remote debugging mode setting code details error error types withstack withstack safedetails withsafedetails errutil withmessage withstack withstack safedetails withsafedetails errors errorstring more artifacts 
related roachtest acceptance gossip peerings failed powered by
| 1
|
71,034
| 7,227,855,003
|
IssuesEvent
|
2018-02-11 01:39:45
|
alisd23/ngrx-form
|
https://api.github.com/repos/alisd23/ngrx-form
|
closed
|
Write unit tests
|
testing
|
Add tests for:
- [x] Reducers/actions
- [x] Form directive
- [x] Field directive (checkbox/radio/default)
|
1.0
|
Write unit tests - Add tests for:
- [x] Reducers/actions
- [x] Form directive
- [x] Field directive (checkbox/radio/default)
|
test
|
write unit tests add tests for reducers actions form directive field directive checkbox radio default
| 1
|
44,662
| 12,311,932,932
|
IssuesEvent
|
2020-05-12 13:13:07
|
STEllAR-GROUP/phylanx
|
https://api.github.com/repos/STEllAR-GROUP/phylanx
|
closed
|
`astype` does not work with `define`
|
category: PhySL category: primitives type: defect
|
Although `astype([1, 0, 1], "bool")` works properly, the following code (in PhySL) fails with an assertion failure:
```c++
define(a, [1, 0, 1])
astype(a, "bool")
```
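For comparison (and to pin down the expected semantics), here is the analogous conversion in NumPy — a sketch under the assumption that the `astype` primitive is meant to mirror NumPy's behavior. Binding the array to a name first makes no difference there:

```python
import numpy as np

# Direct conversion, analogous to astype([1, 0, 1], "bool")
direct = np.array([1, 0, 1]).astype(bool)

# Conversion through a bound name, analogous to
# define(a, [1, 0, 1]) followed by astype(a, "bool")
a = np.array([1, 0, 1])
via_name = a.astype(bool)

print(direct)    # [ True False  True]
print(via_name)  # [ True False  True]
```

Both paths yield the same result, which is what the PhySL snippet above would be expected to do as well.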
```console
{version}: V1.5.0-trunk (AGAS: V3.0), Git: 6df8ceca6a
{boost}: V1.72.0
{build-type}: debug
{date}: Apr 27 2020 22:02:32
{platform}: Win32
{compiler}: Microsoft Visual C++ version 14.1
{stdlib}: Dinkumware standard library version 650
{env}: 51 entries:
ALLUSERSPROFILE=C:\ProgramData
APPDATA=C:\Users\Bita\AppData\Roaming
COMPUTERNAME=UNICK
ComSpec=C:\Windows\system32\cmd.exe
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
CommonProgramFiles=C:\Program Files\Common Files
CommonProgramW6432=C:\Program Files\Common Files
DriverData=C:\Windows\System32\Drivers\DriverData
FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer
FPS_BROWSER_USER_PROFILE_STRING=Default
HOMEDRIVE=C:
HOMEPATH=\Users\Bita
LOCALAPPDATA=C:\Users\Bita\AppData\Local
LOGONSERVER=\\UNICK
MSBuildLoadMicrosoftTargetsReadOnly=true
MSMPI_BIN=C:\Program Files\Microsoft MPI\Bin\
NUMBER_OF_PROCESSORS=48
OS=Windows_NT
OneDrive=C:\Users\Bita\OneDrive
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
PROCESSOR_ARCHITECTURE=AMD64
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 85 Stepping 4, GenuineIntel
PROCESSOR_LEVEL=6
PROCESSOR_REVISION=5504
PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules
PUBLIC=C:\Users\Public
PYTHONPATH=C:/Repos/phylanx/cmake-build-debug/python/build/lib.win-amd64-3.6
Path=C:\ProgramData\DockerDesktop\version-bin;C:\Program Files\Docker\Docker\Resources\bin;C:\Program Files\Microsoft MPI\Bin\;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\iCLS\;C:\Program Files\Intel\Intel(R) Management Engine Components\iCLS\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Git\cmd;C:\Repos\vcpkg\downloads\tools\cmake-3.14.0-windows\cmake-3.14.0-win32-x86\bin;C:\local\SysinternalsSuite;C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64;C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Scripts;C:\Program Files\LLVM\bin;C:\Program Files\TortoiseGit\bin;C:\Users\Bita\AppData\Local\Microsoft\WindowsApps;
PkgDefApplicationConfigFile=C:\Users\Bita\AppData\Local\Microsoft\VisualStudio\15.0_e0d0c2e3\devenv.exe.config
ProgramData=C:\ProgramData
ProgramFiles(x86)=C:\Program Files (x86)
ProgramFiles=C:\Program Files
ProgramW6432=C:\Program Files
SESSIONNAME=Console
SystemDrive=C:
SystemRoot=C:\Windows
TEMP=C:\Users\Bita\AppData\Local\Temp
TMP=C:\Users\Bita\AppData\Local\Temp
USERDOMAIN=UNICK
USERDOMAIN_ROAMINGPROFILE=UNICK
USERNAME=Bita
USERPROFILE=C:\Users\Bita
VSAPPIDDIR=C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\
VSAPPIDNAME=devenv.exe
VSLANG=1033
VSSKUEDITION=Community
VisualStudioDir=C:\Users\Bita\Documents\Visual Studio 2017
VisualStudioEdition=Microsoft Visual Studio Community 2017
VisualStudioVersion=15.0
_PTVS_PID=14436
windir=C:\Windows
{stack-trace}: 20 frames:
00007FFA3CEFBA17: hpx::util::stack_trace::trace +0x37
00007FFA3C3D9599: hpx::util::backtrace::backtrace +0x99
00007FFA3C3B9D38: hpx::util::trace_on_new_stack +0x88
00007FFA3C3B7F4F: hpx::detail::custom_exception_info +0x7f
00007FFA3C42C789: std::_Invoker_functor::_Call<hpx::exception_info (__cdecl*&)(std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &),std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &> +0xb9
00007FFA3C44EE78: std::invoke<hpx::exception_info (__cdecl*&)(std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &),std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &> +0xb8
00007FFA3C42C678: std::_Invoker_ret<hpx::exception_info,0>::_Call<hpx::exception_info (__cdecl*&)(std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &),std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &> +0xb8
00007FFA3C47C797: std::_Func_impl_no_alloc<hpx::exception_info (__cdecl*)(std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &),hpx::exception_info,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &>::_Do_call +0xa7
00007FFA3CF094FB: std::_Func_class<hpx::exception_info,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &>::operator() +0xdb
00007FFA3CF025F6: hpx::detail::construct_custom_exception<hpx::exception> +0xf6
00007FFA3CF06ED4: hpx::detail::get_exception<hpx::exception> +0xa4
00007FFA3CD4C779: hpx::detail::assertion_handler +0x1c9
00007FFA3CE3FBD4: hpx::assertion::detail::handle_assert +0xf4
00007FFA3E7CE71B: phylanx::execution_tree::eval_context::get_var +0xab
00007FFA3E7CFC7B: phylanx::execution_tree::primitives::access_variable::eval +0x7b
00007FFA3F3E8719: phylanx::execution_tree::primitives::primitive_component_base::do_eval +0x159
00007FFA3F3B59FD: phylanx::execution_tree::primitives::primitive_component::eval +0x1ed
00007FFA3E94D150: hpx::actions::detail::component_invoke<phylanx::execution_tree::primitives::primitive_component const ,hpx::lcos::future<phylanx::execution_tree::primitive_argument_type>,hpx::lcos::future<phylanx::execution_tree::primitive_argument_type> __cdecl(std::vector<phylanx::execution_tree::primitive_argument_type,std::allocator<phylanx::execution_tree::primitive_argument_type> > const &,phylanx::execution_tree::eval_context)const ,std::vector<phylanx::execution_tree::primitive_argument_type,std::allocator<phylanx::execution_tree::primitive_argument_type> > const &,phylanx::execution_tree::eval_context> +0xf0
00007FFA3E98F5BD: hpx::actions::action<hpx::lcos::future<phylanx::execution_tree::primitive_argument_type> (__cdecl phylanx::execution_tree::primitives::primitive_component::*)(std::vector<phylanx::execution_tree::primitive_argument_type,std::allocator<phylanx::execution_tree::primitive_argument_type> > const &,phylanx::execution_tree::eval_context)const ,&phylanx::execution_tree::primitives::primitive_component::eval,phylanx::execution_tree::primitives::primitive_component::eval_action>::invoke<std::vector<phylanx::execution_tree::primitive_argument_type,std::allocator<phylanx::execution_tree::primitive_argument_type> > const &,phylanx::execution_tree::eval_context> +0xbd
00007FFA3E9A27FB: hpx::actions::basic_action<phylanx::execution_tree::primitives::primitive_component const ,hpx::lcos::future<phylanx::execution_tree::primitive_argument_type> __cdecl(std::vector<phylanx::execution_tree::primitive_argument_type,std::allocator<phylanx::execution_tree::primitive_argument_type> > const &,phylanx::execution_tree::eval_context),phylanx::execution_tree::primitives::primitive_component::eval_action>::invoker_impl<std::vector<phylanx::execution_tree::primitive_argument_type,std::allocator<phylanx::execution_tree::primitive_argument_type> > const &,phylanx::execution_tree::eval_context> +0x7b
{locality-id}: 0
{hostname}: [ ]
{process-id}: 7220
{os-thread}: 0, worker-thread#0
{thread-id}: 000000004ba7b350
{thread-description}: hpx_main
{state}: state_running
{auxinfo}:
{file}: c:\repos\phylanx\phylanx\execution_tree\primitives\primitive_argument_type.hpp
{line}: 172
{function}: struct phylanx::execution_tree::primitive_argument_type *__cdecl phylanx::execution_tree::eval_context::get_var(const struct phylanx::util::hashed_string &) noexcept
{what}: Assertion 'bool(variables_)' failed: HPX(assertion_failure)
```
|
1.0
|
`astype` does not work with `define` - Although `astype([1, 0, 1], "bool")` works properly, the following code (in PhySL) fails with an assertion failure:
```c++
define(a, [1, 0, 1])
astype(a, "bool")
```
```console
{version}: V1.5.0-trunk (AGAS: V3.0), Git: 6df8ceca6a
{boost}: V1.72.0
{build-type}: debug
{date}: Apr 27 2020 22:02:32
{platform}: Win32
{compiler}: Microsoft Visual C++ version 14.1
{stdlib}: Dinkumware standard library version 650
{env}: 51 entries:
ALLUSERSPROFILE=C:\ProgramData
APPDATA=C:\Users\Bita\AppData\Roaming
COMPUTERNAME=UNICK
ComSpec=C:\Windows\system32\cmd.exe
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
CommonProgramFiles=C:\Program Files\Common Files
CommonProgramW6432=C:\Program Files\Common Files
DriverData=C:\Windows\System32\Drivers\DriverData
FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer
FPS_BROWSER_USER_PROFILE_STRING=Default
HOMEDRIVE=C:
HOMEPATH=\Users\Bita
LOCALAPPDATA=C:\Users\Bita\AppData\Local
LOGONSERVER=\\UNICK
MSBuildLoadMicrosoftTargetsReadOnly=true
MSMPI_BIN=C:\Program Files\Microsoft MPI\Bin\
NUMBER_OF_PROCESSORS=48
OS=Windows_NT
OneDrive=C:\Users\Bita\OneDrive
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
PROCESSOR_ARCHITECTURE=AMD64
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 85 Stepping 4, GenuineIntel
PROCESSOR_LEVEL=6
PROCESSOR_REVISION=5504
PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules
PUBLIC=C:\Users\Public
PYTHONPATH=C:/Repos/phylanx/cmake-build-debug/python/build/lib.win-amd64-3.6
Path=C:\ProgramData\DockerDesktop\version-bin;C:\Program Files\Docker\Docker\Resources\bin;C:\Program Files\Microsoft MPI\Bin\;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\iCLS\;C:\Program Files\Intel\Intel(R) Management Engine Components\iCLS\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Git\cmd;C:\Repos\vcpkg\downloads\tools\cmake-3.14.0-windows\cmake-3.14.0-win32-x86\bin;C:\local\SysinternalsSuite;C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64;C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Scripts;C:\Program Files\LLVM\bin;C:\Program Files\TortoiseGit\bin;C:\Users\Bita\AppData\Local\Microsoft\WindowsApps;
PkgDefApplicationConfigFile=C:\Users\Bita\AppData\Local\Microsoft\VisualStudio\15.0_e0d0c2e3\devenv.exe.config
ProgramData=C:\ProgramData
ProgramFiles(x86)=C:\Program Files (x86)
ProgramFiles=C:\Program Files
ProgramW6432=C:\Program Files
SESSIONNAME=Console
SystemDrive=C:
SystemRoot=C:\Windows
TEMP=C:\Users\Bita\AppData\Local\Temp
TMP=C:\Users\Bita\AppData\Local\Temp
USERDOMAIN=UNICK
USERDOMAIN_ROAMINGPROFILE=UNICK
USERNAME=Bita
USERPROFILE=C:\Users\Bita
VSAPPIDDIR=C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\
VSAPPIDNAME=devenv.exe
VSLANG=1033
VSSKUEDITION=Community
VisualStudioDir=C:\Users\Bita\Documents\Visual Studio 2017
VisualStudioEdition=Microsoft Visual Studio Community 2017
VisualStudioVersion=15.0
_PTVS_PID=14436
windir=C:\Windows
{stack-trace}: 20 frames:
00007FFA3CEFBA17: hpx::util::stack_trace::trace +0x37
00007FFA3C3D9599: hpx::util::backtrace::backtrace +0x99
00007FFA3C3B9D38: hpx::util::trace_on_new_stack +0x88
00007FFA3C3B7F4F: hpx::detail::custom_exception_info +0x7f
00007FFA3C42C789: std::_Invoker_functor::_Call<hpx::exception_info (__cdecl*&)(std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &),std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &> +0xb9
00007FFA3C44EE78: std::invoke<hpx::exception_info (__cdecl*&)(std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &),std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &> +0xb8
00007FFA3C42C678: std::_Invoker_ret<hpx::exception_info,0>::_Call<hpx::exception_info (__cdecl*&)(std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &),std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &> +0xb8
00007FFA3C47C797: std::_Func_impl_no_alloc<hpx::exception_info (__cdecl*)(std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &),hpx::exception_info,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &>::_Do_call +0xa7
00007FFA3CF094FB: std::_Func_class<hpx::exception_info,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &,long,std::basic_string<char,std::char_traits<char>,std::allocator<char> > const &>::operator() +0xdb
00007FFA3CF025F6: hpx::detail::construct_custom_exception<hpx::exception> +0xf6
00007FFA3CF06ED4: hpx::detail::get_exception<hpx::exception> +0xa4
00007FFA3CD4C779: hpx::detail::assertion_handler +0x1c9
00007FFA3CE3FBD4: hpx::assertion::detail::handle_assert +0xf4
00007FFA3E7CE71B: phylanx::execution_tree::eval_context::get_var +0xab
00007FFA3E7CFC7B: phylanx::execution_tree::primitives::access_variable::eval +0x7b
00007FFA3F3E8719: phylanx::execution_tree::primitives::primitive_component_base::do_eval +0x159
00007FFA3F3B59FD: phylanx::execution_tree::primitives::primitive_component::eval +0x1ed
00007FFA3E94D150: hpx::actions::detail::component_invoke<phylanx::execution_tree::primitives::primitive_component const ,hpx::lcos::future<phylanx::execution_tree::primitive_argument_type>,hpx::lcos::future<phylanx::execution_tree::primitive_argument_type> __cdecl(std::vector<phylanx::execution_tree::primitive_argument_type,std::allocator<phylanx::execution_tree::primitive_argument_type> > const &,phylanx::execution_tree::eval_context)const ,std::vector<phylanx::execution_tree::primitive_argument_type,std::allocator<phylanx::execution_tree::primitive_argument_type> > const &,phylanx::execution_tree::eval_context> +0xf0
00007FFA3E98F5BD: hpx::actions::action<hpx::lcos::future<phylanx::execution_tree::primitive_argument_type> (__cdecl phylanx::execution_tree::primitives::primitive_component::*)(std::vector<phylanx::execution_tree::primitive_argument_type,std::allocator<phylanx::execution_tree::primitive_argument_type> > const &,phylanx::execution_tree::eval_context)const ,&phylanx::execution_tree::primitives::primitive_component::eval,phylanx::execution_tree::primitives::primitive_component::eval_action>::invoke<std::vector<phylanx::execution_tree::primitive_argument_type,std::allocator<phylanx::execution_tree::primitive_argument_type> > const &,phylanx::execution_tree::eval_context> +0xbd
00007FFA3E9A27FB: hpx::actions::basic_action<phylanx::execution_tree::primitives::primitive_component const ,hpx::lcos::future<phylanx::execution_tree::primitive_argument_type> __cdecl(std::vector<phylanx::execution_tree::primitive_argument_type,std::allocator<phylanx::execution_tree::primitive_argument_type> > const &,phylanx::execution_tree::eval_context),phylanx::execution_tree::primitives::primitive_component::eval_action>::invoker_impl<std::vector<phylanx::execution_tree::primitive_argument_type,std::allocator<phylanx::execution_tree::primitive_argument_type> > const &,phylanx::execution_tree::eval_context> +0x7b
{locality-id}: 0
{hostname}: [ ]
{process-id}: 7220
{os-thread}: 0, worker-thread#0
{thread-id}: 000000004ba7b350
{thread-description}: hpx_main
{state}: state_running
{auxinfo}:
{file}: c:\repos\phylanx\phylanx\execution_tree\primitives\primitive_argument_type.hpp
{line}: 172
{function}: struct phylanx::execution_tree::primitive_argument_type *__cdecl phylanx::execution_tree::eval_context::get_var(const struct phylanx::util::hashed_string &) noexcept
{what}: Assertion 'bool(variables_)' failed: HPX(assertion_failure)
```
|
non_test
|
astype does not work with define although astype bool works properly the following code in physl fails in an assertion c define a astype a bool console version trunk agas git boost build type debug date apr platform compiler microsoft visual c version stdlib dinkumware standard library version env entries allusersprofile c programdata appdata c users bita appdata roaming computername unick comspec c windows cmd exe commonprogramfiles c program files common files commonprogramfiles c program files common files c program files common files driverdata c windows drivers driverdata fps browser app profile string internet explorer fps browser user profile string default homedrive c homepath users bita localappdata c users bita appdata local logonserver unick msbuildloadmicrosofttargetsreadonly true msmpi bin c program files microsoft mpi bin number of processors os windows nt onedrive c users bita onedrive pathext com exe bat cmd vbs vbe js jse wsf wsh msc processor architecture processor identifier family model stepping genuineintel processor level processor revision psmodulepath c program files windowspowershell modules c windows windowspowershell modules public c users public pythonpath c repos phylanx cmake build debug python build lib win path c programdata dockerdesktop version bin c program files docker docker resources bin c program files microsoft mpi bin c program files intel intel r management engine components icls c program files intel intel r management engine components icls c windows c windows c windows wbem c windows windowspowershell c windows openssh c program files intel intel r management engine components dal c program files intel intel r management engine components dal c program files git cmd c repos vcpkg downloads tools cmake windows cmake bin c local sysinternalssuite c program files microsoft visual studio shared c program files microsoft visual studio shared scripts c program files llvm bin c program files tortoisegit bin c users bita appdata 
local microsoft windowsapps pkgdefapplicationconfigfile c users bita appdata local microsoft visualstudio devenv exe config programdata c programdata programfiles c program files programfiles c program files c program files sessionname console systemdrive c systemroot c windows temp c users bita appdata local temp tmp c users bita appdata local temp userdomain unick userdomain roamingprofile unick username bita userprofile c users bita vsappiddir c program files microsoft visual studio community ide vsappidname devenv exe vslang vsskuedition community visualstudiodir c users bita documents visual studio visualstudioedition microsoft visual studio community visualstudioversion ptvs pid windir c windows stack trace frames hpx util stack trace trace hpx util backtrace backtrace hpx util trace on new stack hpx detail custom exception info std invoker functor call std allocator const std basic string std allocator const long std basic string std allocator const std basic string std allocator const std basic string std allocator const long std basic string std allocator const std invoke std allocator const std basic string std allocator const long std basic string std allocator const std basic string std allocator const std basic string std allocator const long std basic string std allocator const std invoker ret call std allocator const std basic string std allocator const long std basic string std allocator const std basic string std allocator const std basic string std allocator const long std basic string std allocator const std func impl no alloc std allocator const std basic string std allocator const long std basic string std allocator const hpx exception info std basic string std allocator const std basic string std allocator const long std basic string std allocator const do call std func class std allocator const std basic string std allocator const long std basic string std allocator const operator hpx detail construct custom exception hpx detail get exception 
hpx detail assertion handler hpx assertion detail handle assert phylanx execution tree eval context get var phylanx execution tree primitives access variable eval phylanx execution tree primitives primitive component base do eval phylanx execution tree primitives primitive component eval hpx actions detail component invoke hpx lcos future cdecl std vector const phylanx execution tree eval context const std vector const phylanx execution tree eval context hpx actions action cdecl phylanx execution tree primitives primitive component std vector const phylanx execution tree eval context const phylanx execution tree primitives primitive component eval phylanx execution tree primitives primitive component eval action invoke const phylanx execution tree eval context hpx actions basic action cdecl std vector const phylanx execution tree eval context phylanx execution tree primitives primitive component eval action invoker impl const phylanx execution tree eval context locality id hostname process id os thread worker thread thread id thread description hpx main state state running auxinfo file c repos phylanx phylanx execution tree primitives primitive argument type hpp line function struct phylanx execution tree primitive argument type cdecl phylanx execution tree eval context get var const struct phylanx util hashed string noexcept what assertion bool variables failed hpx assertion failure
| 0
|
779,738
| 27,364,552,132
|
IssuesEvent
|
2023-02-27 18:07:59
|
Lightning-AI/lightning
|
https://api.github.com/repos/Lightning-AI/lightning
|
closed
|
`CUDAAccelerator` can not run on your system since the accelerator is not available.
|
bug priority: 0 3rd party
|
### Bug description
So, in my environment `torch.cuda.is_available()` is `True` but `torch.cuda.device_count()` is `0`. This issue is probably linked with a [pytorch issue](https://github.com/pytorch/pytorch/issues/90543). Since I was planning on using lightning for a new project, I am unable to use GPU using the `pl.Trainer(accelerator='cuda', devices=1)`.
Not sure if this is a bug on your end. Any suggestion on how to go about this would be great.
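Until the upstream mismatch is resolved, one defensive pattern is to gate on both CUDA probes before constructing the Trainer. This is only a sketch — `pick_accelerator` is a hypothetical helper, not part of the Lightning API:

```python
def pick_accelerator(cuda_available: bool, device_count: int) -> str:
    """Fall back to CPU when the two CUDA probes disagree.

    torch.cuda.is_available() can report True while
    torch.cuda.device_count() reports 0 (see the linked pytorch issue);
    trusting only the first check makes Trainer(accelerator='cuda') fail.
    """
    return "cuda" if cuda_available and device_count > 0 else "cpu"


# Intended usage (requires torch and lightning to be installed):
# import torch
# import pytorch_lightning as pl
# acc = pick_accelerator(torch.cuda.is_available(), torch.cuda.device_count())
# trainer = pl.Trainer(accelerator=acc, devices=1 if acc == "cuda" else "auto")
```

This degrades to CPU instead of raising `MisconfigurationException` at Trainer construction, at the cost of silently losing GPU acceleration until the environment is fixed.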
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 1.10):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_
cc @tchaton
|
1.0
|
`CUDAAccelerator` can not run on your system since the accelerator is not available. - ### Bug description
So, in my environment `torch.cuda.is_available()` is `True` but `torch.cuda.device_count()` is `0`. This issue is probably linked with a [pytorch issue](https://github.com/pytorch/pytorch/issues/90543). Since I was planning on using lightning for a new project, I am unable to use GPU using the `pl.Trainer(accelerator='cuda', devices=1)`.
Not sure if this is a bug on your end. Any suggestion on how to go about this would be great.
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 1.10):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_
cc @tchaton
|
non_test
|
cudaaccelerator can not run on your system since the accelerator is not available bug description so in my environment torch cuda is available is true but torch cuda device count is this issue is probably linked with a since i was planning on using lightning for a new project i am unable to use gpu using the pl trainer accelerator cuda devices not sure if this is a bug on your end any suggestion to go about this would be great how to reproduce the bug no response error messages and logs error messages and logs here please environment current environment lightning component e g trainer lightningmodule lightningapp lightningwork lightningflow pytorch lightning version e g lightning app version e g pytorch version e g python version e g os e g linux cuda cudnn version gpu models and configuration how you installed lightning conda pip source running environment of lightningapp e g local cloud more info no response cc tchaton
| 0
|
22,246
| 3,946,618,257
|
IssuesEvent
|
2016-04-28 05:52:53
|
skatejs/skatejs
|
https://api.github.com/repos/skatejs/skatejs
|
closed
|
Sauce seems to be serving Safari 7.0 when we say we want 7.1
|
testing
|
### MAR 18, 2016 | 11:21PM PDT
Dylan Lacey replied:
G’day Joscha;
That’s very odd; I couldn’t find a release ticket for Safari 7 and it would be extremely unusual for us to go back a version without some major bugs we’re working around.
You say “The last two days”; Have you been being served 7.0.6 for every test over those days? How many test have you attempted? I’m wondering if somehow there’s a stale image lurking somewhere serving janky old Safari.
Cheers,
Dylan.
### MAR 18, 2016 | 09:52PM PDT
Original message
skatejs wrote:
Up until recently, tests asking for Safari 7 were run with:
Safari 7.1.7 (Mac OS X 10.9.5)
See: https://travis-ci.org/skatejs/skatejs/builds/115761911
Since two days ago tests seemt o be running with:
Safari 7.0.6 (Mac OS X 10.9.5)
See: https://travis-ci.org/skatejs/skatejs/builds/116769605
which now makes our builds fail. I tried enforcing Safari 7.1.7 by passing "7.1" as version string, but it wasn't recognized.
Cheers,
Joscha
### MAR 24, 2016 | 05:13PM PDT
skatejs replied:
Yes, correct. Before that we seemed to always get the 7.1 version, then suddenly Saucelabs started providing 7.0.6 (see the Travis builds of the master branch, they are all public) and our builds broke.
### MAR 24, 2016 | 09:47AM PDT
Matt Dunn replied:
Hi,
We updated our images recently with our new branding. When we were building a new 10.9 image, it applied the latest OS updates, which included an upgrade from Safari 7 to Safari 9. We fixed that, but I imagine that in fixing it, the Safari version probably changed from 7.1.7 to 7.0.6.
In what way is this breaking your build? Are you able to update your build to work with 7.0.6 or do you particularly need 7.1.7?
Thanks,
Matt
### MAR 28, 2016 | 03:30PM PDT
skatejs replied:
Hi Matt,
yes, this is what is breaking our build, because there are some fixes between 7.0.6 and 7.1.7 that allow you to use Object.defineProperty on event objects. In 7.0.6 this is not possible. We couldn't find a workaround and as this is our production build that suddenly broke, we had to disable Safari 7, which is quite unfortunate. Given that 7.0.6 is from 2014 and probably not a version that is very widely spread, as security updates are available all throughout 2015, can't you add the security fixes to that image?
### MAR 29, 2016 | 03:39AM PDT
Matt Dunn replied:
Thanks for the info. I’ve requested that we rectify this in our Safari 7 image. I’ll put this case on hold for now and get back to you once I have an update.
Regards,
Matt
|
1.0
|
Sauce seems to be serving Safari 7.0 when we say we want 7.1 - ### MAR 18, 2016 | 11:21PM PDT
Dylan Lacey replied:
G’day Joscha;
That’s very odd; I couldn’t find a release ticket for Safari 7 and it would be extremely unusual for us to go back a version without some major bugs we’re working around.
You say “The last two days”; Have you been being served 7.0.6 for every test over those days? How many test have you attempted? I’m wondering if somehow there’s a stale image lurking somewhere serving janky old Safari.
Cheers,
Dylan.
### MAR 18, 2016 | 09:52PM PDT
Original message
skatejs wrote:
Up until recently, tests asking for Safari 7 were run with:
Safari 7.1.7 (Mac OS X 10.9.5)
See: https://travis-ci.org/skatejs/skatejs/builds/115761911
Since two days ago tests seem to be running with:
Safari 7.0.6 (Mac OS X 10.9.5)
See: https://travis-ci.org/skatejs/skatejs/builds/116769605
which now makes our builds fail. I tried enforcing Safari 7.1.7 by passing "7.1" as version string, but it wasn't recognized.
Cheers,
Joscha
### MAR 24, 2016 | 05:13PM PDT
skatejs replied:
Yes, correct. Before that we seemed to always get the 7.1 version, then suddenly Saucelabs started providing 7.0.6 (see the Travis builds of the master branch, they are all public) and our builds broke.
### MAR 24, 2016 | 09:47AM PDT
Matt Dunn replied:
Hi,
We updated our images recently with our new branding. When we were building a new 10.9 image, it applied the latest OS updates, which included an upgrade from Safari 7 to Safari 9. We fixed that, but I imagine that in fixing it, the Safari version probably changed from 7.1.7 to 7.0.6.
I what way is this breaking your build? Are you able to update your build to work with 7.0.6 or do you particularly need 7.1.7?
Thanks,
Matt
### MAR 28, 2016 | 03:30PM PDT
skatejs replied:
Hi Matt,
yes, this is what is breaking our build, because there are some fixes between 7.0.6 and 7.1.7 that allow you to use Object.defineProperty on event objects. In 7.0.6 this is not possible. We couldn't find a workaround and as this is our production build that suddenly broke, we had to disable Safari 7, which is quite unfortunate. Given that 7.0.6 is from 2014 and probably not a version that is very widely spread, as security updates are available all throughout 2015, can't you add the security fixes to that image?
### MAR 29, 2016 | 03:39AM PDT
Matt Dunn replied:
Thanks for the info. I’ve requested that we rectify this in our Safari 7 image. I’ll put this case on hold for now and get back to you once I have an update.
Regards,
Matt
|
test
|
sauce seems to be serving safari when we say we want mar pdt dylan lacey replied g’day joscha that’s very odd i couldn’t find a release ticket for safari and it would be extremely unusual for us to go back a version without some major bugs we’re working around you say “the last two days” have you been being served for every test over those days how many test have you attempted i’m wondering if somehow there’s a stale image lurking somewhere serving janky old safari cheers dylan mar pdt original message skatejs wrote up until recently tests asking for safari were run with safari mac os x see since two days ago tests seemt o be running with safari mac os x see which now makes our builds fail i tried enforcing safari by passing as version string but it wasn t recognized cheers joscha mar pdt skatejs replied yes correct before that we seemed to always get the version then suddenly saucelabs started providing see the travis builds of the master branch they are all public and our builds broke mar pdt matt dunn replied hi we updated our images recently with our new branding when we were building a new image it applied the latest os updates which included an upgrade from safari to safari we fixed that but i imagine that in fixing it the safari version probably changed from to i what way is this breaking your build are you able to update your build to work with or do you particularly need thanks matt mar pdt skatejs replied hi matt yes this is what is breaking our build because there are some fixes between and that allow you to use object defineproperty on event objects in this is not possible we couldn t find a workaround and as this is our production build that suddenly broke we had to disable safari which is quite unfortunate given that is from and probably not a version that is very widely spread as security updates are available all throughout can t you add the security fixes to that image mar pdt matt dunn replied thanks for the info i’ve requested that we rectify this in our safari image i’ll put this case on hold for now and get back to you once i have an update regards matt
| 1
|
219,821
| 17,113,880,922
|
IssuesEvent
|
2021-07-10 23:15:45
|
rchain-community/rgov
|
https://api.github.com/repos/rchain-community/rgov
|
closed
|
Create Test Scripts for displayVote
|
enhancement test
|
After discussion with Jim the next set of test scripts will be for:
- [ ] displayVote.rho
|
1.0
|
Create Test Scripts for displayVote - After discussion with Jim the next set of test scripts will be for:
- [ ] displayVote.rho
|
test
|
create test scripts for displayvote after discussion with jim the next set of test scripts will be for displayvote rho
| 1
|
318,997
| 23,750,207,017
|
IssuesEvent
|
2022-08-31 19:51:20
|
AgnostiqHQ/covalent
|
https://api.github.com/repos/AgnostiqHQ/covalent
|
opened
|
Update file transfer how-to
|
documentation good first issue improvement
|
### What should we add?
List of typos / grammatical errors:
- [ ] `We can perform file transfer operations pre or post electron execution here we illustrate how to perform file transfer using Rsync locally and remotely via SSH.`
- [ ] `We first define a source & destination filepath where we want to transfer a file from the source_filepath location to the destination_filepath location as well as create an empty file in source_filepath to have a file to transfer.` (Run on sentence?)
- [ ] `# Dispatch a workflow to transfer from source to destination, and write to destination file`. (Comma should be removed)
- [ ] `# read from destination file which we wrote to` change to `# Read contents of destination file.`
- [ ] `After executing the workflow we now see a copy of the file (source_filepath) located in my_dest_file. This file transfer occured prior to electon execution.` (The flow is awkward + missing comma in the first sentence)
- [ ] `After workflow execution the file located at source_filepath will be transfered to host 44.202.86.215 in the host’s filesystem (/home/ubuntu/my_dest_file). This file transfer occurs after electron execution.` (Same issues as above)
- [ ] `Similarly we can perform file transfers using Rsync via SSH in order to transfer a file located in source_filepath to a remote host’s filesystem located at /home/ubuntu/my_dest_file` (Missing period at the end of this sentence)
- [ ] `We can perform file transfer between an S3 bucket and local filesystem using the boto3 library. Here we show a simple example where a zip file is downloaded from the S3 bucket before its execution. The electron performs necessary operations on them and the processed files are uploaded back to the S3 bucket.` ->
`We can transfer files between a S3 bucket and the local filesystem using the `boto3` library. Below, we show an example where:
1. A zip file is downloaded from a S3 bucket before the electron execution.
2. The electron is executed and processed files are uploaded back to the S3 bucket.`
|
1.0
|
Update file transfer how-to - ### What should we add?
List of typos / grammatical errors:
- [ ] `We can perform file transfer operations pre or post electron execution here we illustrate how to perform file transfer using Rsync locally and remotely via SSH.`
- [ ] `We first define a source & destination filepath where we want to transfer a file from the source_filepath location to the destination_filepath location as well as create an empty file in source_filepath to have a file to transfer.` (Run on sentence?)
- [ ] `# Dispatch a workflow to transfer from source to destination, and write to destination file`. (Comma should be removed)
- [ ] `# read from destination file which we wrote to` change to `# Read contents of destination file.`
- [ ] `After executing the workflow we now see a copy of the file (source_filepath) located in my_dest_file. This file transfer occured prior to electon execution.` (The flow is awkward + missing comma in the first sentence)
- [ ] `After workflow execution the file located at source_filepath will be transfered to host 44.202.86.215 in the host’s filesystem (/home/ubuntu/my_dest_file). This file transfer occurs after electron execution.` (Same issues as above)
- [ ] `Similarly we can perform file transfers using Rsync via SSH in order to transfer a file located in source_filepath to a remote host’s filesystem located at /home/ubuntu/my_dest_file` (Missing period at the end of this sentence)
- [ ] `We can perform file transfer between an S3 bucket and local filesystem using the boto3 library. Here we show a simple example where a zip file is downloaded from the S3 bucket before its execution. The electron performs necessary operations on them and the processed files are uploaded back to the S3 bucket.` ->
`We can transfer files between a S3 bucket and the local filesystem using the `boto3` library. Below, we show an example where:
1. A zip file is downloaded from a S3 bucket before the electron execution.
2. The electron is executed and processed files are uploaded back to the S3 bucket.`
|
non_test
|
update file transfer how to what should we add list of typos grammatical errors we can perform file transfer operations pre or post electron execution here we illustrate how to perform file transfer using rsync locally and remotely via ssh we first define a source destination filepath where we want to transfer a file from the source filepath location to the destination filepath location as well as create an empty file in source filepath to have a file to transfer run on sentence dispatch a workflow to transfer from source to destination and write to destination file comma should be removed read from destination file which we wrote to change to read contents of destination file after executing the workflow we now see a copy of the file source filepath located in my dest file this file transfer occured prior to electon execution the flow is awkward missing comma in the first sentence after workflow execution the file located at source filepath will be transfered to host in the host’s filesystem home ubuntu my dest file this file transfer occurs after electron execution same issues as above similarly we can perform file transfers using rsync via ssh in order to transfer a file located in source filepath to a remote host’s filesystem located at home ubuntu my dest file missing period at the end of this sentence we can perform file transfer between an bucket and local filesystem using the library here we show a simple example where a zip file is downloaded from the bucket before its execution the electron performs necessary operations on them and the processed files are uploaded back to the bucket we can transfer files between a bucket and the local filesystem using the library below we show an example where a zip file is downloaded from a bucket before the electron execution the electron is executed and processed files are uploaded back to the bucket
| 0
|
238,148
| 19,701,385,973
|
IssuesEvent
|
2022-01-12 16:57:35
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
Failing test: X-Pack Saved Object API Integration Tests -- security_and_spaces.x-pack/test/saved_object_api_integration/security_and_spaces/apis/bulk_create·ts - saved objects security and spaces enabled _bulk_create user with no access within the default space "after all" hook for "should return 403 forbidden [hiddentype/any]"
|
failed-test
|
A test failed on a tracked branch
```
ResponseError: x_content_parse_exception: [parsing_exception] Reason: [1:65] [ids] unknown field [type]
at onBody (/opt/local-ssd/buildkite/builds/kb-n2-4-bf1ddb6b4a216c51/elastic/kibana-7-dot-latest-es-forward-compatibility/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:367:23)
at IncomingMessage.onEnd (/opt/local-ssd/buildkite/builds/kb-n2-4-bf1ddb6b4a216c51/elastic/kibana-7-dot-latest-es-forward-compatibility/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:291:11)
at IncomingMessage.emit (node:events:402:35)
at endReadableNT (node:internal/streams/readable:1343:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
meta: {
body: { error: [Object], status: 400 },
statusCode: 400,
headers: {
'x-elastic-product': 'Elasticsearch',
'content-type': 'application/json;charset=utf-8',
'content-length': '422'
},
meta: {
context: null,
request: [Object],
name: 'elasticsearch-js',
connection: [Object],
attempts: 0,
aborted: false
}
}
}
```
First failure: [CI Build - 7.17](https://buildkite.com/elastic/kibana-7-dot-latest-es-forward-compatibility/builds/1#e408c0a1-ecf6-4057-87fb-386a95c671b3)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Saved Object API Integration Tests -- security_and_spaces.x-pack/test/saved_object_api_integration/security_and_spaces/apis/bulk_create·ts","test.name":"saved objects security and spaces enabled _bulk_create user with no access within the default space \"after all\" hook for \"should return 403 forbidden [hiddentype/any]\"","test.failCount":1}} -->
|
1.0
|
Failing test: X-Pack Saved Object API Integration Tests -- security_and_spaces.x-pack/test/saved_object_api_integration/security_and_spaces/apis/bulk_create·ts - saved objects security and spaces enabled _bulk_create user with no access within the default space "after all" hook for "should return 403 forbidden [hiddentype/any]" - A test failed on a tracked branch
```
ResponseError: x_content_parse_exception: [parsing_exception] Reason: [1:65] [ids] unknown field [type]
at onBody (/opt/local-ssd/buildkite/builds/kb-n2-4-bf1ddb6b4a216c51/elastic/kibana-7-dot-latest-es-forward-compatibility/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:367:23)
at IncomingMessage.onEnd (/opt/local-ssd/buildkite/builds/kb-n2-4-bf1ddb6b4a216c51/elastic/kibana-7-dot-latest-es-forward-compatibility/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:291:11)
at IncomingMessage.emit (node:events:402:35)
at endReadableNT (node:internal/streams/readable:1343:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
meta: {
body: { error: [Object], status: 400 },
statusCode: 400,
headers: {
'x-elastic-product': 'Elasticsearch',
'content-type': 'application/json;charset=utf-8',
'content-length': '422'
},
meta: {
context: null,
request: [Object],
name: 'elasticsearch-js',
connection: [Object],
attempts: 0,
aborted: false
}
}
}
```
First failure: [CI Build - 7.17](https://buildkite.com/elastic/kibana-7-dot-latest-es-forward-compatibility/builds/1#e408c0a1-ecf6-4057-87fb-386a95c671b3)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Saved Object API Integration Tests -- security_and_spaces.x-pack/test/saved_object_api_integration/security_and_spaces/apis/bulk_create·ts","test.name":"saved objects security and spaces enabled _bulk_create user with no access within the default space \"after all\" hook for \"should return 403 forbidden [hiddentype/any]\"","test.failCount":1}} -->
|
test
|
failing test x pack saved object api integration tests security and spaces x pack test saved object api integration security and spaces apis bulk create·ts saved objects security and spaces enabled bulk create user with no access within the default space after all hook for should return forbidden a test failed on a tracked branch responseerror x content parse exception reason unknown field at onbody opt local ssd buildkite builds kb elastic kibana dot latest es forward compatibility kibana node modules elastic elasticsearch lib transport js at incomingmessage onend opt local ssd buildkite builds kb elastic kibana dot latest es forward compatibility kibana node modules elastic elasticsearch lib transport js at incomingmessage emit node events at endreadablent node internal streams readable at processticksandrejections node internal process task queues meta body error status statuscode headers x elastic product elasticsearch content type application json charset utf content length meta context null request name elasticsearch js connection attempts aborted false first failure
| 1
|
168,654
| 13,098,131,963
|
IssuesEvent
|
2020-08-03 18:50:23
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: cdc/ledger/rangefeed=true failed
|
C-test-failure O-roachtest O-robot branch-release-19.2 release-blocker
|
[(roachtest).cdc/ledger/rangefeed=true failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2041142&tab=buildLog) on [release-19.2@6318430c8f0ca0cafb9a34828c9ccdd7ecff68e1](https://github.com/cockroachdb/cockroach/commits/6318430c8f0ca0cafb9a34828c9ccdd7ecff68e1):
```
The test failed on branch=release-19.2, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/cdc/ledger/rangefeed=true/run_1
cdc.go:923,cdc.go:218,cdc.go:609,test_runner.go:753: max latency was more than allowed: 15m36.592741325s vs 1m0s
```
<details><summary>More</summary><p>
Artifacts: [/cdc/ledger/rangefeed=true](https://teamcity.cockroachdb.com/viewLog.html?buildId=2041142&tab=artifacts#/cdc/ledger/rangefeed=true)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Acdc%2Fledger%2Frangefeed%3Dtrue.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: cdc/ledger/rangefeed=true failed - [(roachtest).cdc/ledger/rangefeed=true failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2041142&tab=buildLog) on [release-19.2@6318430c8f0ca0cafb9a34828c9ccdd7ecff68e1](https://github.com/cockroachdb/cockroach/commits/6318430c8f0ca0cafb9a34828c9ccdd7ecff68e1):
```
The test failed on branch=release-19.2, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/cdc/ledger/rangefeed=true/run_1
cdc.go:923,cdc.go:218,cdc.go:609,test_runner.go:753: max latency was more than allowed: 15m36.592741325s vs 1m0s
```
<details><summary>More</summary><p>
Artifacts: [/cdc/ledger/rangefeed=true](https://teamcity.cockroachdb.com/viewLog.html?buildId=2041142&tab=artifacts#/cdc/ledger/rangefeed=true)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Acdc%2Fledger%2Frangefeed%3Dtrue.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
test
|
roachtest cdc ledger rangefeed true failed on the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts cdc ledger rangefeed true run cdc go cdc go cdc go test runner go max latency was more than allowed vs more artifacts powered by
| 1
|
39,323
| 6,734,898,195
|
IssuesEvent
|
2017-10-18 19:43:17
|
Esri/solutions-erg-widget
|
https://api.github.com/repos/Esri/solutions-erg-widget
|
closed
|
Documentation - Use the ERG Widget - Missing space
|
3 - Verify C-XS G-Documentation
|
There is a space missing between for and using in the last paragraph:

|
1.0
|
Documentation - Use the ERG Widget - Missing space - There is a space missing between for and using in the last paragraph:

|
non_test
|
documentation use the erg widget missing space there is a space missing between for and using in the last paragraph
| 0
|
457,832
| 13,163,032,828
|
IssuesEvent
|
2020-08-10 23:08:24
|
buddyboss/buddyboss-platform
|
https://api.github.com/repos/buddyboss/buddyboss-platform
|
opened
|
BB REST API : Gets Activities of a user_id
|
bug priority: medium
|
**Describe the bug**
I try to list activities of a specific user_id with the request below. But that request returns too many activities (I can't filter activities specific of a certain user).
`GET {{host}}/wp-json/buddyboss/v1/activity?user_id=12`
What I want to achieve is to retrieve the same activities listed on the page : http://buddyboss.localhost/members/bb-neville/activity/
Here are the parameters used by this page when calling the function get in the file class-bp-activitiy-activity.php. As you can see, the field "scope" contains 2 parameters. But because this field is a "String" in the BB REST API, we can't request the same activities from the REST API.
```
Array
(
[page] => 1
[per_page] => 20
[max] =>
[sort] => DESC
[privacy] =>
[search_terms] =>
[meta_query] =>
[date_query] =>
[filter_query] =>
[filter] => Array
(
[user_id] => 12
[object] =>
[action] =>
[primary_id] =>
[secondary_id] =>
[offset] =>
[since] =>
)
[scope] => just-me,mentions
[display_comments] => threaded
[show_hidden] =>
[exclude] =>
[in] =>
[spam] => ham_only
[update_meta_cache] => 1
[count_total] =>
[fields] => all
)
```
In order to correct this issue, I suggest 2 modifications :
1) Modify the scope parameter to become an array (in place of a String)
2) When user_id is set and scope is not set (via the rest api parameters). Automatically set scope = [just-me,mentions]
Regards,
|
1.0
|
BB REST API : Gets Activities of a user_id - **Describe the bug**
I try to list activities of a specific user_id with the request below. But that request returns too many activities (I can't filter activities specific of a certain user).
`GET {{host}}/wp-json/buddyboss/v1/activity?user_id=12`
What I want to achieve is to retrieve the same activities listed on the page : http://buddyboss.localhost/members/bb-neville/activity/
Here are the parameters used by this page when calling the function get in the file class-bp-activitiy-activity.php. As you can see, the field "scope" contains 2 parameters. But because this field is a "String" in the BB REST API, we can't request the same activities from the REST API.
```
Array
(
[page] => 1
[per_page] => 20
[max] =>
[sort] => DESC
[privacy] =>
[search_terms] =>
[meta_query] =>
[date_query] =>
[filter_query] =>
[filter] => Array
(
[user_id] => 12
[object] =>
[action] =>
[primary_id] =>
[secondary_id] =>
[offset] =>
[since] =>
)
[scope] => just-me,mentions
[display_comments] => threaded
[show_hidden] =>
[exclude] =>
[in] =>
[spam] => ham_only
[update_meta_cache] => 1
[count_total] =>
[fields] => all
)
```
In order to correct this issue, I suggest 2 modifications :
1) Modify the scope parameter to become an array (in place of a String)
2) When user_id is set and scope is not set (via the rest api parameters). Automatically set scope = [just-me,mentions]
Regards,
|
non_test
|
bb rest api gets activities of a user id describe the bug i try to list activities of a specific user id with the request below but that request returns too many activities i can t filter activities specific of a certain user get host wp json buddyboss activity user id what i want to achieve is to retrieve the same activities listed on the page here are the parameters used by this page when calling the function get in the file class bp activitiy activity php as you can see the field scope contains parameters but because this field is a string in the bb rest api we can t request the same activities from the rest api array desc array just me mentions threaded ham only all in order to correct this issue i suggest modifications modify the scope parameter to become an array in place of a string when user id is set and scope is not set via the rest api parameters automatically set scope regards
| 0
|
740,632
| 25,761,402,855
|
IssuesEvent
|
2022-12-08 20:52:44
|
rmlockwood/FLExTrans
|
https://api.github.com/repos/rmlockwood/FLExTrans
|
closed
|
Stem Names omitted from affixes with more features than referenced in the Stem Name in FLEx
|
bug high priority
|
The Stem Name functionality in FLEx requires the user to list which features belong to affixes for which that allomorph of a stem should be chosen.
If the features are fully specified (e.g., third person, singular number), then FLExTrans is currently correctly adding the properties in the affix lexicon to the relevant affix.
If the features on the Stem Name are underspecified (e.g., "singular" for affixes that show both number and person), then the property is not getting added to all the affixes that need it. (e.g., if an affix has features [pers:3 num:sg] the property should be added, but it is not)
An example of this problem is in the Google Drive folder https://drive.google.com/drive/folders/1_0x1TzjlvqOWFAyzeqnfKuetYU3Ulsdr?usp=sharing
|
1.0
|
Stem Names omitted from affixes with more features than referenced in the Stem Name in FLEx - The Stem Name functionality in FLEx requires the user to list which features belong to affixes for which that allomorph of a stem should be chosen.
If the features are fully specified (e.g., third person, singular number), then FLExTrans is currently correctly adding the properties in the affix lexicon to the relevant affix.
If the features on the Stem Name are underspecified (e.g., "singular" for affixes that show both number and person), then the property is not getting added to all the affixes that need it. (e.g., if an affix has features [pers:3 num:sg] the property should be added, but it is not)
An example of this problem is in the Google Drive folder https://drive.google.com/drive/folders/1_0x1TzjlvqOWFAyzeqnfKuetYU3Ulsdr?usp=sharing
|
non_test
|
stem names omitted from affixes with more features than referenced in the stem name in flex the stem name functionality in flex requires the user to list which features belong to affixes for which that allomorph of a stem should be chosen if the features are fully specified e g third person singular number then flextrans is currently correctly adding the properties in the affix lexicon to the relevant affix if the features on the stem name are underspecified e g singular for affixes that show both number and person then the property is not getting added to all the affixes that need it e g if an affix has features the property should be added but it is not an example of this problem is in the google drive folder
| 0
|
530,419
| 15,422,730,106
|
IssuesEvent
|
2021-03-05 14:44:59
|
jahirfiquitiva/Frames
|
https://api.github.com/repos/jahirfiquitiva/Frames
|
closed
|
Sec media storage crash
|
Priority: Low Status: Invalid Status: Not Reproducible
|
Whenever I try to save multiple wallpapers on apps like joywalls, wallflair, reev pro icon pack which use your wallpaper dashboard, the app sec media process and download manager crashes along with the app.
This is only happening on Samsung one ui3 3.0 and 3.1 on my s10 and s21 ultra. Apps work done on one up 2.5 and aosp rom.
I am using latest versions of all these apps and have tried clearing cache and data off crashing apps as well as cache from recovery. <!--
Any HTML comment will be stripped when the markdown is rendered, so you don't need to delete them.
-->
- [x] I have verified there are no duplicate active or recent bugs, questions, or requests
- [x] I have verified that I am using the latest version.
### Device/App info:
- Frames Version: `?`
- Android version: `?`
- Device Manufacturer: `?`
- Device Name: `?`
### Describe the bug
A clear and concise description of what the bug is.
### Reproduction Steps
1.
2.
3.
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
### Screenshots
<!-- If applicable, add screenshots or videos to help explain your problem. -->
### Code and/or Logs
<!--
Please wrap code with correct syntax highlighting. You can remove it if you think it isn't necessary.
-->
```kotlin
println("Hello, world!")
```
<!--
If you are getting an error in the LogCat, paste here the stack trace.
Please wrap logs with Gradle syntax highlighting (it makes them look better).
-->
```Gradle
java.lang.RuntimeException: This is an example Exception log
at com.package.name.HelloWorld
at com.package.name.HelloWorld$ThisIsNotARealLog
at android.app.Instrumentation.callActivityOnResume(Instrumentation.kt)
```
### Additional context
<!-- Add any other context about the problem here. -->
|
1.0
|
Sec media storage crash - Whenever I try to save multiple wallpapers on apps like joywalls, wallflair, reev pro icon pack which use your wallpaper dashboard, the app sec media process and download manager crashes along with the app.
This is only happening on Samsung one ui3 3.0 and 3.1 on my s10 and s21 ultra. Apps work done on one up 2.5 and aosp rom.
I am using latest versions of all these apps and have tried clearing cache and data off crashing apps as well as cache from recovery. <!--
Any HTML comment will be stripped when the markdown is rendered, so you don't need to delete them.
-->
- [x] I have verified there are no duplicate active or recent bugs, questions, or requests
- [x] I have verified that I am using the latest version.
### Device/App info:
- Frames Version: `?`
- Android version: `?`
- Device Manufacturer: `?`
- Device Name: `?`
### Describe the bug
A clear and concise description of what the bug is.
### Reproduction Steps
1.
2.
3.
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
### Screenshots
<!-- If applicable, add screenshots or videos to help explain your problem. -->
### Code and/or Logs
<!--
Please wrap code with correct syntax highlighting. You can remove it if you think it isn't necessary.
-->
```kotlin
println("Hello, world!")
```
<!--
If you are getting an error in the LogCat, paste here the stack trace.
Please wrap logs with Gradle syntax highlighting (it makes them look better).
-->
```Gradle
java.lang.RuntimeException: This is an example Exception log
at com.package.name.HelloWorld
at com.package.name.HelloWorld$ThisIsNotARealLog
at android.app.Instrumentation.callActivityOnResume(Instrumentation.kt)
```
### Additional context
<!-- Add any other context about the problem here. -->
|
non_test
|
sec media storage crash whenever i try to save multiple wallpapers on apps like joywalls wallflair reev pro icon pack which use your wallpaper dashboard the app sec media process and download manager crashes along with the app this is only happening on samsung one and on my and ultra apps work done on one up and aosp rom i am using latest versions of all these apps and have tried clearing cache and data off crashing apps as well as cache from recovery any html comment will be stripped when the markdown is rendered so you don t need to delete them i have verified there are no duplicate active or recent bugs questions or requests i have verified that i am using the latest version device app info frames version android version device manufacturer device name describe the bug a clear and concise description of what the bug is reproduction steps expected behavior screenshots code and or logs please wrap code with correct syntax highlighting you can remove it if you think it isn t necessary kotlin println hello world if you are getting an error in the logcat paste here the stack trace please wrap logs with gradle syntax highlighting it makes them look better gradle java lang runtimeexception this is an example exception log at com package name helloworld at com package name helloworld thisisnotareallog at android app instrumentation callactivityonresume instrumentation kt additional context
| 0
|
280,151
| 24,280,777,019
|
IssuesEvent
|
2022-09-28 17:11:48
|
PharmaLedger-IMI/eco-iot-pmed-workspace
|
https://api.github.com/repos/PharmaLedger-IMI/eco-iot-pmed-workspace
|
closed
|
UI improvments for notification list
|
enhancement business-testing ux
|
Notifications show in reverse order but with no clear indication of how a patient should review them. Suggest making that more intuitive (with 1, 2, 3 or something).
|
1.0
|
UI improvments for notification list - Notifications show in reverse order but with no clear indication of how a patient should review them. Suggest making that more intuitive (with 1, 2, 3 or something).
|
test
|
ui improvments for notification list notifications show in reverse order but with no clear indication of how a patient should review them suggest making that more intuitive with or something
| 1
|
69,348
| 13,237,484,989
|
IssuesEvent
|
2020-08-18 21:47:13
|
DataBiosphere/azul
|
https://api.github.com/repos/DataBiosphere/azul
|
closed
|
DSS adapter fails on prefixes starting with 0s
|
bug code demoed orange
|
Due to a bug in `get_prefix_list()`, the DSS Adapter will process an incorrect list of bundles when given a prefix starting with `0`. The problem stems from the prefix list being generated using the `hex()` function which ignores prefixed `0`'s
```
>>> from scripts.dss_v2_apdapter import DSSv2Adapter
# Correct examples:
>>> DSSv2Adapter.get_prefix_list(DSSv2Adapter, 'a')
['a']
>>> DSSv2Adapter.get_prefix_list(DSSv2Adapter, 'aa')
['aa']
>>> DSSv2Adapter.get_prefix_list(DSSv2Adapter, '0')
['0']
# Incorrect examples:
>>> DSSv2Adapter.get_prefix_list(DSSv2Adapter, '00')
['0']
>>> DSSv2Adapter.get_prefix_list(DSSv2Adapter, '00a')
['a']
```
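A minimal sketch of a width-preserving fix (illustrative only — `expand_prefix` is an invented name, not the adapter's real method): re-padding the formatted value to the original string length restores the leading zeros that the `int()`/`hex()` round-trip drops.

```python
def expand_prefix(prefix: str) -> list:
    """Return the prefix list while preserving leading zeros."""
    value = int(prefix, 16)  # validates that the prefix is hexadecimal
    # format(..., "x") drops leading zeros just like hex(); zfill()
    # pads back to the width of the original prefix string.
    return [format(value, "x").zfill(len(prefix))]

# expand_prefix("00")  -> ["00"]   (the buggy version returned ["0"])
# expand_prefix("00a") -> ["00a"]  (the buggy version returned ["a"])
```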
|
1.0
|
DSS adapter fails on prefixes starting with 0s - Due to a bug in `get_prefix_list()`, the DSS Adapter will process an incorrect list of bundles when given a prefix starting with `0`. The problem stems from the prefix list being generated using the `hex()` function which ignores prefixed `0`'s
```
>>> from scripts.dss_v2_apdapter import DSSv2Adapter
# Correct examples:
>>> DSSv2Adapter.get_prefix_list(DSSv2Adapter, 'a')
['a']
>>> DSSv2Adapter.get_prefix_list(DSSv2Adapter, 'aa')
['aa']
>>> DSSv2Adapter.get_prefix_list(DSSv2Adapter, '0')
['0']
# Incorrect examples:
>>> DSSv2Adapter.get_prefix_list(DSSv2Adapter, '00')
['0']
>>> DSSv2Adapter.get_prefix_list(DSSv2Adapter, '00a')
['a']
```
|
non_test
|
dss adapter fails on prefixes starting with due to a bug in get prefix list the dss adapter will process an incorrect list of bundles when given a prefix starting with the problem stems from the prefix list being generated using the hex function which ignores prefixed s from scripts dss apdapter import correct examples get prefix list a get prefix list aa get prefix list incorrect examples get prefix list get prefix list
| 0
|
114,815
| 9,761,614,225
|
IssuesEvent
|
2019-06-05 09:12:09
|
pingcap/tidb-operator
|
https://api.github.com/repos/pingcap/tidb-operator
|
closed
|
TiDB Binlog backup not works on stability test
|
bug stability-tests
|
Now the operation steps are:
- stop insert data into the source TiDB cluster
- deploy ad-hoc backup
- restore from the backup to a new TiDB cluster
- ensure the data is correct
- deploy the incremental backup (pump and drainer)
- ensure the data is correct again
The correct operation steps should be:
- **don't** stop insert data into the source TiDB cluster
- deploy the incremental backup (only pump)
- deploy ad-hoc backup and get the TS from this backup logs
- restore from the backup to a new TiDB cluster
- deploy the incremental backup (pump and drainer with TS)
- stop insert data into the source TiDB cluster
- ensure the data is correct in the end
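The two ordering constraints that matter here — pump must be running before the ad-hoc backup, and the drainer (with the recorded TS) must be running before writes stop — can be spelled out explicitly. A purely illustrative sketch (step names are made up, not tooling from this repo):

```python
# Illustrative checklist of the corrected stability-test flow.
CORRECT_STEPS = [
    "keep inserting data into the source cluster",
    "deploy pump",
    "run ad-hoc backup and record its TS from the logs",
    "restore the backup into a new cluster",
    "deploy drainer starting from the recorded TS",
    "stop inserting data into the source cluster",
    "verify the data in the end",
]

def ordered_before(steps: list, first: str, second: str) -> bool:
    """True if `first` appears before `second` in the checklist."""
    return steps.index(first) < steps.index(second)
```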
|
1.0
|
TiDB Binlog backup does not work on stability test - Now the operation steps are:
- stop insert data into the source TiDB cluster
- deploy ad-hoc backup
- restore from the backup to a new TiDB cluster
- ensure the data is correct
- deploy the incremental backup (pump and drainer)
- ensure the data is correct again
The correct operation steps should be:
- **don't** stop insert data into the source TiDB cluster
- deploy the incremental backup (only pump)
- deploy ad-hoc backup and get the TS from this backup logs
- restore from the backup to a new TiDB cluster
- deploy the incremental backup (pump and drainer with TS)
- stop insert data into the source TiDB cluster
- ensure the data is correct in the end
|
test
|
tidb binlog backup does not work on stability test now the operation steps are stop insert data into the source tidb cluster deploy ad hoc backup restore from the backup to a new tidb cluster ensure the data is correct deploy the incremental backup pump and drainer ensure the data is correct again the correct operation steps should be don t stop insert data into the source tidb cluster deploy the incremental backup only pump deploy ad hoc backup and get the ts from this backup logs restore from the backup to a new tidb cluster deploy the incremental backup pump and drainer with ts stop insert data into the source tidb cluster ensure the data is correct in the end
| 1
|
57
| 2,516,123,874
|
IssuesEvent
|
2015-01-15 23:29:35
|
GsDevKit/gsApplicationTools
|
https://api.github.com/repos/GsDevKit/gsApplicationTools
|
closed
|
need better commit failure handling in GemServer>>startGems
|
in process
|
While running Seaside gem server tests ran into a commit conflict during startGems ... and when a commit failure occurs, you really need to handle it correctly:
- acquire conflicts dictionary
- abort
- throw error
otherwise you will be in trouble on the next attempt to commit and not really understand what happened
|
1.0
|
need better commit failure handling in GemServer>>startGems - While running Seaside gem server tests ran into a commit conflict during startGems ... and when a commit failure occurs, you really need to handle it correctly:
- acquire conflicts dictionary
- abort
- throw error
otherwise you will be in trouble on the next attempt to commit and not really understand what happened
|
non_test
|
need better commit failure handling in gemserver startgems while running seaside gem server tests ran into a commit conflict during startgems and when a commit failure occurs you really need to handle it correctly acquire conflicts dictionary abort throw error otherwise you will be in trouble on the next attempt to commit and not really understand what happened
| 0
|
308,413
| 26,605,516,123
|
IssuesEvent
|
2023-01-23 19:00:41
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
closed
|
It takes about 16 seconds for a newly created IoT blob container tab to auto-open
|
:heavy_check_mark: won't fix 🧪 testing :gear: blobs
|
**Storage Explorer Version**: 1.28.0-dev
**Build Number**: 20230106.1
**Branch**: main
**Platform/OS**: Windows 10
**Architecture**: ia32
**How Found**: From running test cases
**Regression From**: Not a regression
## Steps to Reproduce ##
1. Launch Storage Explorer.
2. Connect to a blob storage module on edge using the connection string.
3. Switch to the attached storage account -> Blob Containers.
4. Create a blob container.
5. Check whether the tab opens without delay.
## Expected Experience ##
The tab opens without delay.
## Actual Experience ##
It takes about 16 seconds to open the tab.
## Additional Context ##
Here is the record:

|
1.0
|
It takes about 16 seconds for a newly created IoT blob container tab to auto-open - **Storage Explorer Version**: 1.28.0-dev
**Build Number**: 20230106.1
**Branch**: main
**Platform/OS**: Windows 10
**Architecture**: ia32
**How Found**: From running test cases
**Regression From**: Not a regression
## Steps to Reproduce ##
1. Launch Storage Explorer.
2. Connect to a blob storage module on edge using the connection string.
3. Switch to the attached storage account -> Blob Containers.
4. Create a blob container.
5. Check whether the tab opens without delay.
## Expected Experience ##
The tab opens without delay.
## Actual Experience ##
It takes about 16 seconds to open the tab.
## Additional Context ##
Here is the record:

|
test
|
it takes about seconds for a newly created iot blob container tab to auto open storage explorer version dev build number branch main platform os windows architecture how found from running test cases regression from not a regression steps to reproduce launch storage explorer connect to a blob storage module on edge using the connection string switch to the attached storage account blob containers create a blob container check whether the tab opens without delay expected experience the tab opens without delay actual experience it takes about seconds to open the tab additional context here is the record
| 1
|
184,246
| 14,283,113,437
|
IssuesEvent
|
2020-11-23 10:32:51
|
ICIJ/datashare
|
https://api.github.com/repos/ICIJ/datashare
|
closed
|
Ansible refactor
|
enhancement full stack need testing
|
- [x] Replace the docker role by our own docker role
- [x] Move datashare-staging config from the role/defaults/main.yml to playbook/staging/group_vars/all/all.yml
- [x] Move production from monolith repo to datashare playbook
- [x] Create a new server for production
- [x] Extract preview as a list of tasks
- [x] Extract redis as a list of tasks
- [x] Extract elasticsearch as a list of tasks
- [x] Extract postgres as a list of tasks
- [x] Enable these lists conditionally
- [x] Add these roles into datashare-staging playbook
- [x] Rename the role datashare-staging into datashare
|
1.0
|
Ansible refactor - - [x] Replace the docker role by our own docker role
- [x] Move datashare-staging config from the role/defaults/main.yml to playbook/staging/group_vars/all/all.yml
- [x] Move production from monolith repo to datashare playbook
- [x] Create a new server for production
- [x] Extract preview as a list of tasks
- [x] Extract redis as a list of tasks
- [x] Extract elasticsearch as a list of tasks
- [x] Extract postgres as a list of tasks
- [x] Enable these lists conditionally
- [x] Add these roles into datashare-staging playbook
- [x] Rename the role datashare-staging into datashare
|
test
|
ansible refactor replace the docker role by our own docker role move datashare staging config from the role defaults main yml to playbook staging group vars all all yml move production from monolith repo to datashare playbook create a new server for production extract preview as a list of tasks extract redis as a list of tasks extract elasticsearch as a list of tasks extract postgres as a list of tasks enable these lists conditionally add these roles into datashare staging playbook rename the role datashare staging into datashare
| 1
|
244,778
| 7,879,861,474
|
IssuesEvent
|
2018-06-26 14:28:39
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
apps.facebook.com - see bug description
|
browser-firefox priority-critical
|
<!-- @browser: Firefox 62.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:62.0) Gecko/20100101 Firefox/62.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://apps.facebook.com/hititrich/?fb_source=bookmark&ref=bookmarks&count=0&fb_bmpos=_0
**Browser / Version**: Firefox 62.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: same thing page not loading
**Steps to Reproduce**:
differ browser
[](https://webcompat.com/uploads/2018/6/ebb46441-f369-46c1-b862-989e80eae072.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>buildID: 20180621152331</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.all: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>channel: aurora</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
apps.facebook.com - see bug description - <!-- @browser: Firefox 62.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:62.0) Gecko/20100101 Firefox/62.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://apps.facebook.com/hititrich/?fb_source=bookmark&ref=bookmarks&count=0&fb_bmpos=_0
**Browser / Version**: Firefox 62.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: same thing page not loading
**Steps to Reproduce**:
differ browser
[](https://webcompat.com/uploads/2018/6/ebb46441-f369-46c1-b862-989e80eae072.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>buildID: 20180621152331</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.all: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>channel: aurora</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
apps facebook com see bug description url browser version firefox operating system windows tested another browser yes problem type something else description same thing page not loading steps to reproduce differ browser browser configuration mixed active content blocked false buildid tracking content blocked false gfx webrender blob images true gfx webrender all false mixed passive content blocked false gfx webrender enabled false image mem shared true channel aurora from with ❤️
| 0
|
263,480
| 19,912,494,999
|
IssuesEvent
|
2022-01-25 18:38:21
|
nasa/fprime
|
https://api.github.com/repos/nasa/fprime
|
closed
|
Incorrect references to non-existent MagicDraw files
|
bug Documentation Easy First Issue
|
| | |
|:---|:---|
|**_F´ Version_**| 3.0.0 |
|**_Affected Component_**| Documentation |
---
## Problem Description
The next two issues, #1166 #974, required a project purge to remove the now obsolete files, including the mdxml.
Since the @LeStarch commit https://github.com/nasa/fprime/pull/1167/commits/97de3c264553a5348ecf6c26d9e30cdc95aa2567, the last traces of MagicDraw support have been "officially" effective, including the removal of the mdxml files.
However, some component ``README`` files still contain references to MagicDraw project files.
Example:
https://github.com/nasa/fprime/blob/313ef0556baec4a981f03e66de64d9193a84b867/Fw/Cmd/README#L7
## How to Reproduce
1. Launch ``grep -nr 'mdxml'`` inside fprime folder
2. See the following output:
```ps
fprime$ grep -nr 'mdxml'
.dockerignore:4:**.mdxml
Binary file .git/objects/pack/pack-ba5f57e658c47e79b316f15efb42028e0016b801.pack matches
.github/actions/spelling/excludes.txt:21:\.mdxml$
.github/actions/spelling/expect.txt:935:mdxml
.gitignore:36:*.mdxml.bak
docs/UsersGuide/dev/magicdraw.md:18:or examples will work. Open up the `Ref/Top/REFApplication.mdxml` file to begin. If import errors arise, the user will
Fw/Cmd/README:7:CmdModule.mdxml - MagicDraw project file describing command interface
Fw/Com/README:4:ComModule.mdxml - A MagicDraw project file describing a communication buffer port
Fw/Log/README:8:LogModule.mdxml - MagicDraw project file describing event interface
Fw/Prm/README:7:PrmModule.mdxml - MagicDraw project file that describes the parameter interface
Fw/Time/README:5:TimeModule.mdxml - MagicDraw project file describing time interface
Fw/Tlm/README:7:TlmModule.mdxml - MagicDraw project file for telemetry interface
Svc/ActiveLogger/README:7:ActiveLoggerModule.mdxml - MagicDraw project file describing the active logger
Svc/ActiveRateGroup/README:7:ActiveRateGroupModule.mdxml - The MagicDraw project file describing the component
Svc/CmdDispatcher/README:7:CmdDispatcherModule.mdxml - MagicDraw project file describing component
Svc/PassiveConsoleTextLogger/README:5:PassiveTextLoggerModule.mdxml - MagicDraw project file that describes the component
Svc/PolyDb/README:7:PolyDbModule.mdxml - MagicDraw project file that describes the PolyDb component
Svc/PolyIf/README:5:PolyIfModule.mdxml - MagicDraw project file that describes the interface
Svc/PrmDb/README:10:PrmDbModule.mdxml - MagicDraw project file that describes the parameter database component
Svc/RateGroupDriver/README:7:RateGroupDriverModule.mdxml - MagicDraw project file that describes rate group driver component
Svc/Sched/README:4:SchedModule.mdxml - MagicDraw project file that describes scheduler port
Svc/Time/README:4:TimeCompModule.mdxml - The MagicDraw project file that describes the time component.
```
## Expected Behavior
Do not provide misleading or incorrect information about non-existent files to the user when browsing the repository or reading the component documentation.
---
🏷️ Advised issue label: ``Documentation`` ``Easy First Issue``.
|
1.0
|
Incorrect references to non-existent MagicDraw files - | | |
|:---|:---|
|**_F´ Version_**| 3.0.0 |
|**_Affected Component_**| Documentation |
---
## Problem Description
The next two issues, #1166 #974, required a project purge to remove the now obsolete files, including the mdxml.
Since the @LeStarch commit https://github.com/nasa/fprime/pull/1167/commits/97de3c264553a5348ecf6c26d9e30cdc95aa2567, the last traces of MagicDraw support have been "officially" effective, including the removal of the mdxml files.
However, some component ``README`` files still contain references to MagicDraw project files.
Example:
https://github.com/nasa/fprime/blob/313ef0556baec4a981f03e66de64d9193a84b867/Fw/Cmd/README#L7
## How to Reproduce
1. Launch ``grep -nr 'mdxml'`` inside fprime folder
2. See the following output:
```ps
fprime$ grep -nr 'mdxml'
.dockerignore:4:**.mdxml
Binary file .git/objects/pack/pack-ba5f57e658c47e79b316f15efb42028e0016b801.pack matches
.github/actions/spelling/excludes.txt:21:\.mdxml$
.github/actions/spelling/expect.txt:935:mdxml
.gitignore:36:*.mdxml.bak
docs/UsersGuide/dev/magicdraw.md:18:or examples will work. Open up the `Ref/Top/REFApplication.mdxml` file to begin. If import errors arise, the user will
Fw/Cmd/README:7:CmdModule.mdxml - MagicDraw project file describing command interface
Fw/Com/README:4:ComModule.mdxml - A MagicDraw project file describing a communication buffer port
Fw/Log/README:8:LogModule.mdxml - MagicDraw project file describing event interface
Fw/Prm/README:7:PrmModule.mdxml - MagicDraw project file that describes the parameter interface
Fw/Time/README:5:TimeModule.mdxml - MagicDraw project file describing time interface
Fw/Tlm/README:7:TlmModule.mdxml - MagicDraw project file for telemetry interface
Svc/ActiveLogger/README:7:ActiveLoggerModule.mdxml - MagicDraw project file describing the active logger
Svc/ActiveRateGroup/README:7:ActiveRateGroupModule.mdxml - The MagicDraw project file describing the component
Svc/CmdDispatcher/README:7:CmdDispatcherModule.mdxml - MagicDraw project file describing component
Svc/PassiveConsoleTextLogger/README:5:PassiveTextLoggerModule.mdxml - MagicDraw project file that describes the component
Svc/PolyDb/README:7:PolyDbModule.mdxml - MagicDraw project file that describes the PolyDb component
Svc/PolyIf/README:5:PolyIfModule.mdxml - MagicDraw project file that describes the interface
Svc/PrmDb/README:10:PrmDbModule.mdxml - MagicDraw project file that describes the parameter database component
Svc/RateGroupDriver/README:7:RateGroupDriverModule.mdxml - MagicDraw project file that describes rate group driver component
Svc/Sched/README:4:SchedModule.mdxml - MagicDraw project file that describes scheduler port
Svc/Time/README:4:TimeCompModule.mdxml - The MagicDraw project file that describes the time component.
```
## Expected Behavior
Do not provide misleading or incorrect information about non-existent files to the user when browsing the repository or reading the component documentation.
---
🏷️ Advised issue label: ``Documentation`` ``Easy First Issue``.
|
non_test
|
incorrect references to non existent magicdraw files f´ version affected component documentation problem description the next two issues required a project purge to remove the now obsolete files including the mdxml since the lestarch commit the last traces of magicdraw support have been officially effective including the removal of the mdxml files however some component readme files still contain references to magicdraw project files exemple how to reproduce launch grep nr mdxml inside fprime folder see the following output ps fprime grep nr mdxml dockerignore mdxml binary file git objects pack pack pack matches github actions spelling excludes txt mdxml github actions spelling expect txt mdxml gitignore mdxml bak docs usersguide dev magicdraw md or examples will work open up the ref top refapplication mdxml file to begin if import errors arise the user will fw cmd readme cmdmodule mdxml magicdraw project file describing command interface fw com readme commodule mdxml a magicdraw project file describing a communication buffer port fw log readme logmodule mdxml magicdraw project file describing event interface fw prm readme prmmodule mdxml magicdraw project file that describes the parameter interface fw time readme timemodule mdxml magicdraw project file describing time interface fw tlm readme tlmmodule mdxml magicdraw project file for telemetry interface svc activelogger readme activeloggermodule mdxml magicdraw project file describing the active logger svc activerategroup readme activerategroupmodule mdxml the magicdraw project file describing the component svc cmddispatcher readme cmddispatchermodule mdxml magicdraw project file describing component svc passiveconsoletextlogger readme passivetextloggermodule mdxml magicdraw project file that describes the component svc polydb readme polydbmodule mdxml magicdraw project file that describes the polydb component svc polyif readme polyifmodule mdxml magicdraw project file that describes the interface svc prmdb readme 
prmdbmodule mdxml magicdraw project file that describes the parameter database component svc rategroupdriver readme rategroupdrivermodule mdxml magicdraw project file that describes rate group driver component svc sched readme schedmodule mdxml magicdraw project file that describes scheduler port svc time readme timecompmodule mdxml the magicdraw project file that describes the time component expected behavior do not provide misleading or incorrect information about non existent files to the user when browsing the repository or reading the component documentation 🏷️ advised issue label documentation easy first issue
| 0
|
103,457
| 11,356,950,205
|
IssuesEvent
|
2020-01-25 00:56:18
|
JKISoftware/Caraya
|
https://api.github.com/repos/JKISoftware/Caraya
|
closed
|
Error 7002 with test suites - error description needed
|
beta-testing 1.0 documentation
|
Using release 1.0.0.72 with LabVIEW 2019, I see error 7002 get returned from Destroy Test Suite if there is any failing test in the test suite.
Is this intentional? Looks like it gets sent via the Test Assert Factory notifier on any failing test from the test manager VI.
If it is intentional, it would be great to have something other than this get reported:

Perhaps when sending the 7002 error, we could include a more descriptive error message?
|
1.0
|
Error 7002 with test suites - error description needed - Using release 1.0.0.72 with LabVIEW 2019, I see error 7002 get returned from Destroy Test Suite if there is any failing test in the test suite.
Is this intentional? Looks like it gets sent via the Test Assert Factory notifier on any failing test from the test manager VI.
If it is intentional, it would be great to have something other than this get reported:

Perhaps when sending the 7002 error, we could include a more descriptive error message?
|
non_test
|
error with test suites error description needed using release with labview i see error get returned from destroy test suite if there is any failing test in the test suite is this intentional looks like it gets sent via the test assert factory notifier on any failing test from the test manager vi if it is intentional it would be great to have something other than this get reported perhaps when sending the error we could include a more descriptive error message
| 0
|
227,692
| 18,093,064,422
|
IssuesEvent
|
2021-09-22 05:30:24
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Switch from sending membership event for kick to using the /kick API
|
A-User-Info T-Enhancement A-Slash-Commands X-Needs-Testing
|
I created a PR to Synapse to align Synapse's implementation of membership events with the way that the spec says they should work https://github.com/matrix-org/synapse/pull/10807. The problem with that is it will reintroduce https://github.com/vector-im/element-web/issues/1961 because right now when you click the kick button in MemberInfo or use the /kick slash command Element sends a membership event for that user rather than using the dedicated API https://spec.matrix.org/unstable/client-server-api/#post_matrixclientr0roomsroomidkick
So Element needs to switch over to using the /kick API so that behavior does not come back once the Synapse PR is merged.
- [x] Change slash command to use dedicated /kick API
- [x] Change MemberInfo to use dedicated /kick API
- [x] Investigate if there are any other areas where kicking or banning happen via membership events. The ban and unban buttons in MemberInfo, /ban and /unban slash commands, and the unban button in the room settings seem to all use the proper APIs.
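At the HTTP level, "use the dedicated API" means POSTing to the kick endpoint from the client-server spec instead of sending an `m.room.member` event. A hedged stdlib sketch (helper names are invented; a real client should also percent-encode the room ID):

```python
import json
import urllib.request

MATRIX_KICK_PATH = "/_matrix/client/r0/rooms/{room_id}/kick"

def kick_url(homeserver: str, room_id: str) -> str:
    # Dedicated kick endpoint, as opposed to PUTting a membership event.
    return homeserver + MATRIX_KICK_PATH.format(room_id=room_id)

def kick(homeserver: str, token: str, room_id: str, user_id: str, reason: str = ""):
    req = urllib.request.Request(
        kick_url(homeserver, room_id) + "?access_token=" + token,
        data=json.dumps({"user_id": user_id, "reason": reason}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)  # network call; sketch only
```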
|
1.0
|
Switch from sending membership event for kick to using the /kick API - I created a PR to Synapse to align Synapse's implementation of membership events with the way that the spec says they should work https://github.com/matrix-org/synapse/pull/10807. The problem with that is it will reintroduce https://github.com/vector-im/element-web/issues/1961 because right now when you click the kick button in MemberInfo or use the /kick slash command Element sends a membership event for that user rather than using the dedicated API https://spec.matrix.org/unstable/client-server-api/#post_matrixclientr0roomsroomidkick
So Element needs to switch over to using the /kick API so that behavior does not come back once the Synapse PR is merged.
- [x] Change slash command to use dedicated /kick API
- [x] Change MemberInfo to use dedicated /kick API
- [x] Investigate if there are any other areas where kicking or banning happen via membership events. The ban and unban buttons in MemberInfo, /ban and /unban slash commands, and the unban button in the room settings seem to all use the proper APIs.
|
test
|
switch from sending membership event for kick to using the kick api i created a pr to synapse to align synapse s implementation of membership events with the way that the spec says they should work the problem with that is it will reintroduce because right now when you click the kick button in memberinfo or use the kick slash command element sends a membership event for that user rather than using the dedicated api so element needs to switch over to using the kick api so that behavior does not come back once the synapse pr is merged change slash command to use dedicated kick api change memberinfo to use dedicated kick api investigate if there are any other areas where kicking or banning happen via membership events the ban and unban buttons in memberinfo ban and unban slash commands and the unban button in the room settings seem to all use the proper apis
| 1
|
285,643
| 31,155,011,613
|
IssuesEvent
|
2023-08-16 12:36:51
|
Trinadh465/linux-4.1.15_CVE-2018-5873
|
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2018-5873
|
opened
|
CVE-2016-9178 (Medium) detected in linuxlinux-4.1.52
|
Mend: dependency security vulnerability
|
## CVE-2016-9178 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2018-5873/commit/32145daf0c96b012284199f23418243e0168269f">32145daf0c96b012284199f23418243e0168269f</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The __get_user_asm_ex macro in arch/x86/include/asm/uaccess.h in the Linux kernel before 4.7.5 does not initialize a certain integer variable, which allows local users to obtain sensitive information from kernel stack memory by triggering failure of a get_user_ex call.
<p>Publish Date: 2016-11-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-9178>CVE-2016-9178</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-9178">https://nvd.nist.gov/vuln/detail/CVE-2016-9178</a></p>
<p>Release Date: 2016-11-28</p>
<p>Fix Resolution: 4.7.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2016-9178 (Medium) detected in linuxlinux-4.1.52 - ## CVE-2016-9178 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2018-5873/commit/32145daf0c96b012284199f23418243e0168269f">32145daf0c96b012284199f23418243e0168269f</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The __get_user_asm_ex macro in arch/x86/include/asm/uaccess.h in the Linux kernel before 4.7.5 does not initialize a certain integer variable, which allows local users to obtain sensitive information from kernel stack memory by triggering failure of a get_user_ex call.
<p>Publish Date: 2016-11-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-9178>CVE-2016-9178</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-9178">https://nvd.nist.gov/vuln/detail/CVE-2016-9178</a></p>
<p>Release Date: 2016-11-28</p>
<p>Fix Resolution: 4.7.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files vulnerability details the get user asm ex macro in arch include asm uaccess h in the linux kernel before does not initialize a certain integer variable which allows local users to obtain sensitive information from kernel stack memory by triggering failure of a get user ex call publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
195,968
| 14,788,717,705
|
IssuesEvent
|
2021-01-12 09:36:15
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: schemachange/bulkingest failed
|
C-test-failure O-roachtest O-robot branch-release-20.1 release-blocker
|
[(roachtest).schemachange/bulkingest failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2574842&tab=buildLog) on [release-20.1@e395c0c7c48a279334f0e94dfb7030a3eafa093f](https://github.com/cockroachdb/cockroach/commits/e395c0c7c48a279334f0e94dfb7030a3eafa093f):
```
| main.(*monitor).Go.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2606
| github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:57
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (2) 2 safe details enclosed
Wraps: (3) output in run_092435.047_n5_workload_run_bulkingest
Wraps: (4) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2574842-1610435721-105-n5cpu4:5 -- ./workload run bulkingest --duration 20m0s {pgurl:1-4} --a 100000000 --b 1 --c 1 --payload-bytes 4 returned
| stderr:
| bash: line 1: 4565 Illegal instruction (core dumped) bash -c "./workload run bulkingest --duration 20m0s 'postgres://root@10.128.0.99:26257?sslmode=disable' 'postgres://root@10.128.0.43:26257?sslmode=disable' 'postgres://root@10.128.0.36:26257?sslmode=disable' 'postgres://root@10.128.0.8:26257?sslmode=disable' --a 100000000 --b 1 --c 1 --payload-bytes 4"
| Error: COMMAND_PROBLEM: exit status 132
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 5. Command with error:
| | ```
| | ./workload run bulkingest --duration 20m0s {pgurl:1-4} --a 100000000 --b 1 --c 1 --payload-bytes 4
| | ```
| Wraps: (3) exit status 132
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (5) exit status 20
Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *main.withCommandDetails (5) *exec.ExitError
cluster.go:2628,schemachange.go:401,test_runner.go:749: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2616
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2624
| main.makeSchemaChangeBulkIngestTest.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/schemachange.go:401
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:749
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2672
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2586
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5652
| runtime.main
| /usr/local/go/src/runtime/proc.go:191
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withMessage (3) *withstack.withStack (4) *errutil.withMessage (5) *withstack.withStack (6) *errors.errorString
```
<details><summary>More</summary><p>
Artifacts: [/schemachange/bulkingest](https://teamcity.cockroachdb.com/viewLog.html?buildId=2574842&tab=artifacts#/schemachange/bulkingest)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aschemachange%2Fbulkingest.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: schemachange/bulkingest failed - [(roachtest).schemachange/bulkingest failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2574842&tab=buildLog) on [release-20.1@e395c0c7c48a279334f0e94dfb7030a3eafa093f](https://github.com/cockroachdb/cockroach/commits/e395c0c7c48a279334f0e94dfb7030a3eafa093f):
```
| main.(*monitor).Go.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2606
| github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:57
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (2) 2 safe details enclosed
Wraps: (3) output in run_092435.047_n5_workload_run_bulkingest
Wraps: (4) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2574842-1610435721-105-n5cpu4:5 -- ./workload run bulkingest --duration 20m0s {pgurl:1-4} --a 100000000 --b 1 --c 1 --payload-bytes 4 returned
| stderr:
| bash: line 1: 4565 Illegal instruction (core dumped) bash -c "./workload run bulkingest --duration 20m0s 'postgres://root@10.128.0.99:26257?sslmode=disable' 'postgres://root@10.128.0.43:26257?sslmode=disable' 'postgres://root@10.128.0.36:26257?sslmode=disable' 'postgres://root@10.128.0.8:26257?sslmode=disable' --a 100000000 --b 1 --c 1 --payload-bytes 4"
| Error: COMMAND_PROBLEM: exit status 132
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 5. Command with error:
| | ```
| | ./workload run bulkingest --duration 20m0s {pgurl:1-4} --a 100000000 --b 1 --c 1 --payload-bytes 4
| | ```
| Wraps: (3) exit status 132
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (5) exit status 20
Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *main.withCommandDetails (5) *exec.ExitError
cluster.go:2628,schemachange.go:401,test_runner.go:749: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2616
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2624
| main.makeSchemaChangeBulkIngestTest.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/schemachange.go:401
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:749
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2672
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2586
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5652
| runtime.main
| /usr/local/go/src/runtime/proc.go:191
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withMessage (3) *withstack.withStack (4) *errutil.withMessage (5) *withstack.withStack (6) *errors.errorString
```
<details><summary>More</summary><p>
Artifacts: [/schemachange/bulkingest](https://teamcity.cockroachdb.com/viewLog.html?buildId=2574842&tab=artifacts#/schemachange/bulkingest)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aschemachange%2Fbulkingest.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
test
|
roachtest schemachange bulkingest failed on main monitor go home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go github com cockroachdb cockroach vendor golang org x sync errgroup group go home agent work go src github com cockroachdb cockroach vendor golang org x sync errgroup errgroup go runtime goexit usr local go src runtime asm s wraps safe details enclosed wraps output in run workload run bulkingest wraps home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload run bulkingest duration pgurl a b c payload bytes returned stderr bash line illegal instruction core dumped bash c workload run bulkingest duration postgres root sslmode disable postgres root sslmode disable postgres root sslmode disable postgres root sslmode disable a b c payload bytes error command problem exit status command problem wraps node command with error workload run bulkingest duration pgurl a b c payload bytes wraps exit status error types errors cmd hintdetail withdetail exec exiterror stdout wraps exit status error types withstack withstack safedetails withsafedetails errutil withmessage main withcommanddetails exec exiterror cluster go schemachange go test runner go monitor failure monitor task failed t fatal was called attached stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main makeschemachangebulkingesttest home agent work go src github com cockroachdb cockroach pkg cmd roachtest schemachange go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace main init home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps t fatal was called error types withstack withstack errutil withmessage withstack withstack errutil withmessage withstack withstack errors errorstring more artifacts powered by
| 1
|
199,420
| 22,693,328,202
|
IssuesEvent
|
2022-07-05 01:13:44
|
Kijacode/dotfiles
|
https://api.github.com/repos/Kijacode/dotfiles
|
opened
|
CVE-2019-17571 (High) detected in org.apache.log4j_1.2.15.v201012070815.jar
|
security vulnerability
|
## CVE-2019-17571 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>org.apache.log4j_1.2.15.v201012070815.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="http://eclipse.mirror.garr.it/mirrors/eclipse/webtools/downloads/drops/R3.16.0/S-3.16.0.M3-20191121172023/wtp-repo-S-3.16.0.M3-20191121172023.zip">http://eclipse.mirror.garr.it/mirrors/eclipse/webtools/downloads/drops/R3.16.0/S-3.16.0.M3-20191121172023/wtp-repo-S-3.16.0.M3-20191121172023.zip</a></p>
<p>Path to vulnerable library: /.vscode/extensions/redhat.java-0.80.0/server/plugins/org.apache.log4j_1.2.15.v201012070815.jar</p>
<p>
Dependency Hierarchy:
- :x: **org.apache.log4j_1.2.15.v201012070815.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Kijacode/dotfiles/commit/a1134e0d69c95f5b6b6bbfb161bbf263699292f0">a1134e0d69c95f5b6b6bbfb161bbf263699292f0</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data. This affects Log4j versions up to 1.2 up to 1.2.17.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17571>CVE-2019-17571</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/eea03d504b36e8f870e8321d908e1def1addda16adda04327fe7c125%40%3Cdev.logging.apache.org%3E">https://lists.apache.org/thread.html/eea03d504b36e8f870e8321d908e1def1addda16adda04327fe7c125%40%3Cdev.logging.apache.org%3E</a></p>
<p>Release Date: 2019-12-20</p>
<p>Fix Resolution: log4j-manual - 1.2.17-16;log4j-javadoc - 1.2.17-16;log4j - 1.2.17-16,1.2.17-16</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-17571 (High) detected in org.apache.log4j_1.2.15.v201012070815.jar - ## CVE-2019-17571 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>org.apache.log4j_1.2.15.v201012070815.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="http://eclipse.mirror.garr.it/mirrors/eclipse/webtools/downloads/drops/R3.16.0/S-3.16.0.M3-20191121172023/wtp-repo-S-3.16.0.M3-20191121172023.zip">http://eclipse.mirror.garr.it/mirrors/eclipse/webtools/downloads/drops/R3.16.0/S-3.16.0.M3-20191121172023/wtp-repo-S-3.16.0.M3-20191121172023.zip</a></p>
<p>Path to vulnerable library: /.vscode/extensions/redhat.java-0.80.0/server/plugins/org.apache.log4j_1.2.15.v201012070815.jar</p>
<p>
Dependency Hierarchy:
- :x: **org.apache.log4j_1.2.15.v201012070815.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Kijacode/dotfiles/commit/a1134e0d69c95f5b6b6bbfb161bbf263699292f0">a1134e0d69c95f5b6b6bbfb161bbf263699292f0</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data. This affects Log4j versions up to 1.2 up to 1.2.17.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17571>CVE-2019-17571</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/eea03d504b36e8f870e8321d908e1def1addda16adda04327fe7c125%40%3Cdev.logging.apache.org%3E">https://lists.apache.org/thread.html/eea03d504b36e8f870e8321d908e1def1addda16adda04327fe7c125%40%3Cdev.logging.apache.org%3E</a></p>
<p>Release Date: 2019-12-20</p>
<p>Fix Resolution: log4j-manual - 1.2.17-16;log4j-javadoc - 1.2.17-16;log4j - 1.2.17-16,1.2.17-16</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in org apache jar cve high severity vulnerability vulnerable library org apache jar library home page a href path to vulnerable library vscode extensions redhat java server plugins org apache jar dependency hierarchy x org apache jar vulnerable library found in head commit a href found in base branch main vulnerability details included in is a socketserver class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data this affects versions up to up to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution manual javadoc step up your open source security game with mend
| 0
|