Dataset schema:

| column | dtype | range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 |
| repo | stringlengths | 7 – 112 |
| repo_url | stringlengths | 36 – 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 – 744 |
| labels | stringlengths | 4 – 574 |
| body | stringlengths | 9 – 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 – 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 188k |
| binary_label | int64 | 0 – 1 |
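The two-class `label` column and the int64 `binary_label` column appear to encode the same thing: in every sample record below, `process` rows carry `binary_label` 1 and `non_process` rows carry 0. A minimal sketch of that mapping follows; the mapping is an assumption read off the sample records, not stated anywhere in the dump.

```python
# Inferred from the sample records: binary_label looks like a 0/1 encoding
# of the two-class `label` column. This mapping is an assumption, not
# documented in the dump itself.
LABEL_TO_BINARY = {"process": 1, "non_process": 0}

def encode_label(label: str) -> int:
    """Map the string label to the binary_label value seen in the records."""
    return LABEL_TO_BINARY[label]
```

For example, the webcompat record below is `non_process` with `binary_label` 0, and the Alfresco record is `process` with `binary_label` 1, consistent with this mapping.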
Unnamed: 0: 290,675
id: 8,902,535,469
type: IssuesEvent
created_at: 2019-01-17 07:52:49
repo: webcompat/web-bugs
repo_url: https://api.github.com/repos/webcompat/web-bugs
action: closed
title: simulmedia.bamboohr.com - site is not usable
labels: browser-firefox os-linux priority-normal type-tracking-protection-basic
body:
<!-- @browser: Firefox 66.0 --> <!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://simulmedia.bamboohr.com/jobs/view.php?id=55 **Browser / Version**: Firefox 66.0 **Operating System**: Linux **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: I cannot apply to a job on this site when tracking blocking is enabled **Steps to Reproduce**: Filled out the application here, tried to submit, the application said that I had not uploaded a resume when I had. Disabling tracking blocking worked. [![Screenshot Description](https://webcompat.com/uploads/2019/1/48c3c0d9-4e33-4b8a-9f1c-e7de0b8e891c-thumb.jpeg)](https://webcompat.com/uploads/2019/1/48c3c0d9-4e33-4b8a-9f1c-e7de0b8e891c.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190109092644</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: true</li><li>channel: nightly</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Warning: "The resource at https://www.googletagmanager.com/gtm.js?id=GTM-ZC3S was blocked because content blocking is enabled." {file: "https://simulmedia.bamboohr.com/jobs/view.php?id=55" line: 0}]', u'[JavaScript Warning: "Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://graph.facebook.com/?fields=share&id=https%3A%2F%2Fsimulmedia.bamboohr.com%2Fjobs%2Fview.php%3Fid%3D55%3Fsource%3Dfacebook&format=json. (Reason: CORS request did not succeed)."]', u'[JavaScript Warning: "This site appears to use a scroll-linked positioning effect. 
This may not work well with asynchronous panning; see https://developer.mozilla.org/docs/Mozilla/Performance/ScrollLinkedEffects for further details and to join the discussion on related tools and features!" {file: "https://simulmedia.bamboohr.com/jobs/view.php?id=55" line: 0}]', u'[JavaScript Warning: "The resource at https://graph.facebook.com/?fields=share&id=https%3A%2F%2Fsimulmedia.bamboohr.com%2Fjobs%2Fview.php%3Fid%3D55%3Fsource%3Dfacebook&format=json was blocked because content blocking is enabled." {file: "https://simulmedia.bamboohr.com/jobs/view.php?id=55" line: 0}]', u'[JavaScript Warning: "The resource at https://platform.linkedin.com/in.js was blocked because content blocking is enabled." {file: "https://simulmedia.bamboohr.com/jobs/view.php?id=55" line: 0}]', u'[JavaScript Warning: "The resource at https://js-agent.newrelic.com/nr-1099.min.js was blocked because content blocking is enabled." {file: "https://simulmedia.bamboohr.com/jobs/view.php?id=55" line: 0}]'] </pre> </details> Reported by @yoasif _From [webcompat.com](https://webcompat.com/) with ❀️_
index: 1.0
text_combine:
simulmedia.bamboohr.com - site is not usable - <!-- @browser: Firefox 66.0 --> <!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://simulmedia.bamboohr.com/jobs/view.php?id=55 **Browser / Version**: Firefox 66.0 **Operating System**: Linux **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: I cannot apply to a job on this site when tracking blocking is enabled **Steps to Reproduce**: Filled out the application here, tried to submit, the application said that I had not uploaded a resume when I had. Disabling tracking blocking worked. [![Screenshot Description](https://webcompat.com/uploads/2019/1/48c3c0d9-4e33-4b8a-9f1c-e7de0b8e891c-thumb.jpeg)](https://webcompat.com/uploads/2019/1/48c3c0d9-4e33-4b8a-9f1c-e7de0b8e891c.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190109092644</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: true</li><li>channel: nightly</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Warning: "The resource at https://www.googletagmanager.com/gtm.js?id=GTM-ZC3S was blocked because content blocking is enabled." {file: "https://simulmedia.bamboohr.com/jobs/view.php?id=55" line: 0}]', u'[JavaScript Warning: "Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://graph.facebook.com/?fields=share&id=https%3A%2F%2Fsimulmedia.bamboohr.com%2Fjobs%2Fview.php%3Fid%3D55%3Fsource%3Dfacebook&format=json. (Reason: CORS request did not succeed)."]', u'[JavaScript Warning: "This site appears to use a scroll-linked positioning effect. 
This may not work well with asynchronous panning; see https://developer.mozilla.org/docs/Mozilla/Performance/ScrollLinkedEffects for further details and to join the discussion on related tools and features!" {file: "https://simulmedia.bamboohr.com/jobs/view.php?id=55" line: 0}]', u'[JavaScript Warning: "The resource at https://graph.facebook.com/?fields=share&id=https%3A%2F%2Fsimulmedia.bamboohr.com%2Fjobs%2Fview.php%3Fid%3D55%3Fsource%3Dfacebook&format=json was blocked because content blocking is enabled." {file: "https://simulmedia.bamboohr.com/jobs/view.php?id=55" line: 0}]', u'[JavaScript Warning: "The resource at https://platform.linkedin.com/in.js was blocked because content blocking is enabled." {file: "https://simulmedia.bamboohr.com/jobs/view.php?id=55" line: 0}]', u'[JavaScript Warning: "The resource at https://js-agent.newrelic.com/nr-1099.min.js was blocked because content blocking is enabled." {file: "https://simulmedia.bamboohr.com/jobs/view.php?id=55" line: 0}]'] </pre> </details> Reported by @yoasif _From [webcompat.com](https://webcompat.com/) with ❀️_
label: non_process
text:
simulmedia bamboohr com site is not usable url browser version firefox operating system linux tested another browser no problem type site is not usable description i cannot apply to a job on this site when tracking blocking is enabled steps to reproduce filled out the application here tried to submit the application said that i had not uploaded a resume when i had disabling tracking blocking worked browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all true channel nightly console messages u u u u u reported by yoasif from with ❀️
binary_label: 0
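Comparing `text_combine` with `text` in the record above suggests how the cleaned column was produced: lowercase, drop HTML tags and URLs, blank out ASCII punctuation and digits, and collapse whitespace (non-ASCII content such as the ❀️ survives). The exact pipeline is not given anywhere in this dump; the following is a plausible reconstruction only.

```python
import re
import string

def clean_text(raw: str) -> str:
    """Approximate the text_combine -> text normalisation seen in the sample
    records. Inferred, not documented: lowercase, strip HTML tags and URLs,
    blank out ASCII punctuation and digits, collapse whitespace."""
    s = raw.lower()
    s = re.sub(r"<[^>]+>", " ", s)        # HTML tags, e.g. <details>, <li>
    s = re.sub(r"https?://\S+", " ", s)   # bare URLs
    table = str.maketrans({c: " " for c in string.punctuation + string.digits})
    s = s.translate(table)
    return re.sub(r"\s+", " ", s).strip()

# clean_text('**Browser / Version**: Firefox 66.0') -> 'browser version firefox',
# matching the corresponding fragment of the `text` column above.
```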
Unnamed: 0: 2,386
id: 5,187,641,472
type: IssuesEvent
created_at: 2017-01-20 17:24:42
repo: Alfresco/alfresco-ng2-components
repo_url: https://api.github.com/repos/Alfresco/alfresco-ng2-components
action: closed
title: A start form is shown as "Nameless task" which is incorrect
labels: bug comp: activiti-processList comp: activiti/form in progress
body:
**Type of issue:** (check with "[x]") ``` - [ ] New feature request - [ x] Bug - [ ] Support request ``` **Current behavior:** A start form is shown as "Nameless task" **Expected behavior:** Either hide that or replace "Nameless task" with something like "Start Form". **Steps to reproduce the issue:** Start a process with a start form and observe this on start form **Component name and version:** <activiti-start-form> in ng2-activiti-form in 1.0.0 **Browser and version:** All
index: 1.0
text_combine:
A start form is shown as "Nameless task" which is incorrect - **Type of issue:** (check with "[x]") ``` - [ ] New feature request - [ x] Bug - [ ] Support request ``` **Current behavior:** A start form is shown as "Nameless task" **Expected behavior:** Either hide that or replace "Nameless task" with something like "Start Form". **Steps to reproduce the issue:** Start a process with a start form and observe this on start form **Component name and version:** <activiti-start-form> in ng2-activiti-form in 1.0.0 **Browser and version:** All
label: process
text:
a start form is shown as nameless task which is incorrect type of issue check with new feature request bug support request current behavior a start form is shown as nameless task expected behavior either hide that or replace nameless task with something like start form steps to reproduce the issue start a process with a start form and observe this on start form component name and version in activiti form in browser and version all
binary_label: 1
Unnamed: 0: 62,625
id: 12,228,123,782
type: IssuesEvent
created_at: 2020-05-03 18:03:40
repo: bstkr/interactive-movie
repo_url: https://api.github.com/repos/bstkr/interactive-movie
action: closed
title: Add Style guide for the web page
labels: non-code style
body:
Design visual prototypes for the different UI elements of the page like: - Timeline - Credits, Impressum, Homepage - Menu buttons - Decision page - etc
index: 1.0
text_combine:
Add Style guide for the web page - Design visual prototypes for the different UI elements of the page like: - Timeline - Credits, Impressum, Homepage - Menu buttons - Decision page - etc
label: non_process
text:
add style guide for the web page design visual prototypes for the different ui elements of the page like timeline credits impressum homepage menu buttons decision page etc
binary_label: 0
Unnamed: 0: 287,879
id: 24,870,545,558
type: IssuesEvent
created_at: 2022-10-27 14:56:20
repo: LE2HE/coding
repo_url: https://api.github.com/repos/LE2HE/coding
action: closed
title: 거짓말
labels: coding test
body:
BACKJOON ========== 1043번 --------- > 1번째 쀄에 μ‚¬λžŒ 수 Nκ³Ό νŒŒν‹° 수 M이 μ£Όμ–΄μ§„λ‹€. > 2번째 쀄뢀터 진싀을 μ•„λŠ” μ‚¬λžŒμ˜ μˆ˜μ™€ λ²ˆν˜Έκ°€ μ£Όμ–΄μ§„λ‹€. > 진싀을 μ•„λŠ” μ‚¬λžŒμ˜ μˆ˜κ°€ λ¨Όμ € μ£Όμ–΄μ§€κ³ , κ·Έ 개수만큼 μ‚¬λžŒλ“€μ˜ λ²ˆν˜Έκ°€ μ£Όμ–΄μ§„λ‹€. > 3번째 μ€„μ—μ„œ 각 νŒŒν‹°λ§ˆλ‹€ μ˜€λŠ” μ‚¬λžŒμ˜ μˆ˜μ™€ λ²ˆν˜Έκ°€ 같은 λ°©μ‹μœΌλ‘œ μ£Όμ–΄μ§„λ‹€. > 거짓말 쟁이둜 μ•Œλ €μ§€μ§€ μ•ŠμœΌλ©΄μ„œ, κ³Όμž₯된 이야기λ₯Ό ν•  수 μžˆλŠ” νŒŒν‹° 개수의 μ΅œλŒ“κ°’μ„ κ΅¬ν•˜λŠ” ν”„λ‘œκ·Έλž¨μ„ μž‘μ„±ν•˜μ‹œμ˜€. > > 링크 : [1043](https://www.acmicpc.net/problem/1043)
index: 1.0
text_combine:
거짓말 - BACKJOON ========== 1043번 --------- > 1번째 쀄에 μ‚¬λžŒ 수 Nκ³Ό νŒŒν‹° 수 M이 μ£Όμ–΄μ§„λ‹€. > 2번째 쀄뢀터 진싀을 μ•„λŠ” μ‚¬λžŒμ˜ μˆ˜μ™€ λ²ˆν˜Έκ°€ μ£Όμ–΄μ§„λ‹€. > 진싀을 μ•„λŠ” μ‚¬λžŒμ˜ μˆ˜κ°€ λ¨Όμ € μ£Όμ–΄μ§€κ³ , κ·Έ 개수만큼 μ‚¬λžŒλ“€μ˜ λ²ˆν˜Έκ°€ μ£Όμ–΄μ§„λ‹€. > 3번째 μ€„μ—μ„œ 각 νŒŒν‹°λ§ˆλ‹€ μ˜€λŠ” μ‚¬λžŒμ˜ μˆ˜μ™€ λ²ˆν˜Έκ°€ 같은 λ°©μ‹μœΌλ‘œ μ£Όμ–΄μ§„λ‹€. > 거짓말 쟁이둜 μ•Œλ €μ§€μ§€ μ•ŠμœΌλ©΄μ„œ, κ³Όμž₯된 이야기λ₯Ό ν•  수 μžˆλŠ” νŒŒν‹° 개수의 μ΅œλŒ“κ°’μ„ κ΅¬ν•˜λŠ” ν”„λ‘œκ·Έλž¨μ„ μž‘μ„±ν•˜μ‹œμ˜€. > > 링크 : [1043](https://www.acmicpc.net/problem/1043)
label: non_process
text:
거짓말 backjoon 쀄에 μ‚¬λžŒ 수 nκ³Ό νŒŒν‹° 수 m이 μ£Όμ–΄μ§„λ‹€ 쀄뢀터 진싀을 μ•„λŠ” μ‚¬λžŒμ˜ μˆ˜μ™€ λ²ˆν˜Έκ°€ μ£Όμ–΄μ§„λ‹€ 진싀을 μ•„λŠ” μ‚¬λžŒμ˜ μˆ˜κ°€ λ¨Όμ € μ£Όμ–΄μ§€κ³  κ·Έ 개수만큼 μ‚¬λžŒλ“€μ˜ λ²ˆν˜Έκ°€ μ£Όμ–΄μ§„λ‹€ μ€„μ—μ„œ 각 νŒŒν‹°λ§ˆλ‹€ μ˜€λŠ” μ‚¬λžŒμ˜ μˆ˜μ™€ λ²ˆν˜Έκ°€ 같은 λ°©μ‹μœΌλ‘œ μ£Όμ–΄μ§„λ‹€ 거짓말 쟁이둜 μ•Œλ €μ§€μ§€ μ•ŠμœΌλ©΄μ„œ κ³Όμž₯된 이야기λ₯Ό ν•  수 μžˆλŠ” νŒŒν‹° 개수의 μ΅œλŒ“κ°’μ„ κ΅¬ν•˜λŠ” ν”„λ‘œκ·Έλž¨μ„ μž‘μ„±ν•˜μ‹œμ˜€ 링크
binary_label: 0
Unnamed: 0: 10,125
id: 13,044,162,313
type: IssuesEvent
created_at: 2020-07-29 03:47:31
repo: tikv/tikv
repo_url: https://api.github.com/repos/tikv/tikv
action: closed
title: UCP: Migrate scalar function `TimeStringTimeDiff` from TiDB
labels: challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
body:
## Description Port the scalar function `TimeStringTimeDiff` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @breeswish ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
index: 2.0
text_combine:
UCP: Migrate scalar function `TimeStringTimeDiff` from TiDB - ## Description Port the scalar function `TimeStringTimeDiff` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @breeswish ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
label: process
text:
ucp migrate scalar function timestringtimediff from tidb description port the scalar function timestringtimediff from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
binary_label: 1
Unnamed: 0: 19,164
id: 25,262,670,375
type: IssuesEvent
created_at: 2022-11-16 00:35:54
repo: googleapis/synthtool
repo_url: https://api.github.com/repos/googleapis/synthtool
action: opened
title: [Python] nox does not install dependencies with hash
labels: priority: p2 type: process
body:
nox installs are currently done with `session.install`. We should determine if this is safe, and if not, we should use hashed installs.
index: 1.0
text_combine:
[Python] nox does not install dependencies with hash - nox installs are currently done with `session.install`. We should determine if this is safe, and if not, we should use hashed installs.
label: process
text:
nox does not install dependencies with hash nox installs are currently done with session install we should determine if this is safe and if not we should use hashed installs
binary_label: 1
Unnamed: 0: 193
id: 2,597,140,193
type: IssuesEvent
created_at: 2015-02-21 03:47:56
repo: opattison/olivermakes
repo_url: https://api.github.com/repos/opattison/olivermakes
action: opened
title: Write documentation for images
labels: content maintenance process
body:
This is one of the more difficult parts of creating content for the site, so I want to have the process explained.
index: 1.0
text_combine:
Write documentation for images - This is one of the more difficult parts of creating content for the site, so I want to have the process explained.
label: process
text:
write documentation for images this is one of the more difficult parts of creating content for the site so i want to have the process explained
binary_label: 1
Unnamed: 0: 224,630
id: 7,471,944,554
type: IssuesEvent
created_at: 2018-04-03 10:56:56
repo: ballerina-lang/composer
repo_url: https://api.github.com/repos/ballerina-lang/composer
action: closed
title: Transformer Automation Test
labels: Imported Priority/High Transform Stmt component/Composer
body:
- [x] Direct mapping creation - [ ] Direct mapping removal - [ ] Function mapping creation - [ ] Function mapping removal
index: 1.0
text_combine:
Transformer Automation Test - - [x] Direct mapping creation - [ ] Direct mapping removal - [ ] Function mapping creation - [ ] Function mapping removal
label: non_process
text:
transformer automation test direct mapping creation direct mapping removal function mapping creation function mapping removal
binary_label: 0
Unnamed: 0: 78,254
id: 15,569,949,786
type: IssuesEvent
created_at: 2021-03-17 01:22:15
repo: jrrk/riscv-linux
repo_url: https://api.github.com/repos/jrrk/riscv-linux
action: opened
title: CVE-2020-25212 (High) detected in linux-amlogicv4.18, aspeedaspeed-4.19-devicetree-no-fsi
labels: security vulnerability
body:
## CVE-2020-25212 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-amlogicv4.18</b>, <b>aspeedaspeed-4.19-devicetree-no-fsi</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A TOCTOU mismatch in the NFS client code in the Linux kernel before 5.8.3 could be used by local attackers to corrupt memory or possibly have unspecified other impact because a size check is in fs/nfs/nfs4proc.c instead of fs/nfs/nfs4xdr.c, aka CID-b4487b935452. <p>Publish Date: 2020-09-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25212>CVE-2020-25212</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25212">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25212</a></p> <p>Release Date: 2020-09-09</p> <p>Fix Resolution: 5.8.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2020-25212 (High) detected in linux-amlogicv4.18, aspeedaspeed-4.19-devicetree-no-fsi - ## CVE-2020-25212 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-amlogicv4.18</b>, <b>aspeedaspeed-4.19-devicetree-no-fsi</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A TOCTOU mismatch in the NFS client code in the Linux kernel before 5.8.3 could be used by local attackers to corrupt memory or possibly have unspecified other impact because a size check is in fs/nfs/nfs4proc.c instead of fs/nfs/nfs4xdr.c, aka CID-b4487b935452. <p>Publish Date: 2020-09-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25212>CVE-2020-25212</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25212">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25212</a></p> <p>Release Date: 2020-09-09</p> <p>Fix Resolution: 5.8.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text:
cve high detected in linux aspeedaspeed devicetree no fsi cve high severity vulnerability vulnerable libraries linux aspeedaspeed devicetree no fsi vulnerability details a toctou mismatch in the nfs client code in the linux kernel before could be used by local attackers to corrupt memory or possibly have unspecified other impact because a size check is in fs nfs c instead of fs nfs c aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
binary_label: 0
Unnamed: 0: 21,977
id: 30,468,611,276
type: IssuesEvent
created_at: 2023-07-17 12:13:38
repo: metabase/metabase
repo_url: https://api.github.com/repos/metabase/metabase
action: closed
title: [MLv2] Add function for display name for left-hand-side of a join
labels: .Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
body:
We need a function to get the display names circled here: ### In first stage of the query <img width="929" alt="242121898-ff6016dc-7c5b-4c56-84a2-15bf8f68f0fb" src="https://github.com/metabase/metabase/assets/1455846/22593b73-d6aa-46d9-b349-6ed1e841e27e"> ### In second stage of the query <img width="974" alt="Screenshot 2023-06-29 121708" src="https://github.com/metabase/metabase/assets/1455846/d5314d68-62d5-4d77-b0ee-27cc0ee2073d"> ### With a source Saved Question named `Orders` <img width="973" alt="Screenshot 2023-06-29 121957" src="https://github.com/metabase/metabase/assets/1455846/0581c337-3816-43f4-8c18-6c54ac66df07"> ### What we need It appears that the rules are as follows: 1. If this is the first join in the first stage of a query, and the query uses a `:source-table`, then use the display name for the source Table 2. Otherwise use `Previous results`. This function needs to be usable while we are in the process of constructing a join in the context of a given stage, but also needs to work for rendering existing joins. The function should be something like this ```ts export function joinLHSDisplayName(query: Query, stageIndex: number, join?: Join): string { ... } ``` where `join` is passed for existing joins, but not for ones we are currently building. New joins get appended after any existing ones, so it would be safe to assume that if there are any other joins in the current stage, this **will not** be the first join in the stage. If a join is passed, we need to check whether it's the first join in the first stage of a source-table query or not.
index: 1.0
text_combine:
[MLv2] Add function for display name for left-hand-side of a join - We need a function to get the display names circled here: ### In first stage of the query <img width="929" alt="242121898-ff6016dc-7c5b-4c56-84a2-15bf8f68f0fb" src="https://github.com/metabase/metabase/assets/1455846/22593b73-d6aa-46d9-b349-6ed1e841e27e"> ### In second stage of the query <img width="974" alt="Screenshot 2023-06-29 121708" src="https://github.com/metabase/metabase/assets/1455846/d5314d68-62d5-4d77-b0ee-27cc0ee2073d"> ### With a source Saved Question named `Orders` <img width="973" alt="Screenshot 2023-06-29 121957" src="https://github.com/metabase/metabase/assets/1455846/0581c337-3816-43f4-8c18-6c54ac66df07"> ### What we need It appears that the rules are as follows: 1. If this is the first join in the first stage of a query, and the query uses a `:source-table`, then use the display name for the source Table 2. Otherwise use `Previous results`. This function needs to be usable while we are in the process of constructing a join in the context of a given stage, but also needs to work for rendering existing joins. The function should be something like this ```ts export function joinLHSDisplayName(query: Query, stageIndex: number, join?: Join): string { ... } ``` where `join` is passed for existing joins, but not for ones we are currently building. New joins get appended after any existing ones, so it would be safe to assume that if there are any other joins in the current stage, this **will not** be the first join in the stage. If a join is passed, we need to check whether it's the first join in the first stage of a source-table query or not.
label: process
text:
add function for display name for left hand side of a join we need a function to get the display names circled here in first stage of the query img width alt src in second stage of the query img width alt screenshot src with a source saved question named orders img width alt screenshot src what we need it appears that the rules are as follows if this is the first join in the first stage of a query and the query uses a source table then use the display name for the source table otherwise use previous results this function needs to be usable while we are in the process of constructing a join in the context of a given stage but also needs to work for rendering existing joins the function should be something like this ts export function joinlhsdisplayname query query stageindex number join join string where join is passed for existing joins but not for ones we are currently building new joins get appended after any existing ones so it would be safe to assume that if there are any other joins in the current stage this will not be the first join in the stage if a join is passed we need to check whether it s the first join in the first stage of a source table query or not
binary_label: 1
Unnamed: 0: 2,869
id: 5,828,732,449
type: IssuesEvent
created_at: 2017-05-08 12:55:50
repo: dotnet/corefx
repo_url: https://api.github.com/repos/dotnet/corefx
action: opened
title: Several Process tests on Windows failing with "Couldn't connect to remote machine"
labels: area-System.Diagnostics.Process test bug
body:
https://ci.dot.net/job/dotnet_corefx/job/master/job/windows_nt_release_prtest/7565/consoleText ``` System.Diagnostics.Tests.ProcessTests.TestProcessOnRemoteMachineWindows [FAIL] System.InvalidOperationException : Couldn't connect to remote machine. ---- System.InvalidOperationException : Process performance counter is disabled, so the requested operation cannot be performed. Stack Trace: D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(521,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(String machineName, Boolean isRemoteMachine) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(265,0): at System.Diagnostics.NtProcessManager.GetProcessIds(String machineName, Boolean isRemoteMachine) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(33,0): at System.Diagnostics.ProcessManager.IsProcessRunning(Int32 processId, String machineName) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\Process.cs(1055,0): at System.Diagnostics.Process.GetProcessById(Int32 processId, String machineName) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\tests\ProcessTests.cs(917,0): at System.Diagnostics.Tests.ProcessTests.<GetTestProcess>d__76.MoveNext() D:\j\workspace\windows_nt_re---37265eab\src\System.Linq\src\System\Linq\Select.cs(133,0): at System.Linq.Enumerable.SelectEnumerableIterator`2.MoveNext() ----- Inner Stack Trace ----- D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(550,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(PerformanceCounterLib library) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(511,0): at 
System.Diagnostics.NtProcessManager.GetProcessInfos(String machineName, Boolean isRemoteMachine) System.Diagnostics.Tests.ProcessTests.GetProcessesByName_RemoteMachineNameWindows_ReturnsExpected(machineName: \"8363c42cc7434644b263584fc831a4bb\") [FAIL] System.InvalidOperationException : Couldn't connect to remote machine. ---- System.InvalidOperationException : Process performance counter is disabled, so the requested operation cannot be performed. Stack Trace: D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(521,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(String machineName, Boolean isRemoteMachine) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\Process.cs(1107,0): at System.Diagnostics.Process.GetProcesses(String machineName) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\Process.Windows.cs(27,0): at System.Diagnostics.Process.GetProcessesByName(String processName, String machineName) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\tests\ProcessTests.cs(870,0): at System.Diagnostics.Tests.ProcessTests.GetProcessesByName_ProcessNameMachineName_ReturnsExpected(String machineName) ----- Inner Stack Trace ----- D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(550,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(PerformanceCounterLib library) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(511,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(String machineName, Boolean isRemoteMachine) System.Diagnostics.Tests.ProcessTests.GetProcessesByName_RemoteMachineNameWindows_ReturnsExpected(machineName: \"\\\\e41070783511438199c628aa28bd8d16\") [FAIL] System.InvalidOperationException : Couldn't connect to remote 
machine. ---- System.InvalidOperationException : Process performance counter is disabled, so the requested operation cannot be performed. Stack Trace: D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(521,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(String machineName, Boolean isRemoteMachine) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\Process.cs(1107,0): at System.Diagnostics.Process.GetProcesses(String machineName) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\Process.Windows.cs(27,0): at System.Diagnostics.Process.GetProcessesByName(String processName, String machineName) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\tests\ProcessTests.cs(870,0): at System.Diagnostics.Tests.ProcessTests.GetProcessesByName_ProcessNameMachineName_ReturnsExpected(String machineName) ----- Inner Stack Trace ----- D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(550,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(PerformanceCounterLib library) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(511,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(String machineName, Boolean isRemoteMachine) ``` cc: @danmosemsft, @hughbe
index: 1.0
text_combine:
Several Process tests on Windows failing with "Couldn't connect to remote machine" - https://ci.dot.net/job/dotnet_corefx/job/master/job/windows_nt_release_prtest/7565/consoleText ``` System.Diagnostics.Tests.ProcessTests.TestProcessOnRemoteMachineWindows [FAIL] System.InvalidOperationException : Couldn't connect to remote machine. ---- System.InvalidOperationException : Process performance counter is disabled, so the requested operation cannot be performed. Stack Trace: D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(521,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(String machineName, Boolean isRemoteMachine) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(265,0): at System.Diagnostics.NtProcessManager.GetProcessIds(String machineName, Boolean isRemoteMachine) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(33,0): at System.Diagnostics.ProcessManager.IsProcessRunning(Int32 processId, String machineName) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\Process.cs(1055,0): at System.Diagnostics.Process.GetProcessById(Int32 processId, String machineName) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\tests\ProcessTests.cs(917,0): at System.Diagnostics.Tests.ProcessTests.<GetTestProcess>d__76.MoveNext() D:\j\workspace\windows_nt_re---37265eab\src\System.Linq\src\System\Linq\Select.cs(133,0): at System.Linq.Enumerable.SelectEnumerableIterator`2.MoveNext() ----- Inner Stack Trace ----- D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(550,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(PerformanceCounterLib library) 
D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(511,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(String machineName, Boolean isRemoteMachine) System.Diagnostics.Tests.ProcessTests.GetProcessesByName_RemoteMachineNameWindows_ReturnsExpected(machineName: \"8363c42cc7434644b263584fc831a4bb\") [FAIL] System.InvalidOperationException : Couldn't connect to remote machine. ---- System.InvalidOperationException : Process performance counter is disabled, so the requested operation cannot be performed. Stack Trace: D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(521,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(String machineName, Boolean isRemoteMachine) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\Process.cs(1107,0): at System.Diagnostics.Process.GetProcesses(String machineName) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\Process.Windows.cs(27,0): at System.Diagnostics.Process.GetProcessesByName(String processName, String machineName) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\tests\ProcessTests.cs(870,0): at System.Diagnostics.Tests.ProcessTests.GetProcessesByName_ProcessNameMachineName_ReturnsExpected(String machineName) ----- Inner Stack Trace ----- D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(550,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(PerformanceCounterLib library) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(511,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(String machineName, Boolean isRemoteMachine) 
System.Diagnostics.Tests.ProcessTests.GetProcessesByName_RemoteMachineNameWindows_ReturnsExpected(machineName: \"\\\\e41070783511438199c628aa28bd8d16\") [FAIL] System.InvalidOperationException : Couldn't connect to remote machine. ---- System.InvalidOperationException : Process performance counter is disabled, so the requested operation cannot be performed. Stack Trace: D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(521,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(String machineName, Boolean isRemoteMachine) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\Process.cs(1107,0): at System.Diagnostics.Process.GetProcesses(String machineName) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\Process.Windows.cs(27,0): at System.Diagnostics.Process.GetProcessesByName(String processName, String machineName) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\tests\ProcessTests.cs(870,0): at System.Diagnostics.Tests.ProcessTests.GetProcessesByName_ProcessNameMachineName_ReturnsExpected(String machineName) ----- Inner Stack Trace ----- D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(550,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(PerformanceCounterLib library) D:\j\workspace\windows_nt_re---37265eab\src\System.Diagnostics.Process\src\System\Diagnostics\ProcessManager.Windows.cs(511,0): at System.Diagnostics.NtProcessManager.GetProcessInfos(String machineName, Boolean isRemoteMachine) ``` cc: @danmosemsft, @hughbe
process
several process tests on windows failing with couldn t connect to remote machine system diagnostics tests processtests testprocessonremotemachinewindows system invalidoperationexception couldn t connect to remote machine system invalidoperationexception process performance counter is disabled so the requested operation cannot be performed stack trace d j workspace windows nt re src system diagnostics process src system diagnostics processmanager windows cs at system diagnostics ntprocessmanager getprocessinfos string machinename boolean isremotemachine d j workspace windows nt re src system diagnostics process src system diagnostics processmanager windows cs at system diagnostics ntprocessmanager getprocessids string machinename boolean isremotemachine d j workspace windows nt re src system diagnostics process src system diagnostics processmanager windows cs at system diagnostics processmanager isprocessrunning processid string machinename d j workspace windows nt re src system diagnostics process src system diagnostics process cs at system diagnostics process getprocessbyid processid string machinename d j workspace windows nt re src system diagnostics process tests processtests cs at system diagnostics tests processtests d movenext d j workspace windows nt re src system linq src system linq select cs at system linq enumerable selectenumerableiterator movenext inner stack trace d j workspace windows nt re src system diagnostics process src system diagnostics processmanager windows cs at system diagnostics ntprocessmanager getprocessinfos performancecounterlib library d j workspace windows nt re src system diagnostics process src system diagnostics processmanager windows cs at system diagnostics ntprocessmanager getprocessinfos string machinename boolean isremotemachine system diagnostics tests processtests getprocessesbyname remotemachinenamewindows returnsexpected machinename system invalidoperationexception couldn t connect to remote machine system 
invalidoperationexception process performance counter is disabled so the requested operation cannot be performed stack trace d j workspace windows nt re src system diagnostics process src system diagnostics processmanager windows cs at system diagnostics ntprocessmanager getprocessinfos string machinename boolean isremotemachine d j workspace windows nt re src system diagnostics process src system diagnostics process cs at system diagnostics process getprocesses string machinename d j workspace windows nt re src system diagnostics process src system diagnostics process windows cs at system diagnostics process getprocessesbyname string processname string machinename d j workspace windows nt re src system diagnostics process tests processtests cs at system diagnostics tests processtests getprocessesbyname processnamemachinename returnsexpected string machinename inner stack trace d j workspace windows nt re src system diagnostics process src system diagnostics processmanager windows cs at system diagnostics ntprocessmanager getprocessinfos performancecounterlib library d j workspace windows nt re src system diagnostics process src system diagnostics processmanager windows cs at system diagnostics ntprocessmanager getprocessinfos string machinename boolean isremotemachine system diagnostics tests processtests getprocessesbyname remotemachinenamewindows returnsexpected machinename system invalidoperationexception couldn t connect to remote machine system invalidoperationexception process performance counter is disabled so the requested operation cannot be performed stack trace d j workspace windows nt re src system diagnostics process src system diagnostics processmanager windows cs at system diagnostics ntprocessmanager getprocessinfos string machinename boolean isremotemachine d j workspace windows nt re src system diagnostics process src system diagnostics process cs at system diagnostics process getprocesses string machinename d j workspace windows nt re src system 
diagnostics process src system diagnostics process windows cs at system diagnostics process getprocessesbyname string processname string machinename d j workspace windows nt re src system diagnostics process tests processtests cs at system diagnostics tests processtests getprocessesbyname processnamemachinename returnsexpected string machinename inner stack trace d j workspace windows nt re src system diagnostics process src system diagnostics processmanager windows cs at system diagnostics ntprocessmanager getprocessinfos performancecounterlib library d j workspace windows nt re src system diagnostics process src system diagnostics processmanager windows cs at system diagnostics ntprocessmanager getprocessinfos string machinename boolean isremotemachine cc danmosemsft hughbe
1
8,103
11,297,073,309
IssuesEvent
2020-01-17 04:27:23
kubeflow/testing
https://api.github.com/repos/kubeflow/testing
closed
Install GitHub and other secrets for running Kubeflow's Tekton CD pipelines in release infra
area/engprod effort/3-days kind/process priority/p1
We want to setup continuous building of our docker images and updating of our kustomize manifests (#450). We have tekton pipelines for doing this. To run those pipelines we need several secrets setup https://github.com/kubeflow/kubeflow/tree/master/components/base#secrets in our release infrastructure https://github.com/kubeflow/testing/tree/master/release-infra This issue tracks setting up the release infrastructure with the required secrets.
1.0
Install GitHub and other secrets for running Kubeflow's Tekton CD pipelines in release infra - We want to setup continuous building of our docker images and updating of our kustomize manifests (#450). We have tekton pipelines for doing this. To run those pipelines we need several secrets setup https://github.com/kubeflow/kubeflow/tree/master/components/base#secrets in our release infrastructure https://github.com/kubeflow/testing/tree/master/release-infra This issue tracks setting up the release infrastructure with the required secrets.
process
install github and other secrets for running kubeflow s tekton cd pipelines in release infra we want to setup continuous building of our docker images and updating of our kustomize manifests we have tekton pipelines for doing this to run those pipelines we need several secrets setup in our release infrastructure this issue tracks setting up the release infrastructure with the required secrets
1
3,942
6,885,527,106
IssuesEvent
2017-11-21 16:21:48
promarcel/phantombot-musicplayer
https://api.github.com/repos/promarcel/phantombot-musicplayer
closed
Cannot access "Settings" will end in exception!
compatibility in process
#### Summary kann auf settings nicht zugreifen #### What is the current bug behavior? klicke auf settings und Error erscheint #### What should be the expected behavior? Settings einstellen ! #### How to reproduce? kommt immer wenn ich auf einstellen gehe #### Detailed information / Screenshots win 10 neuste v1.0 **Log files:** ``` Informationen über das Aufrufen von JIT-Debuggen anstelle dieses Dialogfelds finden Sie am Ende dieser Meldung. ************** Ausnahmetext ************** System.NullReferenceException: Der Objektverweis wurde nicht auf eine Objektinstanz festgelegt. bei PB_MusicPlayer.MainWindow.SettingsMenu_Click(Object sender, EventArgs e) in C:\Users\Marcel Deglau\Repositorys\PB-MusicPlayer\PB-MusicPlayer\MainWindow.cs:Zeile 125. bei System.Windows.Forms.ToolStripItem.RaiseEvent(Object key, EventArgs e) bei System.Windows.Forms.ToolStripMenuItem.OnClick(EventArgs e) bei System.Windows.Forms.ToolStripItem.HandleClick(EventArgs e) bei System.Windows.Forms.ToolStripItem.HandleMouseUp(MouseEventArgs e) bei System.Windows.Forms.ToolStrip.OnMouseUp(MouseEventArgs mea) bei System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) bei System.Windows.Forms.Control.WndProc(Message& m) bei System.Windows.Forms.ToolStrip.WndProc(Message& m) bei System.Windows.Forms.MenuStrip.WndProc(Message& m) bei System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) ************** Geladene Assemblys ************** mscorlib Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2600.0 built by: NET471REL1LAST. CodeBase: file:///C:/Windows/Microsoft.NET/Framework64/v4.0.30319/mscorlib.dll. ---------------------------------------- PB-MusicPlayer Assembly-Version: 1.0.0.0. Win32-Version: 1.0. CodeBase: file:///C:/Users/Jens/Downloads/phantombot-musicplayer-version1.0-x64/PB-MusicPlayer.exe. ---------------------------------------- System.Windows.Forms Assembly-Version: 4.0.0.0. 
Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Windows.Forms/v4.0_4.0.0.0__b77a5c561934e089/System.Windows.Forms.dll. ---------------------------------------- System Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System/v4.0_4.0.0.0__b77a5c561934e089/System.dll. ---------------------------------------- System.Drawing Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Drawing/v4.0_4.0.0.0__b03f5f7f11d50a3a/System.Drawing.dll. ---------------------------------------- CefSharp.Core Assembly-Version: 57.0.0.0. Win32-Version: . CodeBase: file:///C:/Users/Jens/Downloads/phantombot-musicplayer-version1.0-x64/CefSharp.Core.DLL. ---------------------------------------- System.Configuration Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Configuration/v4.0_4.0.0.0__b03f5f7f11d50a3a/System.Configuration.dll. ---------------------------------------- System.Core Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2600.0 built by: NET471REL1LAST. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Core/v4.0_4.0.0.0__b77a5c561934e089/System.Core.dll. ---------------------------------------- System.Xml Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Xml/v4.0_4.0.0.0__b77a5c561934e089/System.Xml.dll. ---------------------------------------- CefSharp Assembly-Version: 57.0.0.0. Win32-Version: 57.0.0.0. CodeBase: file:///C:/Users/Jens/Downloads/phantombot-musicplayer-version1.0-x64/CefSharp.DLL. ---------------------------------------- CefSharp.WinForms Assembly-Version: 57.0.0.0. Win32-Version: 57.0.0.0. 
CodeBase: file:///C:/Users/Jens/Downloads/phantombot-musicplayer-version1.0-x64/CefSharp.WinForms.DLL. ---------------------------------------- System.ServiceModel Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.ServiceModel/v4.0_4.0.0.0__b77a5c561934e089/System.ServiceModel.dll. ---------------------------------------- mscorlib.resources Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/mscorlib.resources/v4.0_4.0.0.0_de_b77a5c561934e089/mscorlib.resources.dll. ---------------------------------------- System.resources Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.resources/v4.0_4.0.0.0_de_b77a5c561934e089/System.resources.dll. ---------------------------------------- System.Windows.Forms.resources Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Windows.Forms.resources/v4.0_4.0.0.0_de_b77a5c561934e089/System.Windows.Forms.resources.dll. ---------------------------------------- ************** JIT-Debuggen ************** Um das JIT-Debuggen (Just-In-Time) zu aktivieren, muss in der Konfigurationsdatei der Anwendung oder des Computers (machine.config) der jitDebugging-Wert im Abschnitt system.windows.forms festgelegt werden. Die Anwendung muss mit aktiviertem Debuggen kompiliert werden. Zum Beispiel: <configuration> <system.windows.forms jitDebugging="true" /> </configuration> Wenn das JIT-Debuggen aktiviert ist, werden alle nicht behandelten Ausnahmen an den JIT-Debugger gesendet, der auf dem Computer registriert ist, und nicht in diesem Dialogfeld behandelt. ```
1.0
Cannot access "Settings" will end in exception! - #### Summary kann auf settings nicht zugreifen #### What is the current bug behavior? klicke auf settings und Error erscheint #### What should be the expected behavior? Settings einstellen ! #### How to reproduce? kommt immer wenn ich auf einstellen gehe #### Detailed information / Screenshots win 10 neuste v1.0 **Log files:** ``` Informationen über das Aufrufen von JIT-Debuggen anstelle dieses Dialogfelds finden Sie am Ende dieser Meldung. ************** Ausnahmetext ************** System.NullReferenceException: Der Objektverweis wurde nicht auf eine Objektinstanz festgelegt. bei PB_MusicPlayer.MainWindow.SettingsMenu_Click(Object sender, EventArgs e) in C:\Users\Marcel Deglau\Repositorys\PB-MusicPlayer\PB-MusicPlayer\MainWindow.cs:Zeile 125. bei System.Windows.Forms.ToolStripItem.RaiseEvent(Object key, EventArgs e) bei System.Windows.Forms.ToolStripMenuItem.OnClick(EventArgs e) bei System.Windows.Forms.ToolStripItem.HandleClick(EventArgs e) bei System.Windows.Forms.ToolStripItem.HandleMouseUp(MouseEventArgs e) bei System.Windows.Forms.ToolStrip.OnMouseUp(MouseEventArgs mea) bei System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) bei System.Windows.Forms.Control.WndProc(Message& m) bei System.Windows.Forms.ToolStrip.WndProc(Message& m) bei System.Windows.Forms.MenuStrip.WndProc(Message& m) bei System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) ************** Geladene Assemblys ************** mscorlib Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2600.0 built by: NET471REL1LAST. CodeBase: file:///C:/Windows/Microsoft.NET/Framework64/v4.0.30319/mscorlib.dll. ---------------------------------------- PB-MusicPlayer Assembly-Version: 1.0.0.0. Win32-Version: 1.0. CodeBase: file:///C:/Users/Jens/Downloads/phantombot-musicplayer-version1.0-x64/PB-MusicPlayer.exe. 
---------------------------------------- System.Windows.Forms Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Windows.Forms/v4.0_4.0.0.0__b77a5c561934e089/System.Windows.Forms.dll. ---------------------------------------- System Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System/v4.0_4.0.0.0__b77a5c561934e089/System.dll. ---------------------------------------- System.Drawing Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Drawing/v4.0_4.0.0.0__b03f5f7f11d50a3a/System.Drawing.dll. ---------------------------------------- CefSharp.Core Assembly-Version: 57.0.0.0. Win32-Version: . CodeBase: file:///C:/Users/Jens/Downloads/phantombot-musicplayer-version1.0-x64/CefSharp.Core.DLL. ---------------------------------------- System.Configuration Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Configuration/v4.0_4.0.0.0__b03f5f7f11d50a3a/System.Configuration.dll. ---------------------------------------- System.Core Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2600.0 built by: NET471REL1LAST. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Core/v4.0_4.0.0.0__b77a5c561934e089/System.Core.dll. ---------------------------------------- System.Xml Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Xml/v4.0_4.0.0.0__b77a5c561934e089/System.Xml.dll. ---------------------------------------- CefSharp Assembly-Version: 57.0.0.0. Win32-Version: 57.0.0.0. CodeBase: file:///C:/Users/Jens/Downloads/phantombot-musicplayer-version1.0-x64/CefSharp.DLL. 
---------------------------------------- CefSharp.WinForms Assembly-Version: 57.0.0.0. Win32-Version: 57.0.0.0. CodeBase: file:///C:/Users/Jens/Downloads/phantombot-musicplayer-version1.0-x64/CefSharp.WinForms.DLL. ---------------------------------------- System.ServiceModel Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.ServiceModel/v4.0_4.0.0.0__b77a5c561934e089/System.ServiceModel.dll. ---------------------------------------- mscorlib.resources Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/mscorlib.resources/v4.0_4.0.0.0_de_b77a5c561934e089/mscorlib.resources.dll. ---------------------------------------- System.resources Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.resources/v4.0_4.0.0.0_de_b77a5c561934e089/System.resources.dll. ---------------------------------------- System.Windows.Forms.resources Assembly-Version: 4.0.0.0. Win32-Version: 4.7.2556.0 built by: NET471REL1. CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Windows.Forms.resources/v4.0_4.0.0.0_de_b77a5c561934e089/System.Windows.Forms.resources.dll. ---------------------------------------- ************** JIT-Debuggen ************** Um das JIT-Debuggen (Just-In-Time) zu aktivieren, muss in der Konfigurationsdatei der Anwendung oder des Computers (machine.config) der jitDebugging-Wert im Abschnitt system.windows.forms festgelegt werden. Die Anwendung muss mit aktiviertem Debuggen kompiliert werden. Zum Beispiel: <configuration> <system.windows.forms jitDebugging="true" /> </configuration> Wenn das JIT-Debuggen aktiviert ist, werden alle nicht behandelten Ausnahmen an den JIT-Debugger gesendet, der auf dem Computer registriert ist, und nicht in diesem Dialogfeld behandelt. ```
process
cannot access settings will end in exception summary kann auf settings nicht zugreifen what is the current bug behavior klicke auf settings und error erscheint what should be the expected behavior settings einstellen how to reproduce kommt immer wenn ich auf einstellen gehe detailed information screenshots win neuste log files informationen über das aufrufen von jit debuggen anstelle dieses dialogfelds finden sie am ende dieser meldung ausnahmetext system nullreferenceexception der objektverweis wurde nicht auf eine objektinstanz festgelegt bei pb musicplayer mainwindow settingsmenu click object sender eventargs e in c users marcel deglau repositorys pb musicplayer pb musicplayer mainwindow cs zeile bei system windows forms toolstripitem raiseevent object key eventargs e bei system windows forms toolstripmenuitem onclick eventargs e bei system windows forms toolstripitem handleclick eventargs e bei system windows forms toolstripitem handlemouseup mouseeventargs e bei system windows forms toolstrip onmouseup mouseeventargs mea bei system windows forms control wmmouseup message m mousebuttons button clicks bei system windows forms control wndproc message m bei system windows forms toolstrip wndproc message m bei system windows forms menustrip wndproc message m bei system windows forms nativewindow callback intptr hwnd msg intptr wparam intptr lparam geladene assemblys mscorlib assembly version version built by codebase file c windows microsoft net mscorlib dll pb musicplayer assembly version version codebase file c users jens downloads phantombot musicplayer pb musicplayer exe system windows forms assembly version version built by codebase file c windows microsoft net assembly gac msil system windows forms system windows forms dll system assembly version version built by codebase file c windows microsoft net assembly gac msil system system dll system drawing assembly version version built by codebase file c windows microsoft net assembly gac msil system drawing 
system drawing dll cefsharp core assembly version version codebase file c users jens downloads phantombot musicplayer cefsharp core dll system configuration assembly version version built by codebase file c windows microsoft net assembly gac msil system configuration system configuration dll system core assembly version version built by codebase file c windows microsoft net assembly gac msil system core system core dll system xml assembly version version built by codebase file c windows microsoft net assembly gac msil system xml system xml dll cefsharp assembly version version codebase file c users jens downloads phantombot musicplayer cefsharp dll cefsharp winforms assembly version version codebase file c users jens downloads phantombot musicplayer cefsharp winforms dll system servicemodel assembly version version built by codebase file c windows microsoft net assembly gac msil system servicemodel system servicemodel dll mscorlib resources assembly version version built by codebase file c windows microsoft net assembly gac msil mscorlib resources de mscorlib resources dll system resources assembly version version built by codebase file c windows microsoft net assembly gac msil system resources de system resources dll system windows forms resources assembly version version built by codebase file c windows microsoft net assembly gac msil system windows forms resources de system windows forms resources dll jit debuggen um das jit debuggen just in time zu aktivieren muss in der konfigurationsdatei der anwendung oder des computers machine config der jitdebugging wert im abschnitt system windows forms festgelegt werden die anwendung muss mit aktiviertem debuggen kompiliert werden zum beispiel wenn das jit debuggen aktiviert ist werden alle nicht behandelten ausnahmen an den jit debugger gesendet der auf dem computer registriert ist und nicht in diesem dialogfeld behandelt
1
509,785
14,743,443,688
IssuesEvent
2021-01-07 13:55:42
rubrikinc/rubrik-sdk-for-python
https://api.github.com/repos/rubrikinc/rubrik-sdk-for-python
closed
change text in quick-start.md for authentication using token
area-polaris exp-beginner kind-bug kind-docs priority-p1
currently in the quick-start.md the following is written under the authentication section: Or by passing the node IP and API Token as follows: ``` node_ip = "192.168.0.100" api_token = "jf2jma02k3anms0" rubrik = rubrik_cdm.Connect(node_ip, api_token) ``` this is not working and it should be changed into: Or by passing the node IP and API Token as follows: ``` node_ip = "192.168.0.100" api_token = "jf2jma02k3anms0" rubrik = rubrik_cdm.Connect(node_ip, api_token=api_token) ```
1.0
change text in quick-start.md for authentication using token - currently in the quick-start.md the following is written under the authentication section: Or by passing the node IP and API Token as follows: ``` node_ip = "192.168.0.100" api_token = "jf2jma02k3anms0" rubrik = rubrik_cdm.Connect(node_ip, api_token) ``` this is not working and it should be changed into: Or by passing the node IP and API Token as follows: ``` node_ip = "192.168.0.100" api_token = "jf2jma02k3anms0" rubrik = rubrik_cdm.Connect(node_ip, api_token=api_token) ```
non_process
change text in quick start md for authentication using token currently in the quick start md the following is written under the authentication section or by passing the node ip and api token as follows node ip api token rubrik rubrik cdm connect node ip api token this is not working and it should be changed into or by passing the node ip and api token as follows node ip api token rubrik rubrik cdm connect node ip api token api token
0
492
2,935,220,385
IssuesEvent
2015-06-30 13:32:05
e-government-ua/i
https://api.github.com/repos/e-government-ua/i
closed
На главном портале, на форме услуги, сделать скрытым поле с типом invisible
hi priority In process of testing test
следствие задачи: https://github.com/e-government-ua/i/issues/455
1.0
На главном портале, на форме услуги, сделать скрытым поле с типом invisible - следствие задачи: https://github.com/e-government-ua/i/issues/455
process
на главном портале на форме услуги сделать скрытым поле с типом invisible следствие задачи
1
200,964
15,167,500,297
IssuesEvent
2021-02-12 17:53:25
saltstack/salt
https://api.github.com/repos/saltstack/salt
closed
[TEST FAILURE] CMDRunRedirect test failure
P1 Test Failure
**Description** https://jenkins.saltproject.io/job/pr-centos7-py3-m2crypto-pytest-slow/job/master/ ``` tests.integration.states.test_cmd.CMDRunRedirectTest.test_run_unless_multiple_cmds (from pytest) ```
1.0
[TEST FAILURE] CMDRunRedirect test failure - **Description** https://jenkins.saltproject.io/job/pr-centos7-py3-m2crypto-pytest-slow/job/master/ ``` tests.integration.states.test_cmd.CMDRunRedirectTest.test_run_unless_multiple_cmds (from pytest) ```
non_process
cmdrunredirect test failure description tests integration states test cmd cmdrunredirecttest test run unless multiple cmds from pytest
0
73,205
3,409,295,176
IssuesEvent
2015-12-04 15:11:24
IQSS/dataverse
https://api.github.com/repos/IQSS/dataverse
closed
Dataset - Save Changes/Continue (Delete Files) Scroll To See Messages
Component: File Upload & Handling Component: UX & Upgrade Priority: High Status: Dev
- [x] Success msg for deleting files in the Upload + Edit Files view, displayed at top of page, and isn't visible when the page stays on the files table. - [x] Fix dataset Save Changes scrolling, so that it doesn't jump the page to the top, when you're still on the edit view, unless there is a validation error to display. Related to #1865.
1.0
Dataset - Save Changes/Continue (Delete Files) Scroll To See Messages - - [x] Success msg for deleting files in the Upload + Edit Files view, displayed at top of page, and isn't visible when the page stays on the files table. - [x] Fix dataset Save Changes scrolling, so that it doesn't jump the page to the top, when you're still on the edit view, unless there is a validation error to display. Related to #1865.
non_process
dataset save changes continue delete files scroll to see messages success msg for deleting files in the upload edit files view displayed at top of page and isn t visible when the page stays on the files table fix dataset save changes scrolling so that it doesn t jump the page to the top when you re still on the edit view unless there is a validation error to display related to
0
67,330
27,802,710,759
IssuesEvent
2023-03-17 17:03:42
hashicorp/terraform-provider-aws
https://api.github.com/repos/hashicorp/terraform-provider-aws
closed
[Bug]: Api Gateway Domain Name being recreated when truststore is changed
bug service/apigateway
### Terraform Core Version 1.4.0 ### AWS Provider Version 4.58.0 ### Affected Resource(s) aws_api_gateway_domain_name ### Expected Behavior By changing the truststore path, it should update the resource and not recreate it, as it happens in the console. ### Actual Behavior When a change in the truststore path is detected, it forces a redeployment, where it deletes the current resource and creates a new one, and because it is a unique regional resource, it makes it impossible to use lifecycle definitions. ### Relevant Error/Panic Output Snippet _No response_ ### Terraform Configuration Files ``` resource "aws_api_gateway_domain_name" "example" { domain_name = "thezeroend.click" regional_certificate_arn = "arn:aws:acm:us-east-1:ACCOUNT_ID:certificate/CERTIFICATE_ID" endpoint_configuration { types = ["REGIONAL"] } mutual_tls_authentication { truststore_uri = "s3://testebucket-gw/domain/cert1.crt" } } ``` ### Steps to Reproduce 1. Create certificate in ACM by Amazon 2. Create domain name resource with mTLS pointing to an s3 bucket ex. s3://testebucket-gw/domain/cert1.crt 3. Run Plan & Apply 4. Change the s3 path in the parameters where you should look for the truststore ex. s3://testebucket-gw/domain/cert2.crt 5. Run Plan & Apply ### Debug Output _No response_ ### Panic Output _No response_ ### Important Factoids _No response_ ### References _No response_ ### Would you like to implement a fix? No
1.0
[Bug]: Api Gateway Domain Name being recreated when truststore is changed - ### Terraform Core Version 1.4.0 ### AWS Provider Version 4.58.0 ### Affected Resource(s) aws_api_gateway_domain_name ### Expected Behavior By changing the truststore path, it should update the resource and not recreate it, as it happens in the console. ### Actual Behavior When a change in the truststore path is detected, it forces a redeployment, where it deletes the current resource and creates a new one, and because it is a unique regional resource, it makes it impossible to use lifecycle definitions. ### Relevant Error/Panic Output Snippet _No response_ ### Terraform Configuration Files ``` resource "aws_api_gateway_domain_name" "example" { domain_name = "thezeroend.click" regional_certificate_arn = "arn:aws:acm:us-east-1:ACCOUNT_ID:certificate/CERTIFICATE_ID" endpoint_configuration { types = ["REGIONAL"] } mutual_tls_authentication { truststore_uri = "s3://testebucket-gw/domain/cert1.crt" } } ``` ### Steps to Reproduce 1. Create certificate in ACM by Amazon 2. Create domain name resource with mTLS pointing to an s3 bucket ex. s3://testebucket-gw/domain/cert1.crt 3. Run Plan & Apply 4. Change the s3 path in the parameters where you should look for the truststore ex. s3://testebucket-gw/domain/cert2.crt 5. Run Plan & Apply ### Debug Output _No response_ ### Panic Output _No response_ ### Important Factoids _No response_ ### References _No response_ ### Would you like to implement a fix? No
non_process
api gateway domain name being recreated when truststore is changed terraform core version aws provider version affected resource s aws api gateway domain name expected behavior by changing the truststore path it should update the resource and not recreate it as it happens in the console actual behavior when a change in the truststore path is detected it forces a redeployment where it deletes the current resource and creates a new one and because it is a unique regional resource it makes it impossible to use lifecycle definitions relevant error panic output snippet no response terraform configuration files resource aws api gateway domain name example domain name thezeroend click regional certificate arn arn aws acm us east account id certificate certificate id endpoint configuration types mutual tls authentication truststore uri testebucket gw domain crt steps to reproduce create certificate in acm by amazon create domain name resource with mtls pointing to an bucket ex testebucket gw domain crt run plan apply change the path in the parameters where you should look for the truststore ex testebucket gw domain crt run plan apply debug output no response panic output no response important factoids no response references no response would you like to implement a fix no
0
335,370
30,026,302,315
IssuesEvent
2023-06-27 06:26:05
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
reopened
Fix paddle_tensor.test_paddle_tensor_property_shape
Sub Task Failing Test Paddle Frontend
| | | |---|---| |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5370032189/jobs/9741886618"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5370032189/jobs/9741886618"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="null"><img src=https://img.shields.io/badge/-failure-red></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5370032189/jobs/9741886618"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="null"><img src=https://img.shields.io/badge/-failure-red></a>
1.0
Fix paddle_tensor.test_paddle_tensor_property_shape - | | | |---|---| |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5370032189/jobs/9741886618"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5370032189/jobs/9741886618"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="null"><img src=https://img.shields.io/badge/-failure-red></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5370032189/jobs/9741886618"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="null"><img src=https://img.shields.io/badge/-failure-red></a>
non_process
fix paddle tensor test paddle tensor property shape paddle a href src tensorflow a href src numpy img src torch a href src jax img src
0
84,772
16,551,324,688
IssuesEvent
2021-05-28 08:55:21
google/web-stories-wp
https://api.github.com/repos/google/web-stories-wp
closed
Library: Avoid rendering inactive tabs
Group: Library Group: Media Group: Page Templates Group: Text Sets Group: Workspace P1 Pod: Pea Type: Code Quality Type: Enhancement Type: Performance
<!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ --> ## Bug Description When opening the editor, we already load all text sets and templates (page layouts) despite those tabs not being open by default. That's **6MB** of data. As per #4433, same goes for inactive media tabs. There's not much detail there but seems like a good performance win. This also causes issues with tests. See #5945 ## Expected Behaviour Text sets and page layout chunks should only be loaded the first time they're being opened. ## Steps to Reproduce 1. Open Network tab in dev tools 1. Open the editor 2. Notice that all text sets and all templates are being loaded ## Screenshots <!-- If applicable, please add screenshots to help explain your problem. Bonus points for videos! --> ## Additional Context <!-- Please complete the following information. --> - Plugin Version: 1.3.0 - WordPress Version: 5.6 - Operating System: macOS Catalina - Browser: Chrome --- _Do not alter or remove anything below. The following sections will be managed by moderators only._ ## Acceptance Criteria <!-- One or more bullet points for acceptance criteria. --> ## Implementation Brief <!-- One or more bullet points for how to technically implement the feature. -->
1.0
Library: Avoid rendering inactive tabs - <!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ --> ## Bug Description When opening the editor, we already load all text sets and templates (page layouts) despite those tabs not being open by default. That's **6MB** of data. As per #4433, same goes for inactive media tabs. There's not much detail there but seems like a good performance win. This also causes issues with tests. See #5945 ## Expected Behaviour Text sets and page layout chunks should only be loaded the first time they're being opened. ## Steps to Reproduce 1. Open Network tab in dev tools 1. Open the editor 2. Notice that all text sets and all templates are being loaded ## Screenshots <!-- If applicable, please add screenshots to help explain your problem. Bonus points for videos! --> ## Additional Context <!-- Please complete the following information. --> - Plugin Version: 1.3.0 - WordPress Version: 5.6 - Operating System: macOS Catalina - Browser: Chrome --- _Do not alter or remove anything below. The following sections will be managed by moderators only._ ## Acceptance Criteria <!-- One or more bullet points for acceptance criteria. --> ## Implementation Brief <!-- One or more bullet points for how to technically implement the feature. -->
non_process
library avoid rendering inactive tabs bug description when opening the editor we already load all text sets and templates page layouts despite those tabs not being open by default that s of data as per same goes for inactive media tabs there s not much detail there but seems like a good performance win this also causes issues with tests see expected behaviour text sets and page layout chunks should only be loaded the first time they re being opened steps to reproduce open network tab in dev tools open the editor notice that all text sets and all templates are being loaded screenshots additional context plugin version wordpress version operating system macos catalina browser chrome do not alter or remove anything below the following sections will be managed by moderators only acceptance criteria implementation brief
0
2,207
3,060,811,416
IssuesEvent
2015-08-14 23:33:09
geneontology/amigo
https://api.github.com/repos/geneontology/amigo
closed
Fix doc pipeline/generation so there is not so much churn in day-to-day operations
bug (B: affects usability)
Documentation should only exist/be generated in the gh-pages branch. Elsewhere, it should be ignored to prevent the endlessly long and confusing commits that arise from accidentally committing them with other things. This affects usability, but only for developers. This problem will be compounded as more people work on the code. Any solution here should also be applied to berkeleybop/bbop .
True
Fix doc pipeline/generation so there is not so much churn in day-to-day operations - Documentation should only exist/be generated in the gh-pages branch. Elsewhere, it should be ignored to prevent the endlessly long and confusing commits that arise from accidentally committing them with other things. This affects usability, but only for developers. This problem will be compounded as more people work on the code. Any solution here should also be applied to berkeleybop/bbop .
non_process
fix doc pipeline generation so there is not so much churn in day to day operations documentation should only exist be generated in the gh pages branch elsewhere it should be ignored to prevent the endlessly long and confusing commits that arise from accidentally committing them with other things this affects usability but only for developers this problem will be compounded as more people work on the code any solution here should also be applied to berkeleybop bbop
0
681,434
23,311,062,566
IssuesEvent
2022-08-08 08:19:07
prgrms-web-devcourse/Team-Books-CheckMoi-FE
https://api.github.com/repos/prgrms-web-devcourse/Team-Books-CheckMoi-FE
closed
Feat/μŠ€ν„°λ”” 상세 νŽ˜μ΄μ§€, κ²Œμ‹œκΈ€ 상세 νŽ˜μ΄μ§€ Tab value 전달
ν”„λ‘ νŠΈ Type: κΈ°λŠ₯μΆ”κ°€ Priority: 쀑간
## κΈ°λŠ₯ μš”μ²­ ### πŸ“Œ μ„€λͺ… <!-- λ¬Έμ œμ— λŒ€ν•œ κ°„κ²°ν•˜κ³  λΆ„λͺ…ν•œ μ„€λͺ… --> μŠ€ν„°λ”” 상세 νŽ˜μ΄μ§€μ—μ„œ κ²Œμ‹œκΈ€μ„ 클릭할 λ•Œ κ²Œμ‹œκΈ€ μƒμ„ΈνŽ˜μ΄μ§€λ‘œ 이동 ### 🎨 κ΅¬ν˜„ν•  λ‚΄μš© <!-- κ΅¬ν˜„ 사항을 ꡬ체적으둜 μ μ–΄μ£Όμ„Έμš” --> ν˜„μž¬ μ–΄λ–€ 탭을 보고 μžˆλŠ”μ§€ μŠ€ν„°λ”” 상세 νŽ˜μ΄μ§€μ—μ„œ κ²Œμ‹œκΈ€ μƒμ„ΈνŽ˜μ΄μ§€λ‘œ 전달해야 ν•˜κ³  κ²Œμ‹œκΈ€ 상세 νŽ˜μ΄μ§€μ—μ„œ λ‹€μ‹œ 탭을 λˆ„λ₯Ό 경우 μŠ€ν„°λ”” 상세 νŽ˜μ΄μ§€λ‘œ μ–΄λ–€ 탭을 보고 싢은지 전달해야 ν•œλ‹€. router.pushμ—μ„œ queryλ₯Ό μ‚¬μš©ν•΄μ„œ value κ°’μœΌλ‘œ μ²˜λ¦¬ν•  μ˜ˆμ • ### μ˜ˆμƒ κ΅¬ν˜„ μ‹œκ°„ 1 day ### μ‹œκΈ‰ν•œ 정도 <!-- 🐒 천천히, πŸƒπŸ» 보톡, 🚨 κΈ΄κΈ‰ --> πŸƒπŸ» 보톡
1.0
Feat/μŠ€ν„°λ”” 상세 νŽ˜μ΄μ§€, κ²Œμ‹œκΈ€ 상세 νŽ˜μ΄μ§€ Tab value 전달 - ## κΈ°λŠ₯ μš”μ²­ ### πŸ“Œ μ„€λͺ… <!-- λ¬Έμ œμ— λŒ€ν•œ κ°„κ²°ν•˜κ³  λΆ„λͺ…ν•œ μ„€λͺ… --> μŠ€ν„°λ”” 상세 νŽ˜μ΄μ§€μ—μ„œ κ²Œμ‹œκΈ€μ„ 클릭할 λ•Œ κ²Œμ‹œκΈ€ μƒμ„ΈνŽ˜μ΄μ§€λ‘œ 이동 ### 🎨 κ΅¬ν˜„ν•  λ‚΄μš© <!-- κ΅¬ν˜„ 사항을 ꡬ체적으둜 μ μ–΄μ£Όμ„Έμš” --> ν˜„μž¬ μ–΄λ–€ 탭을 보고 μžˆλŠ”μ§€ μŠ€ν„°λ”” 상세 νŽ˜μ΄μ§€μ—μ„œ κ²Œμ‹œκΈ€ μƒμ„ΈνŽ˜μ΄μ§€λ‘œ 전달해야 ν•˜κ³  κ²Œμ‹œκΈ€ 상세 νŽ˜μ΄μ§€μ—μ„œ λ‹€μ‹œ 탭을 λˆ„λ₯Ό 경우 μŠ€ν„°λ”” 상세 νŽ˜μ΄μ§€λ‘œ μ–΄λ–€ 탭을 보고 싢은지 전달해야 ν•œλ‹€. router.pushμ—μ„œ queryλ₯Ό μ‚¬μš©ν•΄μ„œ value κ°’μœΌλ‘œ μ²˜λ¦¬ν•  μ˜ˆμ • ### μ˜ˆμƒ κ΅¬ν˜„ μ‹œκ°„ 1 day ### μ‹œκΈ‰ν•œ 정도 <!-- 🐒 천천히, πŸƒπŸ» 보톡, 🚨 κΈ΄κΈ‰ --> πŸƒπŸ» 보톡
non_process
feat μŠ€ν„°λ”” 상세 νŽ˜μ΄μ§€ κ²Œμ‹œκΈ€ 상세 νŽ˜μ΄μ§€ tab value 전달 κΈ°λŠ₯ μš”μ²­ πŸ“Œ μ„€λͺ… μŠ€ν„°λ”” 상세 νŽ˜μ΄μ§€μ—μ„œ κ²Œμ‹œκΈ€μ„ 클릭할 λ•Œ κ²Œμ‹œκΈ€ μƒμ„ΈνŽ˜μ΄μ§€λ‘œ 이동 🎨 κ΅¬ν˜„ν•  λ‚΄μš© ν˜„μž¬ μ–΄λ–€ 탭을 보고 μžˆλŠ”μ§€ μŠ€ν„°λ”” 상세 νŽ˜μ΄μ§€μ—μ„œ κ²Œμ‹œκΈ€ μƒμ„ΈνŽ˜μ΄μ§€λ‘œ 전달해야 ν•˜κ³  κ²Œμ‹œκΈ€ 상세 νŽ˜μ΄μ§€μ—μ„œ λ‹€μ‹œ 탭을 λˆ„λ₯Ό 경우 μŠ€ν„°λ”” 상세 νŽ˜μ΄μ§€λ‘œ μ–΄λ–€ 탭을 보고 싢은지 전달해야 ν•œλ‹€ router pushμ—μ„œ queryλ₯Ό μ‚¬μš©ν•΄μ„œ value κ°’μœΌλ‘œ μ²˜λ¦¬ν•  μ˜ˆμ • μ˜ˆμƒ κ΅¬ν˜„ μ‹œκ°„ day μ‹œκΈ‰ν•œ 정도 πŸƒπŸ» 보톡
0
86,451
15,755,661,256
IssuesEvent
2021-03-31 02:10:25
attesch/PrestaShop
https://api.github.com/repos/attesch/PrestaShop
opened
CVE-2020-8244 (Medium) detected in bl-1.2.1.tgz, bl-0.9.5.tgz
security vulnerability
## CVE-2020-8244 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bl-1.2.1.tgz</b>, <b>bl-0.9.5.tgz</b></p></summary> <p> <details><summary><b>bl-1.2.1.tgz</b></p></summary> <p>Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!</p> <p>Library home page: <a href="https://registry.npmjs.org/bl/-/bl-1.2.1.tgz">https://registry.npmjs.org/bl/-/bl-1.2.1.tgz</a></p> <p> Dependency Hierarchy: - selenium-standalone-6.14.0.tgz (Root Library) - tar-stream-1.5.2.tgz - :x: **bl-1.2.1.tgz** (Vulnerable Library) </details> <details><summary><b>bl-0.9.5.tgz</b></p></summary> <p>Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!</p> <p>Library home page: <a href="https://registry.npmjs.org/bl/-/bl-0.9.5.tgz">https://registry.npmjs.org/bl/-/bl-0.9.5.tgz</a></p> <p> Dependency Hierarchy: - webdriverio-3.4.0.tgz (Root Library) - archiver-0.14.4.tgz - tar-stream-1.1.5.tgz - :x: **bl-0.9.5.tgz** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A buffer over-read vulnerability exists in bl <4.0.3, <3.0.1, <2.2.1, and <1.2.3 which could allow an attacker to supply user input (even typed) that if it ends up in consume() argument and can become negative, the BufferList state can be corrupted, tricking it into exposing uninitialized memory via regular .slice() calls. 
<p>Publish Date: 2020-08-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8244>CVE-2020-8244</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8244">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8244</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 2.2.1,3.0.1,4.0.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-8244 (Medium) detected in bl-1.2.1.tgz, bl-0.9.5.tgz - ## CVE-2020-8244 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bl-1.2.1.tgz</b>, <b>bl-0.9.5.tgz</b></p></summary> <p> <details><summary><b>bl-1.2.1.tgz</b></p></summary> <p>Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!</p> <p>Library home page: <a href="https://registry.npmjs.org/bl/-/bl-1.2.1.tgz">https://registry.npmjs.org/bl/-/bl-1.2.1.tgz</a></p> <p> Dependency Hierarchy: - selenium-standalone-6.14.0.tgz (Root Library) - tar-stream-1.5.2.tgz - :x: **bl-1.2.1.tgz** (Vulnerable Library) </details> <details><summary><b>bl-0.9.5.tgz</b></p></summary> <p>Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!</p> <p>Library home page: <a href="https://registry.npmjs.org/bl/-/bl-0.9.5.tgz">https://registry.npmjs.org/bl/-/bl-0.9.5.tgz</a></p> <p> Dependency Hierarchy: - webdriverio-3.4.0.tgz (Root Library) - archiver-0.14.4.tgz - tar-stream-1.1.5.tgz - :x: **bl-0.9.5.tgz** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A buffer over-read vulnerability exists in bl <4.0.3, <3.0.1, <2.2.1, and <1.2.3 which could allow an attacker to supply user input (even typed) that if it ends up in consume() argument and can become negative, the BufferList state can be corrupted, tricking it into exposing uninitialized memory via regular .slice() calls. 
<p>Publish Date: 2020-08-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8244>CVE-2020-8244</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8244">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8244</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 2.2.1,3.0.1,4.0.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in bl tgz bl tgz cve medium severity vulnerability vulnerable libraries bl tgz bl tgz bl tgz buffer list collect buffers and access with a standard readable buffer interface streamable too library home page a href dependency hierarchy selenium standalone tgz root library tar stream tgz x bl tgz vulnerable library bl tgz buffer list collect buffers and access with a standard readable buffer interface streamable too library home page a href dependency hierarchy webdriverio tgz root library archiver tgz tar stream tgz x bl tgz vulnerable library vulnerability details a buffer over read vulnerability exists in bl and which could allow an attacker to supply user input even typed that if it ends up in consume argument and can become negative the bufferlist state can be corrupted tricking it into exposing uninitialized memory via regular slice calls publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
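The bl record above describes CVE-2020-8244: a negative argument reaching `consume()` corrupts the BufferList's internal offset, so later `.slice()` calls re-expose data that was supposed to be consumed (in Node, even uninitialized Buffer memory). A toy Python sketch of the flawed state machine — the class name and fields are illustrative, not bl's real implementation:

```python
class BufferList:
    """Toy model of bl's pre-fix consume() flaw (not the real library)."""

    def __init__(self, data: bytes):
        self._data = data
        self._offset = 0  # how many bytes the caller has consumed so far

    def consume(self, n: int):
        # Patched versions (1.2.3 / 2.2.1 / 3.0.1 / 4.0.3) validate n;
        # before the fix, a negative n simply moved the offset backwards,
        # so already-consumed bytes became readable again.
        self._offset += n

    def slice(self) -> bytes:
        return self._data[max(self._offset, 0):]
```

Feeding attacker-controlled input into `consume()` as a negative number therefore walks the read window back over previously consumed content, which is the over-read the CVE describes.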
15,969
20,187,604,147
IssuesEvent
2022-02-11 00:28:25
NationalSecurityAgency/ghidra
https://api.github.com/repos/NationalSecurityAgency/ghidra
reopened
ARM Unsupported Relocations
Feature: Loader/ELF Feature: Processor/ARM
**Is your feature request related to a problem? Please describe.** The arm/thumb relocations R_ARM_THM_JUMP11, R_ARM_THM_MOVT_ABS, R_ARM_THM_MOVW_ABS_NC, R_ARM_TARGET2 and R_ARM_TARGET1 are currently unsupported. **Describe the solution you'd like** These relocations should be supported.
1.0
ARM Unsupported Relocations - **Is your feature request related to a problem? Please describe.** The arm/thumb relocations R_ARM_THM_JUMP11, R_ARM_THM_MOVT_ABS, R_ARM_THM_MOVW_ABS_NC, R_ARM_TARGET2 and R_ARM_TARGET1 are currently unsupported. **Describe the solution you'd like** These relocations should be supported.
process
arm unsupported relocations is your feature request related to a problem please describe the arm thumb relocations r arm thm r arm thm movt abs r arm thm movw abs nc r arm and r arm are currently unsupported describe the solution you d like these relocations should be supported
1
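For context on what supporting one of the relocations in the Ghidra record above involves: per the ELF for the ARM Architecture ABI, R_ARM_THM_JUMP11 computes `(S + A) - P` and stores it, shifted right by one, in the low 11 bits of a Thumb 16-bit B (T2 encoding) instruction. A hedged sketch of that patch step — the helper name is hypothetical and this is not Ghidra's actual relocation handler:

```python
def apply_r_arm_thm_jump11(insn: int, s: int, a: int, p: int) -> int:
    """Patch a Thumb16 B instruction word with R_ARM_THM_JUMP11.

    Sketch only, following the ABI formula (S + A) - P; the result is
    halfword-aligned, so it is stored >> 1 in the imm11 field (bits 0-10).
    """
    value = (s + a - p) >> 1
    return (insn & 0xF800) | (value & 0x7FF)
```

A real handler would also range-check the displacement (imm11 is a signed field covering roughly ±2 KiB) and report an overflow instead of silently truncating.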
672,448
22,826,563,377
IssuesEvent
2022-07-12 09:06:10
insightsengineering/teal.modules.clinical
https://api.github.com/repos/insightsengineering/teal.modules.clinical
closed
Slight refactoring to adopt upstream changes in `tern.mmrm`
enhancement sme priority
Background: We are heavily refactoring `tern.mmrm` based on the new package `mmrm`. Therefore we need slight adaptations here. To do: See https://github.com/insightsengineering/teal.modules.clinical/blob/6893643784af93cf87874f5e6c0ba93ea6f8098b/R/tm_a_mmrm.R#L666 - [x] Change available correlation structures to currently only "Unstructured" (to be extended in July) - [x] Change available optimization methods to what is available in `mmrm`
1.0
Slight refactoring to adopt upstream changes in `tern.mmrm` - Background: We are heavily refactoring `tern.mmrm` based on the new package `mmrm`. Therefore we need slight adaptations here. To do: See https://github.com/insightsengineering/teal.modules.clinical/blob/6893643784af93cf87874f5e6c0ba93ea6f8098b/R/tm_a_mmrm.R#L666 - [x] Change available correlation structures to currently only "Unstructured" (to be extended in July) - [x] Change available optimization methods to what is available in `mmrm`
non_process
slight refactoring to adopt upstream changes in tern mmrm background we are heavily refactoring tern mmrm based on the new package mmrm therefore we need slight adaptations here to do see change available correlation structures to currently only unstructured to be extended in july change available optimization methods to what is available in mmrm
0
232,497
17,784,186,629
IssuesEvent
2021-08-31 09:04:23
longturn/freeciv21
https://api.github.com/repos/longturn/freeciv21
closed
Update FAQ following #588
documentation
**What should be documented? Is there something wrong in the documentation?** The FAQ says: > The Windows Server currently has a defect that does not allow it to run automatically from the client. See Issue #341 as well as work in progress pull request #462. The current work around is to manually start the server and a game and then connect to it via the client. > > This defect is not present in the Linux builds. This is no longer accurate. **Do you have suggestions?** Delete the offending text.
1.0
Update FAQ following #588 - **What should be documented? Is there something wrong in the documentation?** The FAQ says: > The Windows Server currently has a defect that does not allow it to run automatically from the client. See Issue #341 as well as work in progress pull request #462. The current work around is to manually start the server and a game and then connect to it via the client. > > This defect is not present in the Linux builds. This is no longer accurate. **Do you have suggestions?** Delete the offending text.
non_process
update faq following what should be documented is there something wrong in the documentation the faq says the windows server currently has a defect that does not allow it to run automatically from the client see issue as well as work in progress pull request the current work around is to manually start the server and a game and then connect to it via the client this defect is not present in the linux builds this is no longer accurate do you have suggestions delete the offending text
0
195,809
14,767,560,046
IssuesEvent
2021-01-10 07:30:00
LetMeR00t/TA-thehive-cortex
https://api.github.com/repos/LetMeR00t/TA-thehive-cortex
closed
Python script errors throwing 'External search command 'thehivecases' returned error code 1. '
fix-provided-wait-for-test
Hi @LetMeR00t i have the same issue and here are the python errors. 01-07-2021 16:05:57.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py': Traceback (most recent call last): 01-07-2021 16:05:57.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py': File "/opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py", line 43, in 01-07-2021 16:05:57.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py': configuration = Settings("spl, logger") 01-07-2021 16:05:57.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py': File "/opt/splunk/etc/apps/TA-thehive-cortex/bin/common.py", line 13, in init 01-07-2021 16:05:57.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py': for i in client.inputs: 01-07-2021 16:05:57.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py': AttributeError: 'str' object has no attribute 'inputs' 01-07-2021 16:05:57.136 ERROR script - xxxxxxxxxxxxxxxxxxxxx__search6_1610035556.143 External search command 'thehivecases' returned error code 1. . what do you think the issue might be?
1.0
Python script errors throwing 'External search command 'thehivecases' returned error code 1. ' - Hi @LetMeR00t i have the same issue and here are the python errors. 01-07-2021 16:05:57.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py': Traceback (most recent call last): 01-07-2021 16:05:57.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py': File "/opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py", line 43, in 01-07-2021 16:05:57.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py': configuration = Settings("spl, logger") 01-07-2021 16:05:57.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py': File "/opt/splunk/etc/apps/TA-thehive-cortex/bin/common.py", line 13, in init 01-07-2021 16:05:57.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py': for i in client.inputs: 01-07-2021 16:05:57.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-thehive-cortex/bin/thehive_search_cases.py': AttributeError: 'str' object has no attribute 'inputs' 01-07-2021 16:05:57.136 ERROR script - xxxxxxxxxxxxxxxxxxxxx__search6_1610035556.143 External search command 'thehivecases' returned error code 1. . what do you think the issue might be?
non_process
python script errors throwing external search command thehivecases returned error code hi i have the same issue and here are the python errors error scriptrunner stderr from opt splunk bin opt splunk etc apps ta thehive cortex bin thehive search cases py traceback most recent call last error scriptrunner stderr from opt splunk bin opt splunk etc apps ta thehive cortex bin thehive search cases py file opt splunk etc apps ta thehive cortex bin thehive search cases py line in error scriptrunner stderr from opt splunk bin opt splunk etc apps ta thehive cortex bin thehive search cases py configuration settings spl logger error scriptrunner stderr from opt splunk bin opt splunk etc apps ta thehive cortex bin thehive search cases py file opt splunk etc apps ta thehive cortex bin common py line in init error scriptrunner stderr from opt splunk bin opt splunk etc apps ta thehive cortex bin thehive search cases py for i in client inputs error scriptrunner stderr from opt splunk bin opt splunk etc apps ta thehive cortex bin thehive search cases py attributeerror str object has no attribute inputs error script xxxxxxxxxxxxxxxxxxxxx external search command thehivecases returned error code what do you think the issue might be
0
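The traceback in the TA-thehive-cortex record above pinpoints the bug: `Settings("spl, logger")` passes a single *string* where `common.py` expects a client object exposing `.inputs`, so the loop dies with `AttributeError: 'str' object has no attribute 'inputs'`. A minimal reproduction — `FakeSplunkService` is a hypothetical stand-in for the splunklib client, not the add-on's real class:

```python
class FakeSplunkService:
    """Hypothetical stand-in for the Splunk SDK client Settings expects."""

    def __init__(self):
        self.inputs = ["monitor://...", "tcp://..."]


class Settings:
    def __init__(self, client):
        self.stanzas = []
        # This mirrors the failing line in common.py: when `client` is the
        # literal string "spl, logger" instead of a client object,
        # `client.inputs` raises AttributeError.
        for i in client.inputs:
            self.stanzas.append(i)
```

The fix suggested by the traceback is to call `Settings(spl, logger)` (two arguments) rather than `Settings("spl, logger")` (one quoted string).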
397,279
11,726,208,845
IssuesEvent
2020-03-10 14:11:11
vvMv/rpgplus
https://api.github.com/repos/vvMv/rpgplus
closed
Farming Cocoa Beans and Nether Warts
Priority: Low Status: Completed Type: Bug
**Description of the bug** Harvesting Cocoa or Nether Warts does not grant experience. Nether Warts is not in the config and probably should be. With the Cocoa you have cocoa_beans in the config and not just cocoa. You don't farm the seeds, you farm the cocoa block. Similar to how you farm wheat and not the wheat seeds. **To reproduce** Break a cocoa pod or nether warts, no experience is gained. **Expected behavior** As both of these are farmed items, both should give experience.
1.0
Farming Cocoa Beans and Nether Warts - **Description of the bug** Harvesting Cocoa or Nether Warts does not grant experience. Nether Warts is not in the config and probably should be. With the Cocoa you have cocoa_beans in the config and not just cocoa. You don't farm the seeds, you farm the cocoa block. Similar to how you farm wheat and not the wheat seeds. **To reproduce** Break a cocoa pod or nether warts, no experience is gained. **Expected behavior** As both of these are farmed items, both should give experience.
non_process
farming cocoa beans and nether warts description of the bug harvesting cocoa or nether warts does not grant experience nether warts is not in the config and probably should be with the cocoa you have cocoa beans in the config and not just cocoa you don t farm the seeds you farm the cocoa block similar to how you farm wheat and not the wheat seeds to reproduce break a cocoa pod or nether warts no experience is gained expected behavior as both of these are farmed items both should give experience
0
16,738
21,900,205,147
IssuesEvent
2022-05-20 12:43:34
vezel-dev/cathode
https://api.github.com/repos/vezel-dev/cathode
opened
Kill entire `ChildProcess` tree on cancellation by default (with opt-out on `ChildProcessBuilder`)
type: feature state: approved area: processes
https://github.com/vezel-dev/cathode/blob/3464a162077b3c1313fd86a3c8c9f73bae2c87a4/src/core/Processes/ChildProcess.cs#L128-L141
1.0
Kill entire `ChildProcess` tree on cancellation by default (with opt-out on `ChildProcessBuilder`) - https://github.com/vezel-dev/cathode/blob/3464a162077b3c1313fd86a3c8c9f73bae2c87a4/src/core/Processes/ChildProcess.cs#L128-L141
process
kill entire childprocess tree on cancellation by default with opt out on childprocessbuilder
1
7,923
11,098,895,428
IssuesEvent
2019-12-16 16:00:38
GoogleCloudPlatform/java-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/java-docs-samples
opened
Fix ignored test: DeploymentTests.rolloutStartSetGetTargetCommitTests()
type: process
## In which file did you encounter the issue? https://github.com/GoogleCloudPlatform/java-docs-samples/blob/master/gameservices/v1alpha/src/test/java/com/google/cloud/gameservices/samples/DeploymentTests.java#L112 ### Did you change the file? If so, how? No ## Describe the issue The test is ignored.
1.0
Fix ignored test: DeploymentTests.rolloutStartSetGetTargetCommitTests() - ## In which file did you encounter the issue? https://github.com/GoogleCloudPlatform/java-docs-samples/blob/master/gameservices/v1alpha/src/test/java/com/google/cloud/gameservices/samples/DeploymentTests.java#L112 ### Did you change the file? If so, how? No ## Describe the issue The test is ignored.
process
fix ignored test deploymenttests rolloutstartsetgettargetcommittests in which file did you encounter the issue did you change the file if so how no describe the issue the test is ignored
1
46,332
9,924,417,687
IssuesEvent
2019-07-01 09:38:51
astrolabsoftware/fink-broker
https://api.github.com/repos/astrolabsoftware/fink-broker
closed
PEP8 complaints
code quality
**Describe the issue** My editor is complaining about PEP8 rules. Need to be fixed!
1.0
PEP8 complaints - **Describe the issue** My editor is complaining about PEP8 rules. Need to be fixed!
non_process
complaints describe the issue my editor is complaining about rules need to be fixed
0
17,450
10,052,802,742
IssuesEvent
2019-07-21 11:33:10
tyhal/tyhal.com
https://api.github.com/repos/tyhal/tyhal.com
closed
CVE-2019-10746 (High) detected in mixin-deep-1.3.1.tgz
security vulnerability
## CVE-2019-10746 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mixin-deep-1.3.1.tgz</b></p></summary> <p>Deeply mix the properties of objects into the first object. Like merge-deep, but doesn't clone.</p> <p>Library home page: <a href="https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.1.tgz">https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.1.tgz</a></p> <p>Path to dependency file: /tyhal.com/package.json</p> <p>Path to vulnerable library: /tmp/git/tyhal.com/node_modules/mixin-deep/package.json</p> <p> Dependency Hierarchy: - node-sass-chokidar-1.3.5.tgz (Root Library) - chokidar-2.0.4.tgz - braces-2.3.2.tgz - snapdragon-0.8.2.tgz - base-0.11.2.tgz - :x: **mixin-deep-1.3.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/tyhal/tyhal.com/commit/294ccdde56fee06e542a5cf23a87355e8341b56f">294ccdde56fee06e542a5cf23a87355e8341b56f</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> mixin-deep before 1.3.2 is vulnerable to Prototype Pollution. 
<p>Publish Date: 2019-07-11 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10746>CVE-2019-10746</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jonschlinkert/mixin-deep/commit/8f464c8ce9761a8c9c2b3457eaeee9d404fa7af9">https://github.com/jonschlinkert/mixin-deep/commit/8f464c8ce9761a8c9c2b3457eaeee9d404fa7af9</a></p> <p>Release Date: 2019-07-11</p> <p>Fix Resolution: 1.3.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-10746 (High) detected in mixin-deep-1.3.1.tgz - ## CVE-2019-10746 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mixin-deep-1.3.1.tgz</b></p></summary> <p>Deeply mix the properties of objects into the first object. Like merge-deep, but doesn't clone.</p> <p>Library home page: <a href="https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.1.tgz">https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.1.tgz</a></p> <p>Path to dependency file: /tyhal.com/package.json</p> <p>Path to vulnerable library: /tmp/git/tyhal.com/node_modules/mixin-deep/package.json</p> <p> Dependency Hierarchy: - node-sass-chokidar-1.3.5.tgz (Root Library) - chokidar-2.0.4.tgz - braces-2.3.2.tgz - snapdragon-0.8.2.tgz - base-0.11.2.tgz - :x: **mixin-deep-1.3.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/tyhal/tyhal.com/commit/294ccdde56fee06e542a5cf23a87355e8341b56f">294ccdde56fee06e542a5cf23a87355e8341b56f</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> mixin-deep before 1.3.2 is vulnerable to Prototype Pollution. 
<p>Publish Date: 2019-07-11 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10746>CVE-2019-10746</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jonschlinkert/mixin-deep/commit/8f464c8ce9761a8c9c2b3457eaeee9d404fa7af9">https://github.com/jonschlinkert/mixin-deep/commit/8f464c8ce9761a8c9c2b3457eaeee9d404fa7af9</a></p> <p>Release Date: 2019-07-11</p> <p>Fix Resolution: 1.3.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in mixin deep tgz cve high severity vulnerability vulnerable library mixin deep tgz deeply mix the properties of objects into the first object like merge deep but doesn t clone library home page a href path to dependency file tyhal com package json path to vulnerable library tmp git tyhal com node modules mixin deep package json dependency hierarchy node sass chokidar tgz root library chokidar tgz braces tgz snapdragon tgz base tgz x mixin deep tgz vulnerable library found in head commit a href vulnerability details mixin deep before is vulnerable to prototype pollution publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
20,550
27,210,451,889
IssuesEvent
2023-02-20 16:08:02
cse442-at-ub/project_s23-iweatherify
https://api.github.com/repos/cse442-at-ub/project_s23-iweatherify
closed
Creating documentation for potential frontend frameworks with pros and cons of each framework
Processing Task Sprint 1
Task Tests 1. **https://docs.google.com/document/d/1VPASw7b6GKhC09t9R5XjnwVZZTecglqjZ93R-RkcHf8/edit** refer to this
1.0
Creating documentation for potential frontend frameworks with pros and cons of each framework - Task Tests 1. **https://docs.google.com/document/d/1VPASw7b6GKhC09t9R5XjnwVZZTecglqjZ93R-RkcHf8/edit** refer to this
process
creating documentation for potential frontend frameworks with pros and cons of each framework task tests refer to this
1
447,436
31,710,717,325
IssuesEvent
2023-09-09 08:22:07
samuelsongg/INF2001_P6-4
https://api.github.com/repos/samuelsongg/INF2001_P6-4
opened
#1.1.4 Requirements Finalization
documentation
**Requirements Review:** Review the gathered requirements to ensure they are clear, complete, and feasible. Identify any potential gaps or ambiguities that need clarification with the client to confirm that they accurately represent their needs and expectations. **Formal Client Acceptance:** Before finalizing the requirements, obtain formal acceptance from the client via follow-up emails. **Success Criteria:** Obtain formal client acceptance via follow-up emails **Start Date:** 7 Sep 2023 **Allocated Time:** Before deadline **Deadline:** Before 2359, 17 Sep 2023
1.0
#1.1.4 Requirements Finalization - **Requirements Review:** Review the gathered requirements to ensure they are clear, complete, and feasible. Identify any potential gaps or ambiguities that need clarification with the client to confirm that they accurately represent their needs and expectations. **Formal Client Acceptance:** Before finalizing the requirements, obtain formal acceptance from the client via follow-up emails. **Success Criteria:** Obtain formal client acceptance via follow-up emails **Start Date:** 7 Sep 2023 **Allocated Time:** Before deadline **Deadline:** Before 2359, 17 Sep 2023
non_process
requirements finalization requirements review review the gathered requirements to ensure they are clear complete and feasible identify any potential gaps or ambiguities that need clarification with the client to confirm that they accurately represent their needs and expectations formal client acceptance before finalizing the requirements obtain formal acceptance from the client via follow up emails success criteria obtain formal client acceptance via follow up emails start date sep allocated time before deadline deadline before sep
0
349,970
31,844,614,957
IssuesEvent
2023-09-14 18:54:09
Azure/azure-sdk-for-java
https://api.github.com/repos/Azure/azure-sdk-for-java
closed
Monitor Ingestion Samples Issue
Monitor Client test-manual-pass
1. **Section** [link](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/monitor/azure-monitor-ingestion/src/samples/java/com/azure/monitor/ingestion/ReadmeSamples.java#L122): ![image](https://github.com/Azure/azure-sdk-for-java/assets/111940661/4a57bbb2-8e85-4e75-b1b1-ba166215dbce) **Reason**: java.lang.IllegalStateException: endpoint is required to build the client. **Suggestion**: Update code to ``` LogsIngestionClient logsIngestionClient = new LogsIngestionClientBuilder() .endpoint("<data-collection-endpoint>") .credential(credential) .httpLogOptions(new HttpLogOptions().setLogLevel(HttpLogDetailLevel.BODY_AND_HEADERS)) .buildClient(); ``` @sandeep-sen, @josefree, @scottaddie and @srnagar for notification.
1.0
Monitor Ingestion Samples Issue - 1. **Section** [link](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/monitor/azure-monitor-ingestion/src/samples/java/com/azure/monitor/ingestion/ReadmeSamples.java#L122): ![image](https://github.com/Azure/azure-sdk-for-java/assets/111940661/4a57bbb2-8e85-4e75-b1b1-ba166215dbce) **Reason**: java.lang.IllegalStateException: endpoint is required to build the client. **Suggestion**: Update code to ``` LogsIngestionClient logsIngestionClient = new LogsIngestionClientBuilder() .endpoint("<data-collection-endpoint>") .credential(credential) .httpLogOptions(new HttpLogOptions().setLogLevel(HttpLogDetailLevel.BODY_AND_HEADERS)) .buildClient(); ``` @sandeep-sen, @josefree, @scottaddie and @srnagar for notification.
non_process
monitor ingestion samples issue section reason java lang illegalstateexception endpoint is required to build the client suggestion update code to logsingestionclient logsingestionclient new logsingestionclientbuilder endpoint credential credential httplogoptions new httplogoptions setloglevel httplogdetaillevel body and headers buildclient sandeep sen josefree scottaddie and srnagar for notification
0
123,592
26,280,913,828
IssuesEvent
2023-01-07 09:33:18
PrjAdv/prjuptime
https://api.github.com/repos/PrjAdv/prjuptime
closed
🛑 ECODELCINEMA is down
status ecodelcinema
In [`7974d99`](https://github.com/PrjAdv/prjuptime/commit/7974d991d79bbd9536c401dc9894efb73fce9353 ), ECODELCINEMA (https://ecodelcinema.com) was **down**: - HTTP code: 200 - Response time: 564 ms
1.0
🛑 ECODELCINEMA is down - In [`7974d99`](https://github.com/PrjAdv/prjuptime/commit/7974d991d79bbd9536c401dc9894efb73fce9353 ), ECODELCINEMA (https://ecodelcinema.com) was **down**: - HTTP code: 200 - Response time: 564 ms
non_process
🛑 ecodelcinema is down in ecodelcinema was down http code response time ms
0
18,919
24,867,013,954
IssuesEvent
2022-10-27 12:44:23
brucemiller/LaTeXML
https://api.github.com/repos/brucemiller/LaTeXML
closed
TeX default language is US English, LaTeXML has no default
enhancement postprocessing
Directly inspired by https://github.com/brucemiller/LaTeXML/issues/1440#issuecomment-897874048 (cc @nxg). The TeX default language is deliberately US English. I think it makes sense for LaTeXML to set the language to US English by default, so that the output starts with the appropriate `<html lang="en-US">` even if the user does not import babel. PS: I have tried sticking `MergeFont(language => "en-US")` in `TeX.pool.ltxml` and it seems to work correctly in like 90% of the situations. In some of the tests (especially the plain TeX ones), the language does not bubble up to `<ltx:document>`, and weird stuff happens.
1.0
TeX default language is US English, LaTeXML has no default - Directly inspired by https://github.com/brucemiller/LaTeXML/issues/1440#issuecomment-897874048 (cc @nxg). The TeX default language is deliberately US English. I think it makes sense for LaTeXML to set the language to US English by default, so that the output starts with the appropriate `<html lang="en-US">` even if the user does not import babel. PS: I have tried sticking `MergeFont(language => "en-US")` in `TeX.pool.ltxml` and it seems to work correctly in like 90% of the situations. In some of the tests (especially the plain TeX ones), the language does not bubble up to `<ltx:document>`, and weird stuff happens.
process
tex default language is us english latexml has no default directly inspired by cc nxg the tex default language is deliberately us english i think it makes sense for latexml to set the language to us english by default so that the output starts with the appropriate even if the user does not import babel ps i have tried sticking mergefont language en us in tex pool ltxml and it seems to work correctly in like of the situations in some of the tests especially the plain tex ones the language does not bubble up to and weird stuff happens
1
113,763
11,813,655,498
IssuesEvent
2020-03-19 23:02:13
arcfide/Haven
https://api.github.com/repos/arcfide/Haven
closed
Unclear Function Examples
documentation
Query "Intersection R←X∩Y" returns the following examples: > 'ABRA'∩'CAR' ARA 1 'PLUS' 2 ∩ ⍳5 1 > 2 These examples make it difficult to distinguish which elements are the result of the function (presumably they are ARA and 1 2).
1.0
Unclear Function Examples - Query "Intersection R←X∩Y" returns the following examples: > 'ABRA'∩'CAR' ARA 1 'PLUS' 2 ∩ ⍳5 1 > 2 These examples make it difficult to distinguish which elements are the result of the function (presumably they are ARA and 1 2).
non_process
unclear function examples query intersection r←x∩y returns the following examples abra ∩ car ara plus ∩ ⍳ these examples make it difficult to distinguish which elements are the result of the function presumably they are ara and
0
180,911
6,654,382,904
IssuesEvent
2017-09-29 12:36:09
workcraft/workcraft
https://api.github.com/repos/workcraft/workcraft
opened
Scalable scripting interface
enhancement priority:high tag:script
- [ ] Enable plugin-specific JavaScript wrapper functions that are loaded on Workcraft initialisation - [ ] Create wrapper functions for popular commands (synthesis, verification, statistics, etc.) - [ ] Split core wrapper functions into categories (file, import/export, output, misc, etc.) - [ ] Update documentation and integration tests to use these wrapper functions
1.0
Scalable scripting interface - - [ ] Enable plugin-specific JavaScript wrapper functions that are loaded on Workcraft initialisation - [ ] Create wrapper functions for popular commands (synthesis, verification, statistics, etc.) - [ ] Split core wrapper functions into categories (file, import/export, output, misc, etc.) - [ ] Update documentation and integration tests to use these wrapper functions
non_process
scalable scripting interface enable plugin specific javascript wrapper functions that are loaded on workcraft initialisation create wrapper functions for popular commands synthesis verification statistics etc split core wrapper functions into categories file import export output misc etc update documentation and integration tests to use these wrapper functions
0
7,208
5,919,096,630
IssuesEvent
2017-05-22 16:50:34
bocoup/opendesignkit
https://api.github.com/repos/bocoup/opendesignkit
opened
Webperf Mini-Audit
Site Performance
Figure we run a couple tests, identify any pain points, make some notes on potential improvements, etc. in a comment below, then spin that off into individual issues and close this out.
True
Webperf Mini-Audit - Figure we run a couple tests, identify any pain points, make some notes on potential improvements, etc. in a comment below, then spin that off into individual issues and close this out.
non_process
webperf mini audit figure we run a couple tests identify any pain points make some notes on potential improvements etc in a comment below then spin that off into individual issues and close this out
0
793,243
27,987,817,457
IssuesEvent
2023-03-26 21:58:16
Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2
https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2
opened
AI_rational Clipping with Male Pygmies
priority low :grey_exclamation: bug :bug: 3D graphics :triangular_ruler:
<!-- **DO NOT REMOVE PRE-EXISTING LINES** ------------------------------------------------------------------------------------------------------------ --> **Your mod version is:** 0.1.0 **What expansions do you have installed?** All up to 1.5.0.1 **Are you using any submods/mods? If so, which?** No **Please explain your issue in as much detail as possible:** Pigmies clip in `AI_rational` animation due to their huge chin. The distance between their fingers and chin should be increased in `male_body_AI_rationalFatDwarf_2.anim` and `male_body_AI_rationalFatDwarf_1.anim`. **Steps to reproduce the issue:** **Upload an attachment below: .zip of your save, or screenshots:** ![image](https://user-images.githubusercontent.com/46384992/227805797-72a7b23a-a2ba-4433-bb49-8b215025ecb3.png)
1.0
AI_rational Clipping with Male Pygmies - <!-- **DO NOT REMOVE PRE-EXISTING LINES** ------------------------------------------------------------------------------------------------------------ --> **Your mod version is:** 0.1.0 **What expansions do you have installed?** All up to 1.5.0.1 **Are you using any submods/mods? If so, which?** No **Please explain your issue in as much detail as possible:** Pigmies clip in `AI_rational` animation due to their huge chin. The distance between their fingers and chin should be increased in `male_body_AI_rationalFatDwarf_2.anim` and `male_body_AI_rationalFatDwarf_1.anim`. **Steps to reproduce the issue:** **Upload an attachment below: .zip of your save, or screenshots:** ![image](https://user-images.githubusercontent.com/46384992/227805797-72a7b23a-a2ba-4433-bb49-8b215025ecb3.png)
non_process
ai rational clipping with male pygmies do not remove pre existing lines your mod version is what expansions do you have installed all up to are you using any submods mods if so which no please explain your issue in as much detail as possible pigmies clip in ai rational animation due to their huge chin the distance between their fingers and chin should be increased in male body ai rationalfatdwarf anim and male body ai rationalfatdwarf anim steps to reproduce the issue upload an attachment below zip of your save or screenshots
0
8,436
11,597,877,920
IssuesEvent
2020-02-24 21:48:26
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
reopened
obsolete "in other organism" terms
multi-species process obsoletion
catabolism by organism of carbohydrate in other organism involved in symbiotic interaction catabolism by organism of cell wall cellulose in other organism involved in symbiotic interaction catabolism by organism of cell wall chitin in other organism involved in symbiotic interaction catabolism by organism of cell wall pectin in other organism involved in symbiotic interaction catabolism by organism of glucan in other organism involved in symbiotic interaction catabolism by organism of macromolecule in other organism involved in symbiotic interaction catabolism by organism of protein in other organism involved in symbiotic interaction catabolism by organism of xylan in other organism involved in symbiotic interaction catabolism of substance in other organism involved in symbiotic interaction disruption by organism of cell wall of other organism involved in symbiotic interaction disruption by organism of cellular component in other organism involved in symbiotic interaction Even if these processes exist for the symbiont and the host, does this grouping actually make sense? The processes aren't really related (other than x compound metabolism, which is fine)
1.0
obsolete "in other organism" terms - catabolism by organism of carbohydrate in other organism involved in symbiotic interaction catabolism by organism of cell wall cellulose in other organism involved in symbiotic interaction catabolism by organism of cell wall chitin in other organism involved in symbiotic interaction catabolism by organism of cell wall pectin in other organism involved in symbiotic interaction catabolism by organism of glucan in other organism involved in symbiotic interaction catabolism by organism of macromolecule in other organism involved in symbiotic interaction catabolism by organism of protein in other organism involved in symbiotic interaction catabolism by organism of xylan in other organism involved in symbiotic interaction catabolism of substance in other organism involved in symbiotic interaction disruption by organism of cell wall of other organism involved in symbiotic interaction disruption by organism of cellular component in other organism involved in symbiotic interaction Even if these processes exist for the symbiont and the host, does this grouping actually make sense? The processes aren't really related (other than x compound metabolism, which is fine)
process
obsolete in other organism terms catabolism by organism of carbohydrate in other organism involved in symbiotic interaction catabolism by organism of cell wall cellulose in other organism involved in symbiotic interaction catabolism by organism of cell wall chitin in other organism involved in symbiotic interaction catabolism by organism of cell wall pectin in other organism involved in symbiotic interaction catabolism by organism of glucan in other organism involved in symbiotic interaction catabolism by organism of macromolecule in other organism involved in symbiotic interaction catabolism by organism of protein in other organism involved in symbiotic interaction catabolism by organism of xylan in other organism involved in symbiotic interaction catabolism of substance in other organism involved in symbiotic interaction disruption by organism of cell wall of other organism involved in symbiotic interaction disruption by organism of cellular component in other organism involved in symbiotic interaction even if these processes exist for the symbiont and the host does this grouping actually make sense the processes aren t really related other than x compound metabolism which is fine
1
12,240
14,743,850,951
IssuesEvent
2021-01-07 14:30:20
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Toronto - Missing November Payments from SAB Portal
anc-process anp-1 ant-bug has attachment
In GitLab by @kdjstudios on Dec 5, 2019, 10:11 **Submitted by:** celeste.farquharson@answernet.com **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-12-05-85615/conversation **Server:** Internal **Client/Site:** Toronto **Account:** #062-SER6505 **Issue:** Please see below in respect to our client Fiba Canning, account #062-SER6505. Fiba Canning made attempts to pay their November invoice through the portal and they did not appear in SAB. We ran a Moneris report and saw the payments were made 4 times which still does not appear in Fiba Canning’s account. Please advise where this issue stems from and how we can have it remedied. NOTE: Here is the full Email Thread that was sent in: [original_message__1_.html](/uploads/78c7e33797d4d3b7a8cfcceb49bd4914/original_message__1_.html)
1.0
Toronto - Missing November Payments from SAB Portal - In GitLab by @kdjstudios on Dec 5, 2019, 10:11 **Submitted by:** celeste.farquharson@answernet.com **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-12-05-85615/conversation **Server:** Internal **Client/Site:** Toronto **Account:** #062-SER6505 **Issue:** Please see below in respect to our client Fiba Canning, account #062-SER6505. Fiba Canning made attempts to pay their November invoice through the portal and they did not appear in SAB. We ran a Moneris report and saw the payments were made 4 times which still does not appear in Fiba Canning’s account. Please advise where this issue stems from and how we can have it remedied. NOTE: Here is the full Email Thread that was sent in: [original_message__1_.html](/uploads/78c7e33797d4d3b7a8cfcceb49bd4914/original_message__1_.html)
process
toronto missing november payments from sab portal in gitlab by kdjstudios on dec submitted by celeste farquharson answernet com helpdesk server internal client site toronto account issue please see below in respect to our client fiba canning account fiba canning made attempts to pay their november invoice through the portal and they did not appear in sab we ran a moneris report and saw the payments were made times which still does not appear in fiba canning’s account please advise where this issue stems from and how we can have it remedied note here is the full email thread that was sent in uploads original message html
1
19,162
25,259,743,215
IssuesEvent
2022-11-15 21:32:35
benthosdev/benthos
https://api.github.com/repos/benthosdev/benthos
closed
Play with a WASM processor
enhancement processors effort: lower
We should do a bit of explorative work and see how easily a WASM processor using https://github.com/suborbital/reactr would be.
1.0
Play with a WASM processor - We should do a bit of explorative work and see how easily a WASM processor using https://github.com/suborbital/reactr would be.
process
play with a wasm processor we should do a bit of explorative work and see how easily a wasm processor using would be
1
453
2,894,235,971
IssuesEvent
2015-06-15 22:15:03
besasm/EMGAATS
https://api.github.com/repos/besasm/EMGAATS
closed
Persist property changes
#1 Priority data process UI
Dialogs work on business class, but do not save to the database. Partially depends on #122.
1.0
Persist property changes - Dialogs work on business class, but do not save to the database. Partially depends on #122.
process
persist property changes dialogs work on business class but do not save to the database partially depends on
1
244,165
7,871,273,726
IssuesEvent
2018-06-25 07:14:38
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
reopened
www.google.com - see bug description
browser-firefox priority-critical
<!-- @browser: Firefox 60.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:60.0) Gecko/20100101 Firefox/60.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://www.google.com/doodles/doodle-snow-games-day-8-lunar-new-year **Browser / Version**: Firefox 60.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes **Problem type**: Something else **Description**: Doodle stutters while playing; uses 70-80% GPU on Intel 620. Works fine in Chrome. **Steps to Reproduce**: 1. Click the play button on the doodle. gfx.webrender.all: false gfx.webrender.blob-images: 1 gfx.webrender.enabled: false image.mem.shared: 2 _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.google.com - see bug description - <!-- @browser: Firefox 60.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:60.0) Gecko/20100101 Firefox/60.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://www.google.com/doodles/doodle-snow-games-day-8-lunar-new-year **Browser / Version**: Firefox 60.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes **Problem type**: Something else **Description**: Doodle stutters while playing; uses 70-80% GPU on Intel 620. Works fine in Chrome. **Steps to Reproduce**: 1. Click the play button on the doodle. gfx.webrender.all: false gfx.webrender.blob-images: 1 gfx.webrender.enabled: false image.mem.shared: 2 _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
see bug description url browser version firefox operating system windows tested another browser yes problem type something else description doodle stutters while playing uses gpu on intel works fine in chrome steps to reproduce click the play button on the doodle gfx webrender all false gfx webrender blob images gfx webrender enabled false image mem shared from with ❤️
0
191,394
14,594,176,915
IssuesEvent
2020-12-20 03:51:18
github-vet/rangeloop-pointer-findings
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
closed
qingqibing/etcd: clientv3/integration/txn_test.go; 23 LoC
fresh small test
Found a possible issue in [qingqibing/etcd](https://www.github.com/qingqibing/etcd) at [clientv3/integration/txn_test.go](https://github.com/qingqibing/etcd/blob/0526f461e1d35f13a85836674951cb12c6bee187/clientv3/integration/txn_test.go#L118-L140) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > range-loop variable i used in defer or goroutine at line 124 [Click here to see the code in its original context.](https://github.com/qingqibing/etcd/blob/0526f461e1d35f13a85836674951cb12c6bee187/clientv3/integration/txn_test.go#L118-L140) <details> <summary>Click here to show the 23 line(s) of Go which triggered the analyzer.</summary> ```go for i := range thenOps { clus.Members[0].Stop(t) <-clus.Members[0].StopNotify() donec := make(chan struct{}, 1) go func() { _, err := kv.Txn(context.TODO()).Then(thenOps[i]...).Commit() if err != nil { t.Errorf("expected response, got error %v", err) } donec <- struct{}{} }() // wait for txn to fail on disconnect time.Sleep(100 * time.Millisecond) // restart node; client should resume clus.Members[0].Restart(t) select { case <-donec: case <-time.After(2 * clus.Members[1].ServerConfig.ReqTimeout()): t.Fatalf("waited too long") } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 0526f461e1d35f13a85836674951cb12c6bee187
1.0
qingqibing/etcd: clientv3/integration/txn_test.go; 23 LoC - Found a possible issue in [qingqibing/etcd](https://www.github.com/qingqibing/etcd) at [clientv3/integration/txn_test.go](https://github.com/qingqibing/etcd/blob/0526f461e1d35f13a85836674951cb12c6bee187/clientv3/integration/txn_test.go#L118-L140) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > range-loop variable i used in defer or goroutine at line 124 [Click here to see the code in its original context.](https://github.com/qingqibing/etcd/blob/0526f461e1d35f13a85836674951cb12c6bee187/clientv3/integration/txn_test.go#L118-L140) <details> <summary>Click here to show the 23 line(s) of Go which triggered the analyzer.</summary> ```go for i := range thenOps { clus.Members[0].Stop(t) <-clus.Members[0].StopNotify() donec := make(chan struct{}, 1) go func() { _, err := kv.Txn(context.TODO()).Then(thenOps[i]...).Commit() if err != nil { t.Errorf("expected response, got error %v", err) } donec <- struct{}{} }() // wait for txn to fail on disconnect time.Sleep(100 * time.Millisecond) // restart node; client should resume clus.Members[0].Restart(t) select { case <-donec: case <-time.After(2 * clus.Members[1].ServerConfig.ReqTimeout()): t.Fatalf("waited too long") } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 0526f461e1d35f13a85836674951cb12c6bee187
non_process
qingqibing etcd integration txn test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable i used in defer or goroutine at line click here to show the line s of go which triggered the analyzer go for i range thenops clus members stop t clus members stopnotify donec make chan struct go func err kv txn context todo then thenops commit if err nil t errorf expected response got error v err donec struct wait for txn to fail on disconnect time sleep time millisecond restart node client should resume clus members restart t select case donec case time after clus members serverconfig reqtimeout t fatalf waited too long leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
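The range-loop capture issue flagged in the record above (loop variable `i` used inside a goroutine) has a standard fix in Go versions before 1.22: shadow the loop variable inside the loop body so each goroutine closes over its own copy. The sketch below is a minimal, hypothetical illustration of that fix — the `collect` helper and its inputs are invented for the example and are not part of the etcd test code.

```go
package main

import (
	"fmt"
	"sync"
)

// collect fans each element out to its own goroutine and counts
// the distinct values received back. Without the shadowing line,
// all goroutines could observe the same (final) value of i.
func collect(items []string) int {
	var wg sync.WaitGroup
	out := make(chan string, len(items))

	for i := range items {
		i := i // shadow the loop variable: each goroutine captures its own copy
		wg.Add(1)
		go func() {
			defer wg.Done()
			out <- items[i]
		}()
	}
	wg.Wait()
	close(out)

	seen := map[string]bool{}
	for s := range out {
		seen[s] = true
	}
	return len(seen)
}

func main() {
	fmt.Println(collect([]string{"a", "b", "c"}))
}
```

An equivalent fix is to pass `i` as a parameter to the goroutine's function literal; in the etcd snippet the loop also blocks on `select` before the next iteration, which is why such instances are often classified "Mitigated" rather than "Bug".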
18,343
24,466,121,746
IssuesEvent
2022-10-07 15:09:27
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Suggest 'ubuntu-latest' for the build agent image
devops/prod doc-bug Pri1 devops-cicd-process/tech
The pool.vmImage should be 'ubuntu-latest' as recommended by the pipeline when it runs. I also found it unclear why I needed vmImage and container to be ubuntu-18.04 and ubuntu:18.04 respectively. Most build systems just have you select the docker **image**. I think it should clarify that the pool specifies the agent vmImage to use to host the docker container, and container specifies the docker container and image to run the job in. --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: 3339a2e0-be29-1363-f588-b231d4472c02 * Version Independent ID: 72dd11a3-704d-d0fd-6dfa-cf49f3352de3 * Content: [Container Jobs in Azure Pipelines and TFS - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops) * Content Source: [docs/pipelines/process/container-phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/container-phases.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Suggest 'ubuntu-latest' for the build agent image - The pool.vmImage should be 'ubuntu-latest' as recommended by the pipeline when it runs. I also found it unclear why I needed vmImage and container to be ubuntu-18.04 and ubuntu:18.04 respectively. Most build systems just have you select the docker **image**. I think it should clarify that the pool specifies the agent vmImage to use to host the docker container, and container specifies the docker container and image to run the job in. --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: 3339a2e0-be29-1363-f588-b231d4472c02 * Version Independent ID: 72dd11a3-704d-d0fd-6dfa-cf49f3352de3 * Content: [Container Jobs in Azure Pipelines and TFS - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops) * Content Source: [docs/pipelines/process/container-phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/container-phases.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
suggest ubuntu latest for the build agent image the pool vmimage should be ubuntu latest as recommended by the pipeline when it runs i also found it unclear why i needed vmimage and container to be ubuntu and ubuntu respectively most build systems just have you select the docker image i think it should clarify that the pool specifies the agent vmimage to use to host the docker container and container specifies the docker container and image to run the job in document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
333,746
10,131,000,936
IssuesEvent
2019-08-01 18:23:13
arcticicestudio/nord-jetbrains
https://api.github.com/repos/arcticicestudio/nord-jetbrains
closed
GoLand 2019.2 display wrong color for go code
context-syntax priority-high scope-compatibility scope-ux status-accepted target-goland target-intellij-idea type-feature
I tried Nord plugin with GoLand 2019.2 but couldn't get north-bluish color like image demo at README.md I felt like the color still come from Dracula instead of Nort <img width="808" alt="Screen Shot 2019-07-31 at 16 09 42" src="https://user-images.githubusercontent.com/8962973/62199650-16d12f00-b3ae-11e9-83ec-fcc4a06e4dfb.png"> <img width="737" alt="Screen Shot 2019-07-31 at 16 12 27" src="https://user-images.githubusercontent.com/8962973/62199651-16d12f00-b3ae-11e9-94cc-577170b2d4d9.png">
1.0
GoLand 2019.2 display wrong color for go code - I tried Nord plugin with GoLand 2019.2 but couldn't get north-bluish color like image demo at README.md I felt like the color still come from Dracula instead of Nort <img width="808" alt="Screen Shot 2019-07-31 at 16 09 42" src="https://user-images.githubusercontent.com/8962973/62199650-16d12f00-b3ae-11e9-83ec-fcc4a06e4dfb.png"> <img width="737" alt="Screen Shot 2019-07-31 at 16 12 27" src="https://user-images.githubusercontent.com/8962973/62199651-16d12f00-b3ae-11e9-94cc-577170b2d4d9.png">
non_process
goland display wrong color for go code i tried nord plugin with goland but couldn t get north bluish color like image demo at readme md i felt like the color still come from dracula instead of nort img width alt screen shot at src img width alt screen shot at src
0
376,901
26,222,286,645
IssuesEvent
2023-01-04 15:43:48
tidyverse/tidyr
https://api.github.com/repos/tidyverse/tidyr
closed
Document which functions work with grouped data
documentation
On the `?fill` help page, there is no statement that `fill` knows to respect groups from `dplyr::group_by`. (The last example does demonstrate this.) Same on the `?expand` and `?complete` help pages, possibly other functions too. It would be great if there was a standard sentence or phrase that was part of the `Description` section for all non-`dplyr` functions that know what to do with a `grouped_df`. Perhaps "Respects tibble groupings". I think this would both serve as a good reference (I constructed a little example before posting thing to check on `expand`... I was pretty sure it respected groups but needed to check to see), and also for those new users who read help pages it might help cement the concept that `group_by` isn't magic that works with every function, only specially constructed functions.
1.0
Document which functions work with grouped data - On the `?fill` help page, there is no statement that `fill` knows to respect groups from `dplyr::group_by`. (The last example does demonstrate this.) Same on the `?expand` and `?complete` help pages, possibly other functions too. It would be great if there was a standard sentence or phrase that was part of the `Description` section for all non-`dplyr` functions that know what to do with a `grouped_df`. Perhaps "Respects tibble groupings". I think this would both serve as a good reference (I constructed a little example before posting thing to check on `expand`... I was pretty sure it respected groups but needed to check to see), and also for those new users who read help pages it might help cement the concept that `group_by` isn't magic that works with every function, only specially constructed functions.
non_process
document which functions work with grouped data on the fill help page there is no statement that fill knows to respect groups from dplyr group by the last example does demonstrate this same on the expand and complete help pages possibly other functions too it would be great if there was a standard sentence or phrase that was part of the description section for all non dplyr functions that know what to do with a grouped df perhaps respects tibble groupings i think this would both serve as a good reference i constructed a little example before posting thing to check on expand i was pretty sure it respected groups but needed to check to see and also for those new users who read help pages it might help cement the concept that group by isn t magic that works with every function only specially constructed functions
0
22,186
30,733,558,115
IssuesEvent
2023-07-28 05:19:22
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
How to support failover in k8sattributes
question Stale processor/k8sattributes closed as inactive
### Component(s) processor/k8sattributes ### Describe the issue you're reporting Our spans have an attribute called `host`, which might be pod's name or hostname, which is a little different in our system. In order to enrich spans with k8s metadata, we have to use one k8sattributes processor to enrich based on pod's name first and then another processor to enrich based on hostname. In the end, there are two k8sattributes processors and two copies of data. It's inefficient for memory. So can we support something like `if no pod found based on pod'name, then query pod based on pod's hostname`
1.0
How to support failover in k8sattributes - ### Component(s) processor/k8sattributes ### Describe the issue you're reporting Our spans have an attribute called `host`, which might be pod's name or hostname, which is a little different in our system. In order to enrich spans with k8s metadata, we have to use one k8sattributes processor to enrich based on pod's name first and then another processor to enrich based on hostname. In the end, there are two k8sattributes processors and two copies of data. It's inefficient for memory. So can we support something like `if no pod found based on pod'name, then query pod based on pod's hostname`
process
how to support failover in component s processor describe the issue you re reporting our spans have an attribute called host which might be pod s name or hostname which is a little different in our system in order to enrich spans with metadata we have to use one processor to enrich based on pod s name first and then another processor to enrich based on hostname in the end there are two processors and two copies of data it s inefficient for memory so can we support something like if no pod found based on pod name then query pod based on pod s hostname
1
6,779
9,915,341,189
IssuesEvent
2019-06-28 16:35:44
spring-projects/spring-hateoas
https://api.github.com/repos/spring-projects/spring-hateoas
closed
RepresentationModelAssemblerSupport does not expose controllerClass/resourceType to subclasses
process: in progress
I would like to add additional links in my toResource method of a subclass of ResourceAssemblerSupport, but the controllerClass needed to construct the link is not visible. It's easy to workaround by adding a controllerClass field in the subclass as well, but it I think it would be nicer if there were protected getControllerClass()/getResourceType() methods for subclasses to use.
1.0
RepresentationModelAssemblerSupport does not expose controllerClass/resourceType to subclasses - I would like to add additional links in my toResource method of a subclass of ResourceAssemblerSupport, but the controllerClass needed to construct the link is not visible. It's easy to workaround by adding a controllerClass field in the subclass as well, but it I think it would be nicer if there were protected getControllerClass()/getResourceType() methods for subclasses to use.
process
representationmodelassemblersupport does not expose controllerclass resourcetype to subclasses i would like to add additional links in my toresource method of a subclass of resourceassemblersupport but the controllerclass needed to construct the link is not visible it s easy to workaround by adding a controllerclass field in the subclass as well but it i think it would be nicer if there were protected getcontrollerclass getresourcetype methods for subclasses to use
1
6,094
8,951,745,250
IssuesEvent
2019-01-25 14:48:17
enKryptIO/ethvm
https://api.github.com/repos/enKryptIO/ethvm
closed
Premine balances are not generating balances within Mongo
bug milestone:1 priority:high project:processing
When testing I noticed that Premine balances from the genesis block were not resulting in corresponding entries within the balances collection.
1.0
Premine balances are not generating balances within Mongo - When testing I noticed that Premine balances from the genesis block were not resulting in corresponding entries within the balances collection.
process
premine balances are not generating balances within mongo when testing i noticed that premine balances from the genesis block were not resulting in corresponding entries within the balances collection
1
7,679
10,762,312,266
IssuesEvent
2019-10-31 23:13:28
ArctosDB/new-collections
https://api.github.com/repos/ArctosDB/new-collections
reopened
JSNM - Draft MOU
MOU draft in process
Work with new collection to complete Draft MOU, answer any questions about migration, Arctos operating procedures, and costs; (download sample template include collection contact info). Place draft in Dropbox, share with new collection contact, AWG Chair, AWG Vice Chair, and any collection Mentors Send email to New collection Contact and copy the above.
1.0
JSNM - Draft MOU - Work with new collection to complete Draft MOU, answer any questions about migration, Arctos operating procedures, and costs; (download sample template include collection contact info). Place draft in Dropbox, share with new collection contact, AWG Chair, AWG Vice Chair, and any collection Mentors Send email to New collection Contact and copy the above.
process
jsnm draft mou work with new collection to complete draft mou answer any questions about migration arctos operating procedures and costs download sample template include collection contact info place draft in dropbox share with new collection contact awg chair awg vice chair and any collection mentors send email to new collection contact and copy the above
1
435,003
30,481,306,866
IssuesEvent
2023-07-17 20:35:37
ethanbaker/cpick
https://api.github.com/repos/ethanbaker/cpick
opened
Update README
documentation
Update the README to a more modern and fleshed out version. Also address issue #2
1.0
Update README - Update the README to a more modern and fleshed out version. Also address issue #2
non_process
update readme update the readme to a more modern and fleshed out version also address issue
0
597,517
18,165,627,108
IssuesEvent
2021-09-27 14:21:04
carbon-design-system/carbon-addons-iot-react
https://api.github.com/repos/carbon-design-system/carbon-addons-iot-react
closed
[Table] Add support for moving columns to TableModel
type: enhancement :bulb: status: needs triage :mag: status: needs priority :inbox_tray: package: angular
### What package is this for? - [ ] React - [x] Angular ### Summary Needs to do what the inline documentation for the `moveColumn` function says.
1.0
[Table] Add support for moving columns to TableModel - ### What package is this for? - [ ] React - [x] Angular ### Summary Needs to do what the inline documentation for the `moveColumn` function says.
non_process
add support for moving columns to tablemodel what package is this for react angular summary needs to do what the inline documentation for the movecolumn function says
0
9,567
12,519,615,152
IssuesEvent
2020-06-03 14:41:36
code4romania/expert-consultation-client
https://api.github.com/repos/code4romania/expert-consultation-client
closed
Add comments to document breakdown unit
angular document processing documents enhancement
As a user of the legal consultation platform, I want to be able to add comments to the document breakdown units I have been assigned to. Only one level of replies will be available, replies to the original comment. ![RU - lege - integral - cu comentarii - open](https://user-images.githubusercontent.com/15039873/58745016-8f902b00-83ff-11e9-9e49-5e6d9f71da47.png)
1.0
Add comments to document breakdown unit - As a user of the legal consultation platform, I want to be able to add comments to the document breakdown units I have been assigned to. Only one level of replies will be available, replies to the original comment. ![RU - lege - integral - cu comentarii - open](https://user-images.githubusercontent.com/15039873/58745016-8f902b00-83ff-11e9-9e49-5e6d9f71da47.png)
process
add comments to document breakdown unit as a user of the legal consultation platform i want to be able to add comments to the document breakdown units i have been assigned to only one level of replies will be available replies to the original comment
1
696,340
23,898,144,363
IssuesEvent
2022-09-08 16:17:39
ntop/ntopng
https://api.github.com/repos/ntop/ntopng
opened
Enhance ICMP Traffic Report
Feature Request Priority Ticket
<img width="994" alt="Screenshot 2022-09-08 at 18 15 55" src="https://user-images.githubusercontent.com/4493366/189173337-b3d839d3-4601-46d6-87c5-645ce82ed12b.png"> The above report is very limited. We should at least - have a real-time ICMP report with top flows - add top X hosts that created ICMP traffic
1.0
Enhance ICMP Traffic Report - <img width="994" alt="Screenshot 2022-09-08 at 18 15 55" src="https://user-images.githubusercontent.com/4493366/189173337-b3d839d3-4601-46d6-87c5-645ce82ed12b.png"> The above report is very limited. We should at least - have a real-time ICMP report with top flows - add top X hosts that created ICMP traffic
non_process
enhance icmp traffic report img width alt screenshot at src the above report is very limited we should at least have a real time icmp report with top flows add top x hosts that created icmp traffic
0
177,054
21,464,531,334
IssuesEvent
2022-04-26 01:19:25
EcommEasy/EcommEasy-Admin
https://api.github.com/repos/EcommEasy/EcommEasy-Admin
closed
CVE-2018-11693 (High) detected in node-sass-v4.12.0 - autoclosed
security vulnerability
## CVE-2018-11693 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sassv4.12.0</b></p></summary> <p> <p>:rainbow: Node.js bindings to libsass</p> <p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p> </p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (125)</summary> <p></p> <p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p> <p> - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/expand.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/color_maps.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_util.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/utf8/unchecked.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/output.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_values.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/util.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/emitter.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/lexer.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/test/test_node.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/plugins.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/include/sass/base.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/position.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/subset_map.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/operation.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/error_handling.hpp - 
/EcommEasy-Admin/node_modules/node-sass/src/custom_importer_bridge.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/contrib/plugin.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/functions.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/test/test_superselector.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/eval.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/utf8_string.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_context_wrapper.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/error_handling.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/node.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/parser.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/subset_map.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/emitter.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/listize.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/ast.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_functions.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/memory/SharedPtr.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/output.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/check_nesting.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/functions.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/cssize.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/prelexer.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/paths.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/inspect.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/color.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/test/test_unification.cpp - 
/EcommEasy-Admin/node_modules/node-sass/src/libsass/src/values.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_util.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/source_map.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/list.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/check_nesting.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/json.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/units.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/units.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/context.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/utf8/checked.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/listize.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/string.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/prelexer.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/context.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/boolean.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/include/sass2scss.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/eval.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/expand.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/factory.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/operators.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/boolean.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/source_map.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/value.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/utf8_string.cpp - /EcommEasy-Admin/node_modules/node-sass/src/callback_bridge.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/file.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/node.hpp - 
/EcommEasy-Admin/node_modules/node-sass/src/libsass/src/environment.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/extend.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_context.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/operators.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/constants.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/ast_fwd_decl.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/parser.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/constants.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/list.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/cssize.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/include/sass/functions.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/util.cpp - /EcommEasy-Admin/node_modules/node-sass/src/custom_function_bridge.cpp - /EcommEasy-Admin/node_modules/node-sass/src/custom_importer_bridge.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/bind.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/inspect.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_functions.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/backtrace.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/extend.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/sass_value_wrapper.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/debugger.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/cencode.c - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/base64vlq.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/number.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/color.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/c99func.c - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/position.cpp - 
/EcommEasy-Admin/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_values.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/include/sass/values.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/test/test_subset_map.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass2scss.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/null.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/ast.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/include/sass/context.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/to_c.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/to_value.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/color_maps.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_context_wrapper.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/script/test-leaks.pl - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/lexer.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/memory/SharedPtr.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/to_c.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/map.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/to_value.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/b64/encode.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/file.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/environment.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/plugins.hpp - /EcommEasy-Admin/node_modules/node-sass/src/binding.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_context.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/debug.hpp </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was 
discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::Prelexer::skip_over_scopes which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service. <p>Publish Date: 2018-06-04 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11693>CVE-2018-11693</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-11693 (High) detected in node-sass-v4.12.0 - autoclosed - ## CVE-2018-11693 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sassv4.12.0</b></p></summary> <p> <p>:rainbow: Node.js bindings to libsass</p> <p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p> </p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (125)</summary> <p></p> <p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p> <p> - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/expand.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/color_maps.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_util.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/utf8/unchecked.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/output.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_values.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/util.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/emitter.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/lexer.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/test/test_node.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/plugins.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/include/sass/base.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/position.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/subset_map.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/operation.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp - 
/EcommEasy-Admin/node_modules/node-sass/src/libsass/src/error_handling.hpp - /EcommEasy-Admin/node_modules/node-sass/src/custom_importer_bridge.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/contrib/plugin.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/functions.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/test/test_superselector.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/eval.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/utf8_string.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_context_wrapper.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/error_handling.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/node.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/parser.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/subset_map.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/emitter.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/listize.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/ast.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_functions.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/memory/SharedPtr.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/output.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/check_nesting.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/functions.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/cssize.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/prelexer.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/paths.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/inspect.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/color.cpp - 
/EcommEasy-Admin/node_modules/node-sass/src/libsass/test/test_unification.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/values.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_util.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/source_map.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/list.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/check_nesting.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/json.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/units.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/units.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/context.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/utf8/checked.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/listize.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/string.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/prelexer.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/context.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/boolean.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/include/sass2scss.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/eval.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/expand.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/factory.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/operators.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/boolean.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/source_map.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/value.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/utf8_string.cpp - /EcommEasy-Admin/node_modules/node-sass/src/callback_bridge.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/file.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass.cpp - 
/EcommEasy-Admin/node_modules/node-sass/src/libsass/src/node.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/environment.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/extend.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_context.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/operators.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/constants.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/ast_fwd_decl.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/parser.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/constants.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/list.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/cssize.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/include/sass/functions.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/util.cpp - /EcommEasy-Admin/node_modules/node-sass/src/custom_function_bridge.cpp - /EcommEasy-Admin/node_modules/node-sass/src/custom_importer_bridge.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/bind.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/inspect.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_functions.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/backtrace.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/extend.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/sass_value_wrapper.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/debugger.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/cencode.c - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/base64vlq.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/number.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/color.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/c99func.c - 
/EcommEasy-Admin/node_modules/node-sass/src/libsass/src/position.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_values.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/include/sass/values.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/test/test_subset_map.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass2scss.cpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/null.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/ast.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/include/sass/context.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/to_c.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/to_value.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/color_maps.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_context_wrapper.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/script/test-leaks.pl - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/lexer.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/memory/SharedPtr.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/to_c.hpp - /EcommEasy-Admin/node_modules/node-sass/src/sass_types/map.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/to_value.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/b64/encode.h - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/file.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/environment.hpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/plugins.hpp - /EcommEasy-Admin/node_modules/node-sass/src/binding.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/sass_context.cpp - /EcommEasy-Admin/node_modules/node-sass/src/libsass/src/debug.hpp </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' 
width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::Prelexer::skip_over_scopes which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service. <p>Publish Date: 2018-06-04 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11693>CVE-2018-11693</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in node sass autoclosed cve high severity vulnerability vulnerable library node rainbow node js bindings to libsass library home page a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries ecommeasy admin node modules node sass src libsass src expand hpp ecommeasy admin node modules node sass src libsass src color maps cpp ecommeasy admin node modules node sass src libsass src sass util hpp ecommeasy admin node modules node sass src libsass src unchecked h ecommeasy admin node modules node sass src libsass src output hpp ecommeasy admin node modules node sass src libsass src sass values hpp ecommeasy admin node modules node sass src libsass src util hpp ecommeasy admin node modules node sass src libsass src emitter hpp ecommeasy admin node modules node sass src libsass src lexer cpp ecommeasy admin node modules node sass src libsass test test node cpp ecommeasy admin node modules node sass src libsass src plugins cpp ecommeasy admin node modules node sass src libsass include sass base h ecommeasy admin node modules node sass src libsass src position hpp ecommeasy admin node modules node sass src libsass src subset map hpp ecommeasy admin node modules node sass src libsass src operation hpp ecommeasy admin node modules node sass src libsass src remove placeholders cpp ecommeasy admin node modules node sass src libsass src error handling hpp ecommeasy admin node modules node sass src custom importer bridge cpp ecommeasy admin node modules node sass src libsass contrib plugin cpp ecommeasy admin node modules node sass src libsass src functions hpp ecommeasy admin node modules node sass src libsass test test superselector cpp ecommeasy admin node modules node sass src libsass src eval hpp ecommeasy admin node modules node sass src libsass src string hpp ecommeasy admin node modules node sass src sass context wrapper h ecommeasy 
admin node modules node sass src libsass src error handling cpp ecommeasy admin node modules node sass src libsass src node cpp ecommeasy admin node modules node sass src libsass src parser cpp ecommeasy admin node modules node sass src libsass src subset map cpp ecommeasy admin node modules node sass src libsass src emitter cpp ecommeasy admin node modules node sass src libsass src listize cpp ecommeasy admin node modules node sass src libsass src ast hpp ecommeasy admin node modules node sass src libsass src sass functions hpp ecommeasy admin node modules node sass src libsass src memory sharedptr cpp ecommeasy admin node modules node sass src libsass src output cpp ecommeasy admin node modules node sass src libsass src check nesting cpp ecommeasy admin node modules node sass src libsass src ast def macros hpp ecommeasy admin node modules node sass src libsass src functions cpp ecommeasy admin node modules node sass src libsass src cssize hpp ecommeasy admin node modules node sass src libsass src prelexer cpp ecommeasy admin node modules node sass src libsass src paths hpp ecommeasy admin node modules node sass src libsass src ast fwd decl hpp ecommeasy admin node modules node sass src libsass src inspect hpp ecommeasy admin node modules node sass src sass types color cpp ecommeasy admin node modules node sass src libsass test test unification cpp ecommeasy admin node modules node sass src libsass src values cpp ecommeasy admin node modules node sass src libsass src sass util cpp ecommeasy admin node modules node sass src libsass src source map hpp ecommeasy admin node modules node sass src sass types list h ecommeasy admin node modules node sass src libsass src check nesting hpp ecommeasy admin node modules node sass src libsass src json cpp ecommeasy admin node modules node sass src libsass src units cpp ecommeasy admin node modules node sass src libsass src units hpp ecommeasy admin node modules node sass src libsass src context cpp ecommeasy admin node 
modules node sass src libsass src checked h ecommeasy admin node modules node sass src libsass src listize hpp ecommeasy admin node modules node sass src sass types string cpp ecommeasy admin node modules node sass src libsass src prelexer hpp ecommeasy admin node modules node sass src libsass src context hpp ecommeasy admin node modules node sass src sass types boolean h ecommeasy admin node modules node sass src libsass include h ecommeasy admin node modules node sass src libsass src eval cpp ecommeasy admin node modules node sass src libsass src expand cpp ecommeasy admin node modules node sass src sass types factory cpp ecommeasy admin node modules node sass src libsass src operators cpp ecommeasy admin node modules node sass src sass types boolean cpp ecommeasy admin node modules node sass src libsass src source map cpp ecommeasy admin node modules node sass src sass types value h ecommeasy admin node modules node sass src libsass src string cpp ecommeasy admin node modules node sass src callback bridge h ecommeasy admin node modules node sass src libsass src file cpp ecommeasy admin node modules node sass src libsass src sass cpp ecommeasy admin node modules node sass src libsass src node hpp ecommeasy admin node modules node sass src libsass src environment cpp ecommeasy admin node modules node sass src libsass src extend hpp ecommeasy admin node modules node sass src libsass src sass context hpp ecommeasy admin node modules node sass src libsass src operators hpp ecommeasy admin node modules node sass src libsass src constants hpp ecommeasy admin node modules node sass src libsass src sass hpp ecommeasy admin node modules node sass src libsass src ast fwd decl cpp ecommeasy admin node modules node sass src libsass src parser hpp ecommeasy admin node modules node sass src libsass src constants cpp ecommeasy admin node modules node sass src sass types list cpp ecommeasy admin node modules node sass src libsass src cssize cpp ecommeasy admin node modules node 
sass src libsass include sass functions h ecommeasy admin node modules node sass src libsass src util cpp ecommeasy admin node modules node sass src custom function bridge cpp ecommeasy admin node modules node sass src custom importer bridge h ecommeasy admin node modules node sass src libsass src bind cpp ecommeasy admin node modules node sass src libsass src inspect cpp ecommeasy admin node modules node sass src libsass src sass functions cpp ecommeasy admin node modules node sass src libsass src backtrace cpp ecommeasy admin node modules node sass src libsass src extend cpp ecommeasy admin node modules node sass src sass types sass value wrapper h ecommeasy admin node modules node sass src libsass src debugger hpp ecommeasy admin node modules node sass src libsass src cencode c ecommeasy admin node modules node sass src libsass src cpp ecommeasy admin node modules node sass src sass types number cpp ecommeasy admin node modules node sass src sass types color h ecommeasy admin node modules node sass src libsass src c ecommeasy admin node modules node sass src libsass src position cpp ecommeasy admin node modules node sass src libsass src remove placeholders hpp ecommeasy admin node modules node sass src libsass src sass values cpp ecommeasy admin node modules node sass src libsass include sass values h ecommeasy admin node modules node sass src libsass test test subset map cpp ecommeasy admin node modules node sass src libsass src cpp ecommeasy admin node modules node sass src sass types null cpp ecommeasy admin node modules node sass src libsass src ast cpp ecommeasy admin node modules node sass src libsass include sass context h ecommeasy admin node modules node sass src libsass src to c cpp ecommeasy admin node modules node sass src libsass src to value hpp ecommeasy admin node modules node sass src libsass src color maps hpp ecommeasy admin node modules node sass src sass context wrapper cpp ecommeasy admin node modules node sass src libsass script test leaks 
pl ecommeasy admin node modules node sass src libsass src lexer hpp ecommeasy admin node modules node sass src libsass src memory sharedptr hpp ecommeasy admin node modules node sass src libsass src to c hpp ecommeasy admin node modules node sass src sass types map cpp ecommeasy admin node modules node sass src libsass src to value cpp ecommeasy admin node modules node sass src libsass src encode h ecommeasy admin node modules node sass src libsass src file hpp ecommeasy admin node modules node sass src libsass src environment hpp ecommeasy admin node modules node sass src libsass src plugins hpp ecommeasy admin node modules node sass src binding cpp ecommeasy admin node modules node sass src libsass src sass context cpp ecommeasy admin node modules node sass src libsass src debug hpp vulnerability details an issue was discovered in libsass through an out of bounds read of a memory region was found in the function sass prelexer skip over scopes which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource
0
21,958
30,454,236,357
IssuesEvent
2023-07-16 17:28:05
lynnandtonic/nestflix.fun
https://api.github.com/repos/lynnandtonic/nestflix.fun
closed
Add Detective Lucerne from "Columbo" (Screenshots added)
suggested title in process
Please add as much of the following info as you can: Title: Detective Lucerne Type (film/tv show): TV show* - detective mystery (During the beginning, it's said it's a show, but the part where Ward is walking into the room, one of the crew says they're filming a movie. I'm thinking that part might be the movie based on the show. Credit as you see fit.) Film or show in which it appears: Columbo Is the parent film/show streaming anywhere? Yes - Peacock & Amazon Prime About when in the parent film/show does it appear? Ep. 6x01 = "Fade In to Murder" Actual footage of the film/show can be seen (yes/no)? Timestamp: - Beginning - 1:39 - 18:43 - 19:49 Synopsis (written by me): The shrewd and intelligent Detective Lucerne is a modern day Sherlock Holmes, solving numerous murders among affluent members of high society. Starring: Ward Fowler Producer: Claire Daley ![Detective Lucerne 1](https://user-images.githubusercontent.com/88982629/190523040-f82412ab-886e-45c8-ab6a-f7e53a6a861b.jpg) ![Detective Lucerne 2](https://user-images.githubusercontent.com/88982629/190523041-62f340e8-7212-4ae6-8f71-7664cade3a62.jpg) ![Detective Lucerne 3](https://user-images.githubusercontent.com/88982629/190523043-df937515-6ac4-49db-9889-8d5e299239f4.jpg) ![Detective Lucerne 4](https://user-images.githubusercontent.com/88982629/190523044-86cd3080-bc97-400b-b605-745e43f013df.jpg) ![Detective Lucerne 5](https://user-images.githubusercontent.com/88982629/190523046-8e3fbf18-5755-46a1-aa09-92e89cf98fce.jpg) ![Detective Lucerne 6](https://user-images.githubusercontent.com/88982629/190523047-6deafc75-148d-4742-b630-94f72a826953.jpg) ![Detective Lucerne 7](https://user-images.githubusercontent.com/88982629/190523048-1ca6f6dc-966e-4dd1-9db1-d212ec07089b.jpg) ![Detective Lucerne 8](https://user-images.githubusercontent.com/88982629/190523049-5dbb8443-8325-46eb-9276-7fc094193abf.jpg)
1.0
Add Detective Lucerne from "Columbo" (Screenshots added) - Please add as much of the following info as you can: Title: Detective Lucerne Type (film/tv show): TV show* - detective mystery (During the beginning, it's said it's a show, but the part where Ward is walking into the room, one of the crew says they're filming a movie. I'm thinking that part might be the movie based on the show. Credit as you see fit.) Film or show in which it appears: Columbo Is the parent film/show streaming anywhere? Yes - Peacock & Amazon Prime About when in the parent film/show does it appear? Ep. 6x01 = "Fade In to Murder" Actual footage of the film/show can be seen (yes/no)? Timestamp: - Beginning - 1:39 - 18:43 - 19:49 Synopsis (written by me): The shrewd and intelligent Detective Lucerne is a modern day Sherlock Holmes, solving numerous murders among affluent members of high society. Starring: Ward Fowler Producer: Claire Daley ![Detective Lucerne 1](https://user-images.githubusercontent.com/88982629/190523040-f82412ab-886e-45c8-ab6a-f7e53a6a861b.jpg) ![Detective Lucerne 2](https://user-images.githubusercontent.com/88982629/190523041-62f340e8-7212-4ae6-8f71-7664cade3a62.jpg) ![Detective Lucerne 3](https://user-images.githubusercontent.com/88982629/190523043-df937515-6ac4-49db-9889-8d5e299239f4.jpg) ![Detective Lucerne 4](https://user-images.githubusercontent.com/88982629/190523044-86cd3080-bc97-400b-b605-745e43f013df.jpg) ![Detective Lucerne 5](https://user-images.githubusercontent.com/88982629/190523046-8e3fbf18-5755-46a1-aa09-92e89cf98fce.jpg) ![Detective Lucerne 6](https://user-images.githubusercontent.com/88982629/190523047-6deafc75-148d-4742-b630-94f72a826953.jpg) ![Detective Lucerne 7](https://user-images.githubusercontent.com/88982629/190523048-1ca6f6dc-966e-4dd1-9db1-d212ec07089b.jpg) ![Detective Lucerne 8](https://user-images.githubusercontent.com/88982629/190523049-5dbb8443-8325-46eb-9276-7fc094193abf.jpg)
process
add detective lucerne from columbo screenshots added please add as much of the following info as you can title detective lucerne type film tv show tv show detective mystery during the beginning it s said it s a show but the part where ward is walking into the room one of the crew says they re filming a movie i m thinking that part might be the movie based on the show credit as you see fit film or show in which it appears columbo is the parent film show streaming anywhere yes peacock amazon prime about when in the parent film show does it appear ep fade in to murder actual footage of the film show can be seen yes no timestamp beginning synopsis written by me the shrewd and intelligent detective lucerne is a modern day sherlock holmes solving numerous murders among affluent members of high society starring ward fowler producer claire daley
1
161,600
13,854,584,453
IssuesEvent
2020-10-15 09:42:58
redhairedcelt/AIS_project
https://api.github.com/repos/redhairedcelt/AIS_project
closed
Improvements to DBSCAN
documentation enhancement
- Likely DBSCAN is the best implementation. - Look at OPTICS though since it might find smaller clusters better - Look at new mthod for finding optimal value of epsilon (https://towardsdatascience.com/machine-learning-clustering-dbscan-determine-the-optimal-value-for-epsilon-eps-python-example-3100091cfbc) - Would Gaussian mixture models successfully identify anamalous ship traffic?
1.0
Improvements to DBSCAN - - Likely DBSCAN is the best implementation. - Look at OPTICS though since it might find smaller clusters better - Look at new mthod for finding optimal value of epsilon (https://towardsdatascience.com/machine-learning-clustering-dbscan-determine-the-optimal-value-for-epsilon-eps-python-example-3100091cfbc) - Would Gaussian mixture models successfully identify anamalous ship traffic?
non_process
improvements to dbscan likely dbscan is the best implementation look at optics though since it might find smaller clusters better look at new mthod for finding optimal value of epsilon would gaussian mixture models successfully identify anamalous ship traffic
0
43,997
13,046,219,285
IssuesEvent
2020-07-29 08:38:19
orhanarifoglu/bitti
https://api.github.com/repos/orhanarifoglu/bitti
opened
CVE-2017-15708 (High) detected in commons-collections-3.2.jar
security vulnerability
## CVE-2017-15708 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.jar</b></p></summary> <p>Types that extend and augment the Java Collections Framework.</p> <p>Library home page: <a href="http://jakarta.apache.org/commons/collections/">http://jakarta.apache.org/commons/collections/</a></p> <p>Path to vulnerable library: _depth_0/bitti/build/distributions/gradleproject2/gradleproject2/lib/commons-collections-3.2.jar,canner/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2/f951934aa5ae5a88d7e6dfaa6d32307d834a88be/commons-collections-3.2.jar</p> <p> Dependency Hierarchy: - :x: **commons-collections-3.2.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/orhanarifoglu/bitti/commit/b9de9df64013494e4cad4e932f22f2c53e6e45c0">b9de9df64013494e4cad4e932f22f2c53e6e45c0</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Apache Synapse, by default no authentication is required for Java Remote Method Invocation (RMI). So Apache Synapse 3.0.1 or all previous releases (3.0.0, 2.1.0, 2.0.0, 1.2, 1.1.2, 1.1.1) allows remote code execution attacks that can be performed by injecting specially crafted serialized objects. And the presence of Apache Commons Collections 3.2.1 (commons-collections-3.2.1.jar) or previous versions in Synapse distribution makes this exploitable. To mitigate the issue, we need to limit RMI access to trusted users only. Further upgrading to 3.0.1 version will eliminate the risk of having said Commons Collection version. In Synapse 3.0.1, Commons Collection has been updated to 3.2.2 version. 
<p>Publish Date: 2017-12-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15708>CVE-2017-15708</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-15708">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-15708</a></p> <p>Release Date: 2017-12-11</p> <p>Fix Resolution: org.apache.synapse:Apache-Synapse:3.0.1;commons-collections:commons-collections:3.2.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-15708 (High) detected in commons-collections-3.2.jar - ## CVE-2017-15708 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.jar</b></p></summary> <p>Types that extend and augment the Java Collections Framework.</p> <p>Library home page: <a href="http://jakarta.apache.org/commons/collections/">http://jakarta.apache.org/commons/collections/</a></p> <p>Path to vulnerable library: _depth_0/bitti/build/distributions/gradleproject2/gradleproject2/lib/commons-collections-3.2.jar,canner/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2/f951934aa5ae5a88d7e6dfaa6d32307d834a88be/commons-collections-3.2.jar</p> <p> Dependency Hierarchy: - :x: **commons-collections-3.2.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/orhanarifoglu/bitti/commit/b9de9df64013494e4cad4e932f22f2c53e6e45c0">b9de9df64013494e4cad4e932f22f2c53e6e45c0</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Apache Synapse, by default no authentication is required for Java Remote Method Invocation (RMI). So Apache Synapse 3.0.1 or all previous releases (3.0.0, 2.1.0, 2.0.0, 1.2, 1.1.2, 1.1.1) allows remote code execution attacks that can be performed by injecting specially crafted serialized objects. And the presence of Apache Commons Collections 3.2.1 (commons-collections-3.2.1.jar) or previous versions in Synapse distribution makes this exploitable. To mitigate the issue, we need to limit RMI access to trusted users only. Further upgrading to 3.0.1 version will eliminate the risk of having said Commons Collection version. In Synapse 3.0.1, Commons Collection has been updated to 3.2.2 version. 
<p>Publish Date: 2017-12-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15708>CVE-2017-15708</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-15708">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-15708</a></p> <p>Release Date: 2017-12-11</p> <p>Fix Resolution: org.apache.synapse:Apache-Synapse:3.0.1;commons-collections:commons-collections:3.2.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in commons collections jar cve high severity vulnerability vulnerable library commons collections jar types that extend and augment the java collections framework library home page a href path to vulnerable library depth bitti build distributions lib commons collections jar canner gradle caches modules files commons collections commons collections commons collections jar dependency hierarchy x commons collections jar vulnerable library found in head commit a href vulnerability details in apache synapse by default no authentication is required for java remote method invocation rmi so apache synapse or all previous releases allows remote code execution attacks that can be performed by injecting specially crafted serialized objects and the presence of apache commons collections commons collections jar or previous versions in synapse distribution makes this exploitable to mitigate the issue we need to limit rmi access to trusted users only further upgrading to version will eliminate the risk of having said commons collection version in synapse commons collection has been updated to version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache synapse apache synapse commons collections commons collections step up your open source security game with whitesource
0
95,117
3,934,320,615
IssuesEvent
2016-04-25 22:10:50
ngageoint/hootenanny
https://api.github.com/repos/ngageoint/hootenanny
opened
Modify release script to re-run tests after a local merge and before pushing the merge changes
Category: Release Priority: Medium Status: Defined Type: Task
If this isn't done, the code could be corrupted by a bad auto-merge in the tagged release (has happened before).
1.0
Modify release script to re-run tests after a local merge and before pushing the merge changes - If this isn't done, the code could be corrupted by a bad auto-merge in the tagged release (has happened before).
non_process
modify release script to re run tests after a local merge and before pushing the merge changes if this isn t done the code could be corrupted by a bad auto merge in the tagged release has happened before
0
15,038
18,760,116,587
IssuesEvent
2021-11-05 15:31:14
bridgetownrb/bridgetown
https://api.github.com/repos/bridgetownrb/bridgetown
closed
feat: Investigate a centralized, configurable plugin autoloading system (Zeitwerk)
enhancement process
The current manner in which any `.rb` file in `./plugins` is simply `load`ed (and re`load`ed on subsequent changes) is out of step with other parts of the system which use Zeitwerk. In addition, there's no way to easily add new paths to the Zeitwerk autoloader or to trigger a reload from an arbitrary part of the system (based on watching files in some particular folder). It seems to me it'd make sense to double-down on Zeitwerk, ensure there's a centralized/configurable ability to set up load paths, and use that in as many places as necessary. This would be a **breaking change** however as currently files in `plugins` are entirely arbitrary. You could have `foo.rb` define `class Bar`, or not even have a class in it at all. Once Zeitwerk is in place, it would enforce all the usual naming conventions we have elsewhere, in Rails, etc. But I think ultimately that's desirable anyway.
1.0
feat: Investigate a centralized, configurable plugin autoloading system (Zeitwerk) - The current manner in which any `.rb` file in `./plugins` is simply `load`ed (and re`load`ed on subsequent changes) is out of step with other parts of the system which use Zeitwerk. In addition, there's no way to easily add new paths to the Zeitwerk autoloader or to trigger a reload from an arbitrary part of the system (based on watching files in some particular folder). It seems to me it'd make sense to double-down on Zeitwerk, ensure there's a centralized/configurable ability to set up load paths, and use that in as many places as necessary. This would be a **breaking change** however as currently files in `plugins` are entirely arbitrary. You could have `foo.rb` define `class Bar`, or not even have a class in it at all. Once Zeitwerk is in place, it would enforce all the usual naming conventions we have elsewhere, in Rails, etc. But I think ultimately that's desirable anyway.
process
feat investigate a centralized configurable plugin autoloading system zeitwerk the current manner in which any rb file in plugins is simply load ed and re load ed on subsequent changes is out of step with other parts of the system which use zeitwerk in addition there s no way to easily add new paths to the zeitwerk autoloader or to trigger a reload from an arbitrary part of the system based on watching files in some particular folder it seems to me it d make sense to double down on zeitwerk ensure there s a centralized configurable ability to set up load paths and use that in as many places as necessary this would be a breaking change however as currently files in plugins are entirely arbitrary you could have foo rb define class bar or not even have a class in it at all once zeitwerk is in place it would enforce all the usual naming conventions we have elsewhere in rails etc but i think ultimately that s desirable anyway
1
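The Zeitwerk record above hinges on file-to-constant naming conventions (a file `foo.rb` must define `Foo`, so `foo.rb` defining `class Bar` would break autoloading); a minimal sketch of that path-to-constant mapping, written in Python for illustration rather than Zeitwerk's actual Ruby implementation:

```python
def expected_constant(path):
    # "plugins/foo_bar.rb" -> "FooBar": strip directories and the .rb
    # suffix, then camel-case the underscore-separated stem.
    stem = path.rsplit("/", 1)[-1]
    if stem.endswith(".rb"):
        stem = stem[:-len(".rb")]
    return "".join(part.capitalize() for part in stem.split("_"))
```

Zeitwerk also supports custom inflections (e.g. `html_parser.rb` -> `HTMLParser`), which this sketch deliberately omits.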
16,217
20,743,141,883
IssuesEvent
2022-03-14 19:46:18
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
External HTTPS load balancer that will allow it to negotiate HTTP/3
Deployment Help needed Process: Clarification
Hi! Deployed version: 2.0.5 We've received the following email from GCP: > We are rolling out a change to the External HTTPS load balancer that will allow it to negotiate HTTP/3 if no quicOverride setting has been applied. > Hello Google External HTTPS Load Balancing Customer, > > We are writing to let you know that the Global External HTTPS Load Balancers will change the week of March 21, 2022 so that HTTP/3 is enabled by default unless it is explicitly disabled in the load balancer’s configuration. Some of your projects are configured with Global External HTTPS Load Balancing and they might be affected by this update. > > What do I need to know? > > We are rolling out a change to the Global External HTTPS Load Balancer, allowing it to negotiate HTTP/3 if no quicOverride setting has been applied. > > You can control whether or not your load balancer will negotiate HTTP/3 by configuring the [QUIC protocol support](https://notifications.google.com/g/p/AD-FnExhI6wG31RFpdVYlX6EmvJVh9FpAUqTn1MLDIl0D8MwpwEezsCmTW6sGAk-hn8SPijAkqoL398ImQG7oESbWWVCj42EhWhTady5xNG_2Zedql2G5tVcVQ5r5Q) on your load balancer. (QUIC is the [standardized transport](https://notifications.google.com/g/p/AD-FnEx0CnbJIP4Zen10dFrnO8f8iQG47zOPfEgBGf8CfNtqvdJ9sK7EmcZlSAuUt-poebdjRH1ryP13m-D6-p1fPBRwbZORXkzZCv5ZX__Vgou9) used by HTTP/3.) > > HTTP/3 is a new version of HTTP that has been supported by the External HTTPS Load Balancer [since June 2021](https://notifications.google.com/g/p/AD-FnEwM5wJzmmFz4TyBqIApi1VKayAPT2gzk0EFj9RaDUnmYiM9gEDhF5JIU4QFPERJpe5PGylquvPaHEYIEKdkWkQS3UMdQUAsZv2D-rk59aNyi1XfjiDRoOR72m43ZPFFm_pEdnyEoQKCJHRGAt2mtAPsozB6x2Ak-2cG0SfI3hg). With HTTP/3 support, clients that support HTTP/3 can use it to communicate with the load balancer. This HTTP/3 support does not change the protocol used between the load balancer and backends. > > Clients that do not support HTTP/3 are unaffected by this change. They can continue to use HTTP or HTTP2 when connecting to the load balancer. 
> > What do I need to do? > > No action is required for your load balancer to make use of HTTP/3. However, you can perform the following actions: > > If you want to prevent your load balancer from using HTTP/3, you can set quicOverride in TargetHttpsProxy to DISABLE. > If you want to test HTTP/3 prior to it being enabled, you can set quicOverride to ENABLE. > If you want to verify that HTTP/3 is enabled in your load balancer, you can inspect the β€˜alt-svc’ header in an HTTPS response from the load balancer. If this contains β€˜h3’ then HTTP/3 is enabled. > What happens if I don't do anything? > > If you have not specified a quicOverride setting, with this change your load balancer will allow clients to use HTTP/3. Can this affect our deployment? Do we need to do any actions according to these changes? Won't this affect any security aspects?
1.0
External HTTPS load balancer that will allow it to negotiate HTTP/3 - Hi! Deployed version: 2.0.5 We've received the following email from GCP: > We are rolling out a change to the External HTTPS load balancer that will allow it to negotiate HTTP/3 if no quicOverride setting has been applied. > Hello Google External HTTPS Load Balancing Customer, > > We are writing to let you know that the Global External HTTPS Load Balancers will change the week of March 21, 2022 so that HTTP/3 is enabled by default unless it is explicitly disabled in the load balancer’s configuration. Some of your projects are configured with Global External HTTPS Load Balancing and they might be affected by this update. > > What do I need to know? > > We are rolling out a change to the Global External HTTPS Load Balancer, allowing it to negotiate HTTP/3 if no quicOverride setting has been applied. > > You can control whether or not your load balancer will negotiate HTTP/3 by configuring the [QUIC protocol support](https://notifications.google.com/g/p/AD-FnExhI6wG31RFpdVYlX6EmvJVh9FpAUqTn1MLDIl0D8MwpwEezsCmTW6sGAk-hn8SPijAkqoL398ImQG7oESbWWVCj42EhWhTady5xNG_2Zedql2G5tVcVQ5r5Q) on your load balancer. (QUIC is the [standardized transport](https://notifications.google.com/g/p/AD-FnEx0CnbJIP4Zen10dFrnO8f8iQG47zOPfEgBGf8CfNtqvdJ9sK7EmcZlSAuUt-poebdjRH1ryP13m-D6-p1fPBRwbZORXkzZCv5ZX__Vgou9) used by HTTP/3.) > > HTTP/3 is a new version of HTTP that has been supported by the External HTTPS Load Balancer [since June 2021](https://notifications.google.com/g/p/AD-FnEwM5wJzmmFz4TyBqIApi1VKayAPT2gzk0EFj9RaDUnmYiM9gEDhF5JIU4QFPERJpe5PGylquvPaHEYIEKdkWkQS3UMdQUAsZv2D-rk59aNyi1XfjiDRoOR72m43ZPFFm_pEdnyEoQKCJHRGAt2mtAPsozB6x2Ak-2cG0SfI3hg). With HTTP/3 support, clients that support HTTP/3 can use it to communicate with the load balancer. This HTTP/3 support does not change the protocol used between the load balancer and backends. > > Clients that do not support HTTP/3 are unaffected by this change. 
They can continue to use HTTP or HTTP2 when connecting to the load balancer. > > What do I need to do? > > No action is required for your load balancer to make use of HTTP/3. However, you can perform the following actions: > > If you want to prevent your load balancer from using HTTP/3, you can set quicOverride in TargetHttpsProxy to DISABLE. > If you want to test HTTP/3 prior to it being enabled, you can set quicOverride to ENABLE. > If you want to verify that HTTP/3 is enabled in your load balancer, you can inspect the β€˜alt-svc’ header in an HTTPS response from the load balancer. If this contains β€˜h3’ then HTTP/3 is enabled. > What happens if I don't do anything? > > If you have not specified a quicOverride setting, with this change your load balancer will allow clients to use HTTP/3. Can this affect our deployment? Do we need to do any actions according to these changes? Won't this affect any security aspects?
process
external https load balancer that will allow it to negotiate http hi deployed version we ve received the following email from gcp we are rolling out a change to the external https load balancer that will allow it to negotiate http if no quicoverride setting has been applied hello google external https load balancing customer we are writing to let you know that the global external https load balancers will change the week of march so that http is enabled by default unless it is explicitly disabled in the load balancer’s configuration some of your projects are configured with global external https load balancing and they might be affected by this update what do i need to know we are rolling out a change to the global external https load balancer allowing it to negotiate http if no quicoverride setting has been applied you can control whether or not your load balancer will negotiate http by configuring the on your load balancer quic is the used by http http is a new version of http that has been supported by the external https load balancer with http support clients that support http can use it to communicate with the load balancer this http support does not change the protocol used between the load balancer and backends clients that do not support http are unaffected by this change they can continue to use http or when connecting to the load balancer what do i need to do no action is required for your load balancer to make use of http however you can perform the following actions if you want to prevent your load balancer from using http you can set quicoverride in targethttpsproxy to disable if you want to test http prior to it being enabled you can set quicoverride to enable if you want to verify that http is enabled in your load balancer you can inspect the β€˜alt svc’ header in an https response from the load balancer if this contains β€˜ ’ then http is enabled what happens if i don t do anything if you have not specified a quicoverride setting with this change 
your load balancer will allow clients to use http can this affect our deployment do we need to do any actions according to these changes won t this affect any security aspects
1
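The GCP notice quoted in the record above says HTTP/3 support can be verified by inspecting the `alt-svc` response header for `h3`; a small sketch of that check (the header format follows the quoted guidance, and the parsing is simplified relative to the full Alt-Svc grammar):

```python
def advertises_h3(alt_svc):
    # e.g. 'h3=":443"; ma=2592000, h3-29=":443"' -> True
    # Split on commas into alternatives, then take the protocol id
    # to the left of the "=" in each entry.
    protocols = [entry.split("=", 1)[0].strip()
                 for entry in alt_svc.split(",")]
    return any(p == "h3" or p.startswith("h3-") for p in protocols)
```

Draft identifiers like `h3-29` are counted as well, since servers often advertise them alongside the final `h3` token.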
3,476
6,552,992,974
IssuesEvent
2017-09-05 20:35:14
pelias/pelias
https://api.github.com/repos/pelias/pelias
closed
Simplify installation
glorious future help wanted processed story
One of the core goals of the project is to provide an easy installation process. Due to several factors the install instructions have grown to cover several pages: https://github.com/pelias/openstreetmap I propose that we provide a `pelias` binary file which handles configuring, testing settings are correct and running of imports. If there are any other ways of reducing complexity we should explore them also.
1.0
Simplify installation - One of the core goals of the project is to provide an easy installation process. Due to several factors the install instructions have grown to cover several pages: https://github.com/pelias/openstreetmap I propose that we provide a `pelias` binary file which handles configuring, testing settings are correct and running of imports. If there are any other ways of reducing complexity we should explore them also.
process
simplify installation one of the core goals of the project is to provide an easy installation process due to several factors the install instructions have grown to cover several pages i propose that we provide a pelias binary file which handles configuring testing settings are correct and running of imports if there are any other ways of reducing complexity we should explore them also
1
697,783
23,952,542,994
IssuesEvent
2022-09-12 12:43:42
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
m.youtube.com - see bug description
priority-critical browser-fenix engine-gecko
<!-- @browser: Firefox Mobile 77.0 --> <!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:77.0) Gecko/77.0 Firefox/77.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/110602 --> <!-- @extra_labels: browser-fenix --> **URL**: https://m.youtube.com/watch?v=Qsp3Z25u6VI **Browser / Version**: Firefox Mobile 77.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: Yes Other **Problem type**: Something else **Description**: Pas de lecture en arrière plan. Toutefois disponible sur firefox Classique. Shame on you. **Steps to Reproduce**: Already said. Bad job. Pas de lecture en arrière plan disponible sur Firefox Nightly ( Et ça les gars, ça pue, ça dérange, ça craint SURTOUT SUR SMARTPHONES... du coup hop je désinstalle direct : Legit ) Signé : Nightly Community. <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❀️_
1.0
m.youtube.com - see bug description - <!-- @browser: Firefox Mobile 77.0 --> <!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:77.0) Gecko/77.0 Firefox/77.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/110602 --> <!-- @extra_labels: browser-fenix --> **URL**: https://m.youtube.com/watch?v=Qsp3Z25u6VI **Browser / Version**: Firefox Mobile 77.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: Yes Other **Problem type**: Something else **Description**: Pas de lecture en arrière plan. Toutefois disponible sur firefox Classique. Shame on you. **Steps to Reproduce**: Already said. Bad job. Pas de lecture en arrière plan disponible sur Firefox Nightly ( Et ça les gars, ça pue, ça dérange, ça craint SURTOUT SUR SMARTPHONES... du coup hop je désinstalle direct : Legit ) Signé : Nightly Community. <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❀️_
non_process
m youtube com see bug description url browser version firefox mobile operating system android tested another browser yes other problem type something else description pas de lecture en arrière plan toutefois disponible sur firefox classique shame on you steps to reproduce already said bad job pas de lecture en arrière plan disponible sur firefox nightly et ça les gars ça pue ça dérange ça craint surtout sur smartphones du coup hop je désinstalle direct legit signé nightly community browser configuration none from with ❀️
0
665,117
22,299,879,719
IssuesEvent
2022-06-13 07:42:42
kubermatic/kubermatic
https://api.github.com/repos/kubermatic/kubermatic
closed
Tracker: Latest Ubuntu 20.04 cloud images cannot run containers and become unresponsive
kind/bug priority/high
**To resolve this issue, update your `MachineDeployments` to explicitly reference a cloud image, e.g. an older AWS AMI. We believe that the issue was introduced on June 6 or 7. You can also update your `Seed` to default an image. Auto-upgrade solutions for Ubuntu 20.04 nodes should be disabled for the time being.** ### What happened? <!-- Try to provide as much information as possible. If you're reporting a security issue, please check the guidelines for reporting security issues: https://github.com/kubermatic/kubermatic/blob/master/CONTRIBUTING.md#reporting-a-security-vulnerability --> Recent Ubuntu 20.04 cloud images (verified for AWS, other providers might also experience the same problem) result in unresponsive Kubernetes nodes when trying to launch machines in KKP user clusters with it. Newly joined nodes will stay in `NotReady` state. Nodes are also not reachable via SSH. If you manage to access the machine in some way, you'll be discovering a kernel trace. We observed AMIs for `ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220607` to be affected. If no image reference is provided, machine-controller choses the latest image from a trusted cloud provider source, but those images are specifically those affected by the issue. This is an upstream issue tracked here: https://bugs.launchpad.net/ubuntu/+source/linux-aws/+bug/1977919. The issue seems to be with launching containers from a container runtime in general (both docker and containerd are affected). The stack trace from that issue is copied below for discoverability, but we were able to observe a very similar trace in our setup: ``` [ 12.314552] VFS: Close: file count is 0 [ 12.351090] ------------[ cut here ]------------ [ 12.351093] kernel BUG at include/linux/fs.h:3104! 
[ 12.355272] invalid opcode: 0000 [#1] SMP PTI [ 12.358963] CPU: 1 PID: 863 Comm: sed Not tainted 5.13.0-1028-aws #31~20.04.1-Ubuntu [ 12.366241] Hardware name: Amazon EC2 m5.large/, BIOS 1.0 10/16/2017 [ 12.371130] RIP: 0010:__fput+0x247/0x250 [ 12.374897] Code: 00 48 85 ff 0f 84 8b fe ff ff f6 c7 40 0f 85 82 fe ff ff e8 ab 38 00 00 e9 78 fe ff ff 4c 89 f7 e8 2e 88 02 00 e9 b5 fe ff ff <0f> 0b 0f 1f 80 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 53 31 db 48 [ 12.389075] RSP: 0018:ffffb50280d9fd88 EFLAGS: 00010246 [ 12.393425] RAX: 0000000000000000 RBX: 00000000000a801d RCX: ffff9152e0716000 [ 12.398679] RDX: ffff9152cf075280 RSI: 0000000000000001 RDI: 0000000000000000 [ 12.403879] RBP: ffffb50280d9fdb0 R08: 0000000000000001 R09: ffff9152dfcba2c8 [ 12.409102] R10: ffffb50280d9fd88 R11: ffff9152d04e9d10 R12: ffff9152d04e9d00 [ 12.414333] R13: ffff9152dfcba2c8 R14: ffff9152cf0752a0 R15: ffff9152dfc2e180 [ 12.419533] FS: 0000000000000000(0000) GS:ffff9153ea900000(0000) knlGS:0000000000000000 [ 12.426937] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 12.431506] CR2: 0000556cf30250a8 CR3: 00000000bce10006 CR4: 00000000007706e0 [ 12.436716] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 12.441941] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 12.447170] PKRU: 55555554 [ 12.450355] Call Trace: [ 12.453408] <TASK> [ 12.456296] ____fput+0xe/0x10 [ 12.459633] task_work_run+0x70/0xb0 [ 12.463157] do_exit+0x37b/0xaf0 [ 12.466570] do_group_exit+0x43/0xb0 [ 12.470142] __x64_sys_exit_group+0x18/0x20 [ 12.473989] do_syscall_64+0x61/0xb0 [ 12.477565] ? exit_to_user_mode_prepare+0x9b/0x1c0 [ 12.481734] ? do_user_addr_fault+0x1d0/0x650 [ 12.485665] ? irqentry_exit_to_user_mode+0x9/0x20 [ 12.489790] ? irqentry_exit+0x19/0x30 [ 12.493443] ? exc_page_fault+0x8f/0x170 [ 12.497199] ? 
asm_exc_page_fault+0x8/0x30 [ 12.501013] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 12.505289] RIP: 0033:0x7f80d42a1bd6 [ 12.508868] Code: Unable to access opcode bytes at RIP 0x7f80d42a1bac. [ 12.513783] RSP: 002b:00007ffe924f9ed8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7 [ 12.520897] RAX: ffffffffffffffda RBX: 00007f80d45a4740 RCX: 00007f80d42a1bd6 [ 12.526115] RDX: 0000000000000000 RSI: 000000000000003c RDI: 0000000000000000 [ 12.531328] RBP: 0000000000000000 R08: 00000000000000e7 R09: fffffffffffffe98 [ 12.536484] R10: 00007f80d3d422a0 R11: 0000000000000246 R12: 00007f80d45a4740 [ 12.541687] R13: 0000000000000002 R14: 00007f80d45ad708 R15: 0000000000000000 [ 12.546916] </TASK> [ 12.549829] Modules linked in: xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype iptable_filter iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c bpfilter br_netfilter bridge stp llc aufs overlay nls_iso8859_1 dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua crct10dif_pclmul ppdev crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd psmouse cryptd parport_pc input_leds parport ena serio_raw sch_fq_codel ipmi_devintf ipmi_msghandler msr drm ip_tables x_tables autofs4 [ 12.583913] ---[ end trace 77367fed4d782aa4 ]--- [ 12.587963] RIP: 0010:__fput+0x247/0x250 [ 12.591729] Code: 00 48 85 ff 0f 84 8b fe ff ff f6 c7 40 0f 85 82 fe ff ff e8 ab 38 00 00 e9 78 fe ff ff 4c 89 f7 e8 2e 88 02 00 e9 b5 fe ff ff <0f> 0b 0f 1f 80 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 53 31 db 48 [ 12.605796] RSP: 0018:ffffb50280d9fd88 EFLAGS: 00010246 [ 12.610166] RAX: 0000000000000000 RBX: 00000000000a801d RCX: ffff9152e0716000 [ 12.615417] RDX: ffff9152cf075280 RSI: 0000000000000001 RDI: 0000000000000000 [ 12.620635] RBP: ffffb50280d9fdb0 R08: 0000000000000001 R09: ffff9152dfcba2c8 [ 12.625878] R10: ffffb50280d9fd88 R11: ffff9152d04e9d10 R12: ffff9152d04e9d00 [ 12.631121] R13: ffff9152dfcba2c8 R14: ffff9152cf0752a0 R15: ffff9152dfc2e180 [ 
12.636358] FS: 0000000000000000(0000) GS:ffff9153ea900000(0000) knlGS:0000000000000000 [ 12.643770] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 12.648355] CR2: 0000556cf30250a8 CR3: 00000000bce10006 CR4: 00000000007706e0 [ 12.653610] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 12.658843] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 12.664076] PKRU: 55555554 [ 12.667279] Fixing recursive fault but reboot is needed! ``` ### Expected behavior <!-- What did you expected to happen? --> Machines should successfully join the cluster and be able to run containers. ### How to reproduce the issue? <!-- Please provide as much information as possible, so we can reproduce the issue on our own. --> - Create an AWS user cluster and do not set an AMI reference. ### How is your environment configured? - KKP version: `master` (all versions expected to be affected) - Shared or separate master/seed clusters?: shared ### Provide your KKP manifest here (if applicable) <!-- Providing an applicable manifest (KubermaticConfiguration, Seed, Cluster or other resources) will help us to reproduce the issue. Please make sure to redact all secrets (e.g. passwords, URLs...)! --> <details> ```yaml # paste manifest here ``` </details> ### What cloud provider are you running on? <!-- AWS, Azure, DigitalOcean, GCP, Hetzner Cloud, Nutanix, OpenStack, Equinix Metal (Packet), VMware vSphere, Other (e.g. baremetal or non-natively supported provider) --> - AWS - GCP (others are affected as well, likely) ### What operating system are you running in your user cluster? <!-- Ubuntu 20.04, CentOS 7, Rocky Linux 8, Flatcar Linux, ... (optional, bug might not be related to user cluster) --> Ubuntu 20.04 ### Additional information <!-- Additional information about the bug you're reporting (optional). --> - Upstream bug: https://bugs.launchpad.net/ubuntu/+source/linux-aws/+bug/1977919
1.0
Tracker: Latest Ubuntu 20.04 cloud images cannot run containers and become unresponsible - **To resolve this issue, update your `MachineDeployments` to explicitly reference a cloud image, e.g. an older AWS AMI. We believe that the issue was introduced on June 6 or 7. You can also update your `Seed` to default an image. Auto-upgrade solutions for Ubuntu 20.04 nodes should be disabled for the time being.** ### What happened? <!-- Try to provide as much information as possible. If you're reporting a security issue, please check the guidelines for reporting security issues: https://github.com/kubermatic/kubermatic/blob/master/CONTRIBUTING.md#reporting-a-security-vulnerability --> Recent Ubuntu 20.04 cloud images (verified for AWS, other providers might also experience the same problem) result in unresponsive Kubernetes nodes when trying to launch machines in KKP user clusters with it. Newly joined nodes will stay in `NotReady` state. Nodes are also not reachable via SSH. If you manage to access the machine in some way, you'll be discovering a kernel trace. We observed AMIs for `ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220607` to be affected. If no image reference is provided, machine-controller choses the latest image from a trusted cloud provider source, but those images are specifically those affected by the issue. This is an upstream issue tracked here: https://bugs.launchpad.net/ubuntu/+source/linux-aws/+bug/1977919. The issue seems to be with launching containers from a container runtime in general (both docker and containerd are affected). The stack trace from that issue is copied below for discoverability, but we were able to observe a very similar trace in our setup: ``` [ 12.314552] VFS: Close: file count is 0 [ 12.351090] ------------[ cut here ]------------ [ 12.351093] kernel BUG at include/linux/fs.h:3104! 
[ 12.355272] invalid opcode: 0000 [#1] SMP PTI [ 12.358963] CPU: 1 PID: 863 Comm: sed Not tainted 5.13.0-1028-aws #31~20.04.1-Ubuntu [ 12.366241] Hardware name: Amazon EC2 m5.large/, BIOS 1.0 10/16/2017 [ 12.371130] RIP: 0010:__fput+0x247/0x250 [ 12.374897] Code: 00 48 85 ff 0f 84 8b fe ff ff f6 c7 40 0f 85 82 fe ff ff e8 ab 38 00 00 e9 78 fe ff ff 4c 89 f7 e8 2e 88 02 00 e9 b5 fe ff ff <0f> 0b 0f 1f 80 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 53 31 db 48 [ 12.389075] RSP: 0018:ffffb50280d9fd88 EFLAGS: 00010246 [ 12.393425] RAX: 0000000000000000 RBX: 00000000000a801d RCX: ffff9152e0716000 [ 12.398679] RDX: ffff9152cf075280 RSI: 0000000000000001 RDI: 0000000000000000 [ 12.403879] RBP: ffffb50280d9fdb0 R08: 0000000000000001 R09: ffff9152dfcba2c8 [ 12.409102] R10: ffffb50280d9fd88 R11: ffff9152d04e9d10 R12: ffff9152d04e9d00 [ 12.414333] R13: ffff9152dfcba2c8 R14: ffff9152cf0752a0 R15: ffff9152dfc2e180 [ 12.419533] FS: 0000000000000000(0000) GS:ffff9153ea900000(0000) knlGS:0000000000000000 [ 12.426937] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 12.431506] CR2: 0000556cf30250a8 CR3: 00000000bce10006 CR4: 00000000007706e0 [ 12.436716] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 12.441941] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 12.447170] PKRU: 55555554 [ 12.450355] Call Trace: [ 12.453408] <TASK> [ 12.456296] ____fput+0xe/0x10 [ 12.459633] task_work_run+0x70/0xb0 [ 12.463157] do_exit+0x37b/0xaf0 [ 12.466570] do_group_exit+0x43/0xb0 [ 12.470142] __x64_sys_exit_group+0x18/0x20 [ 12.473989] do_syscall_64+0x61/0xb0 [ 12.477565] ? exit_to_user_mode_prepare+0x9b/0x1c0 [ 12.481734] ? do_user_addr_fault+0x1d0/0x650 [ 12.485665] ? irqentry_exit_to_user_mode+0x9/0x20 [ 12.489790] ? irqentry_exit+0x19/0x30 [ 12.493443] ? exc_page_fault+0x8f/0x170 [ 12.497199] ? 
asm_exc_page_fault+0x8/0x30 [ 12.501013] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 12.505289] RIP: 0033:0x7f80d42a1bd6 [ 12.508868] Code: Unable to access opcode bytes at RIP 0x7f80d42a1bac. [ 12.513783] RSP: 002b:00007ffe924f9ed8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7 [ 12.520897] RAX: ffffffffffffffda RBX: 00007f80d45a4740 RCX: 00007f80d42a1bd6 [ 12.526115] RDX: 0000000000000000 RSI: 000000000000003c RDI: 0000000000000000 [ 12.531328] RBP: 0000000000000000 R08: 00000000000000e7 R09: fffffffffffffe98 [ 12.536484] R10: 00007f80d3d422a0 R11: 0000000000000246 R12: 00007f80d45a4740 [ 12.541687] R13: 0000000000000002 R14: 00007f80d45ad708 R15: 0000000000000000 [ 12.546916] </TASK> [ 12.549829] Modules linked in: xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype iptable_filter iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c bpfilter br_netfilter bridge stp llc aufs overlay nls_iso8859_1 dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua crct10dif_pclmul ppdev crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd psmouse cryptd parport_pc input_leds parport ena serio_raw sch_fq_codel ipmi_devintf ipmi_msghandler msr drm ip_tables x_tables autofs4 [ 12.583913] ---[ end trace 77367fed4d782aa4 ]--- [ 12.587963] RIP: 0010:__fput+0x247/0x250 [ 12.591729] Code: 00 48 85 ff 0f 84 8b fe ff ff f6 c7 40 0f 85 82 fe ff ff e8 ab 38 00 00 e9 78 fe ff ff 4c 89 f7 e8 2e 88 02 00 e9 b5 fe ff ff <0f> 0b 0f 1f 80 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 53 31 db 48 [ 12.605796] RSP: 0018:ffffb50280d9fd88 EFLAGS: 00010246 [ 12.610166] RAX: 0000000000000000 RBX: 00000000000a801d RCX: ffff9152e0716000 [ 12.615417] RDX: ffff9152cf075280 RSI: 0000000000000001 RDI: 0000000000000000 [ 12.620635] RBP: ffffb50280d9fdb0 R08: 0000000000000001 R09: ffff9152dfcba2c8 [ 12.625878] R10: ffffb50280d9fd88 R11: ffff9152d04e9d10 R12: ffff9152d04e9d00 [ 12.631121] R13: ffff9152dfcba2c8 R14: ffff9152cf0752a0 R15: ffff9152dfc2e180 [ 
12.636358] FS: 0000000000000000(0000) GS:ffff9153ea900000(0000) knlGS:0000000000000000 [ 12.643770] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 12.648355] CR2: 0000556cf30250a8 CR3: 00000000bce10006 CR4: 00000000007706e0 [ 12.653610] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 12.658843] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 12.664076] PKRU: 55555554 [ 12.667279] Fixing recursive fault but reboot is needed! ``` ### Expected behavior <!-- What did you expected to happen? --> Machines should successfully join the cluster and be able to run containers. ### How to reproduce the issue? <!-- Please provide as much information as possible, so we can reproduce the issue on our own. --> - Create an AWS user cluster and do not set an AMI reference. ### How is your environment configured? - KKP version: `master` (all versions expected to be affected) - Shared or separate master/seed clusters?: shared ### Provide your KKP manifest here (if applicable) <!-- Providing an applicable manifest (KubermaticConfiguration, Seed, Cluster or other resources) will help us to reproduce the issue. Please make sure to redact all secrets (e.g. passwords, URLs...)! --> <details> ```yaml # paste manifest here ``` </details> ### What cloud provider are you running on? <!-- AWS, Azure, DigitalOcean, GCP, Hetzner Cloud, Nutanix, OpenStack, Equinix Metal (Packet), VMware vSphere, Other (e.g. baremetal or non-natively supported provider) --> - AWS - GCP (others are affected as well, likely) ### What operating system are you running in your user cluster? <!-- Ubuntu 20.04, CentOS 7, Rocky Linux 8, Flatcar Linux, ... (optional, bug might not be related to user cluster) --> Ubuntu 20.04 ### Additional information <!-- Additional information about the bug you're reporting (optional). --> - Upstream bug: https://bugs.launchpad.net/ubuntu/+source/linux-aws/+bug/1977919
non_process
tracker latest ubuntu cloud images cannot run containers and become unresponsible to resolve this issue update your machinedeployments to explicitly reference a cloud image e g an older aws ami we believe that the issue was introduced on june or you can also update your seed to default an image auto upgrade solutions for ubuntu nodes should be disabled for the time being what happened try to provide as much information as possible if you re reporting a security issue please check the guidelines for reporting security issues recent ubuntu cloud images verified for aws other providers might also experience the same problem result in unresponsive kubernetes nodes when trying to launch machines in kkp user clusters with it newly joined nodes will stay in notready state nodes are also not reachable via ssh if you manage to access the machine in some way you ll be discovering a kernel trace we observed amis for ubuntu images hvm ssd ubuntu focal server to be affected if no image reference is provided machine controller choses the latest image from a trusted cloud provider source but those images are specifically those affected by the issue this is an upstream issue tracked here the issue seems to be with launching containers from a container runtime in general both docker and containerd are affected the stack trace from that issue is copied below for discoverability but we were able to observe a very similar trace in our setup vfs close file count is kernel bug at include linux fs h invalid opcode smp pti cpu pid comm sed not tainted aws ubuntu hardware name amazon large bios rip fput code ff fe ff ff fe ff ff ab fe ff ff fe ff ff db rsp eflags rax rbx rcx rdx rsi rdi rbp fs gs knlgs cs ds es pkru call trace fput task work run do exit do group exit sys exit group do syscall exit to user mode prepare do user addr fault irqentry exit to user mode irqentry exit exc page fault asm exc page fault entry syscall after hwframe rip code unable to access opcode bytes at rip rsp 
eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp modules linked in xt conntrack xt masquerade nf conntrack netlink nfnetlink xfrm user xfrm algo xt addrtype iptable filter iptable nat nf nat nf conntrack nf defrag nf defrag bpfilter br netfilter bridge stp llc aufs overlay nls dm multipath scsi dh rdac scsi dh emc scsi dh alua pclmul ppdev pclmul ghash clmulni intel aesni intel crypto simd psmouse cryptd parport pc input leds parport ena serio raw sch fq codel ipmi devintf ipmi msghandler msr drm ip tables x tables rip fput code ff fe ff ff fe ff ff ab fe ff ff fe ff ff db rsp eflags rax rbx rcx rdx rsi rdi rbp fs gs knlgs cs ds es pkru fixing recursive fault but reboot is needed expected behavior machines should successfully join the cluster and be able to run containers how to reproduce the issue create an aws user cluster and do not set an ami reference how is your environment configured kkp version master all versions expected to be affected shared or separate master seed clusters shared provide your kkp manifest here if applicable providing an applicable manifest kubermaticconfiguration seed cluster or other resources will help us to reproduce the issue please make sure to redact all secrets e g passwords urls yaml paste manifest here what cloud provider are you running on aws gcp others are affected as well likely what operating system are you running in your user cluster ubuntu additional information upstream bug
0
37,371
8,272,820,724
IssuesEvent
2018-09-17 00:39:15
CleverRaven/Cataclysm-DDA
https://api.github.com/repos/CleverRaven/Cataclysm-DDA
closed
Save files are massive compared to compressed versions
(P5 - Long-term) <Suggestion / Discussion> Code: Performance
**Describe the problem** Save files get massive quickly, upwards of 100MB in some cases. **To Reproduce** 1. Make a save file 2. Compress it 3. See that the compressed version is about 1/10'th the size. **Expected behavior** A save file should be generated with space efficiency in mind. Also, load times will be reduced as the program will only need to read and parse less than 1/6'th the information for a reasonable compression technique. **Actual behavior** The save file stores mostly map files which themselves contain mostly things like "t_grass" or "t_dirt". I did a quick scan of how many times "t_grass" was referenced in my 40MB world, found 1,236,377 matches. **Additional information** Even just replacing the "t_grass" with something short will cut file size at the minimum 5-10MB. (my calculation shows "t_grass" takes up approx. 25% of the entire save file). With an all encompassing compression technique, it would probably change my save file from 40MB to <10MB easily. So essentially, I see a map file going from "t_grass","t_grass","t_grass",..... to "032032032" (032 being the in game debug # for "t_grass". If you want to get technical, you could use basic compression technique to replace all "032" with some single character (or two characters if there's large variety in terrain in any particular map) and map it to "032". Which becomes "0:032","000" Even further, you could just use a delimiter to know which section of the file belongs to the coordinates and terrain without specifying "coordinates:[XX,YY,ZZ]" Also, I don't know why but every map file splits a chunk into 4 smaller chunks with a whole new header with version, coordinates, turn_last_touched, etc. This could probably be cut out to make it one whole chunk (might be a little tough because that's some hefty code).
1.0
Save files are massive compared to compressed versions - **Describe the problem** Save files get massive quickly, upwards of 100MB in some cases. **To Reproduce** 1. Make a save file 2. Compress it 3. See that the compressed version is about 1/10'th the size. **Expected behavior** A save file should be generated with space efficiency in mind. Also, load times will be reduced as the program will only need to read and parse less than 1/6'th the information for a reasonable compression technique. **Actual behavior** The save file stores mostly map files which themselves contain mostly things like "t_grass" or "t_dirt". I did a quick scan of how many times "t_grass" was referenced in my 40MB world, found 1,236,377 matches. **Additional information** Even just replacing the "t_grass" with something short will cut file size at the minimum 5-10MB. (my calculation shows "t_grass" takes up approx. 25% of the entire save file). With an all encompassing compression technique, it would probably change my save file from 40MB to <10MB easily. So essentially, I see a map file going from "t_grass","t_grass","t_grass",..... to "032032032" (032 being the in game debug # for "t_grass". If you want to get technical, you could use basic compression technique to replace all "032" with some single character (or two characters if there's large variety in terrain in any particular map) and map it to "032". Which becomes "0:032","000" Even further, you could just use a delimiter to know which section of the file belongs to the coordinates and terrain without specifying "coordinates:[XX,YY,ZZ]" Also, I don't know why but every map file splits a chunk into 4 smaller chunks with a whole new header with version, coordinates, turn_last_touched, etc. This could probably be cut out to make it one whole chunk (might be a little tough because that's some hefty code).
non_process
save files are massive compared to compressed versions describe the problem save files get massive quickly upwards of in some cases to reproduce make a save file compress it see that the compressed version is about th the size expected behavior a save file should be generated with space efficiency in mind also load times will be reduced as the program will only need to read and parse less than th the information for a reasonable compression technique actual behavior the save file stores mostly map files which themselves contain mostly things like t grass or t dirt i did a quick scan of how many times t grass was referenced in my world found matches additional information even just replacing the t grass with something short will cut file size at the minimum my calculation shows t grass takes up approx of the entire save file with an all encompassing compression technique it would probably change my save file from to easily so essentially i see a map file going from t grass t grass t grass to being the in game debug for t grass if you want to get technical you could use basic compression technique to replace all with some single character or two characters if there s large variety in terrain in any particular map and map it to which becomes even further you could just use a delimiter to know which section of the file belongs to the coordinates and terrain without specifying coordinates also i don t know why but every map file splits a chunk into smaller chunks with a whole new header with version coordinates turn last touched etc this could probably be cut out to make it one whole chunk might be a little tough because that s some hefty code
0
20,556
27,213,071,775
IssuesEvent
2023-02-20 18:20:58
dtcenter/MET
https://api.github.com/repos/dtcenter/MET
closed
Convert the python list to the numpy array for the python embedding at the base class.
type: enhancement priority: medium component: CI/CD reporting: DTC NCAR Base requestor: METplus Team MET: PreProcessing Tools (Point)
The python list and the numpy array are accepted for the python embedding. The current implementation gives a INFO message if the python list contains numpy data type members. - no info message with 1) python list & python general numeric data type members and 2) numpy array & numpy data type members - info message with the python list & the numpy data type members ``` ==INFO_PYTHON== Recommend using numpy instead of python list for obs_hgt (<class 'numpy.float32'>) ==INFO_PYTHON== Recommend using numpy instead of python list for obs_val (<class 'numpy.float32'>) ``` ## Describe the Enhancement ## MET codes handle better with the numpy array than the python list for the python embedding. It's better to convert the python list to numpy array instead of giving a INFO message at the base python class (met_point_obs.py). ### Time Estimate ### *Estimate the amount of work required here.* 4 hours ### Sub-Issues ### Consider breaking the enhancement down into sub-issues. - [ ] *Add a checkbox for each sub-issue here.* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* 2702691 ## Define the Metadata ## ### Assignee ### - [ ] Select **engineer(s)** or **no engineer** required - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [ ] Select **component(s)** - [ ] Select **priority** - [ ] Select **requestor(s)** ### Projects and Milestone ### - [ ] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label - [ ] Select **Milestone** as the next official version or **Future Versions** ## Define Related Issue(s) ## Consider the impact to the other METplus components. 
- [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Enhancement Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>_<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Development** issues Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
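The enhancement above, converting a python list to a numpy array in the base class instead of only emitting the `==INFO_PYTHON==` recommendation, could be sketched roughly as follows. `to_numpy_array` is a hypothetical helper for illustration, not the actual `met_point_obs.py` code.

```python
import numpy as np

def to_numpy_array(value, name="value"):
    """Return `value` as a numpy array, converting python lists silently
    instead of recommending numpy via an INFO message (a sketch of the
    proposed base-class behavior, not MET's real implementation)."""
    if isinstance(value, np.ndarray):
        return value                      # already the preferred type
    if isinstance(value, (list, tuple)):
        return np.asarray(value)          # e.g. obs_hgt, obs_val lists
    raise TypeError(f"{name}: expected list or numpy array, got {type(value)}")
```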
1.0
Convert the python list to the numpy array for the python embedding at the base class. - The python list and the numpy array are accepted for the python embedding. The current implementation gives a INFO message if the python list contains numpy data type members. - no info message with 1) python list & python general numeric data type members and 2) numpy array & numpy data type members - info message with the python list & the numpy data type members ``` ==INFO_PYTHON== Recommend using numpy instead of python list for obs_hgt (<class 'numpy.float32'>) ==INFO_PYTHON== Recommend using numpy instead of python list for obs_val (<class 'numpy.float32'>) ``` ## Describe the Enhancement ## MET codes handle better with the numpy array than the python list for the python embedding. It's better to convert the python list to numpy array instead of giving a INFO message at the base python class (met_point_obs.py). ### Time Estimate ### *Estimate the amount of work required here.* 4 hours ### Sub-Issues ### Consider breaking the enhancement down into sub-issues. - [ ] *Add a checkbox for each sub-issue here.* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* 2702691 ## Define the Metadata ## ### Assignee ### - [ ] Select **engineer(s)** or **no engineer** required - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [ ] Select **component(s)** - [ ] Select **priority** - [ ] Select **requestor(s)** ### Projects and Milestone ### - [ ] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label - [ ] Select **Milestone** as the next official version or **Future Versions** ## Define Related Issue(s) ## Consider the impact to the other METplus components. 
- [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Enhancement Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>_<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Development** issues Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
process
convert the python list to the numpy array for the python embedding at the base class the python list and the numpy array are accepted for the python embedding the current implementation gives a info message if the python list contains numpy data type members no info message with python list python general numeric data type members and numpy array numpy data type members info message with the python list the numpy data type members info python recommend using numpy instead of python list for obs hgt info python recommend using numpy instead of python list for obs val describe the enhancement met codes handle better with the numpy array than the python list for the python embedding it s better to convert the python list to numpy array instead of giving a info message at the base python class met point obs py time estimate estimate the amount of work required here hours sub issues consider breaking the enhancement down into sub issues add a checkbox for each sub issue here relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or future versions define related issue s consider the impact to the other metplus components enhancement checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into develop pull request 
feature define the pull request metadata as permissions allow select reviewer s and development issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
1
12,660
15,032,075,121
IssuesEvent
2021-02-02 09:44:02
timberio/vector
https://api.github.com/repos/timberio/vector
closed
deprecate/remove `check_fields` condition variant
domain: processing transform: filter transform: reduce transform: route type: enhancement
In #4743 a new `remap` condition was introduced, allowing four different condition variants: * `check_fields` * `is_log` * `is_metric` * `remap` Since you can write any condition you could write with `check_fields` using `remap`, and the latter is more expressive, allows more complicated checks, and doesn't involve writing sometimes awkward TOML syntax, the idea is to deprecate `check_fields`. ## Next Steps If we want to move forward with this, there are four steps to consider: 1. Emit a warning when people use `check_fields` (since this variant is the default, it would also warn if they don't define any specific condition variant) 2. Remove mention of `check_fields` in documentation (or discourage usage through added documentation). 3. Swap the default condition variant from `check_fields` to `remap`. 4. Remove all code related to `check_fields`, making it so that people can no longer use this condition variant at all. **Step 1** is easy to introduce, but will introduce quite some noise for many people, given the ubiquity with which this variant is used both in transforms and Vector unit tests. **Step 2** seems fairly straightforward, although simply removing mention of it might confuse people who still use the syntax when writing Vector unit tests for example. **Step 3** is backward-breaking (see **problem** section below), and can potentially hit many people (both for transform configurations, and Vector unit tests), but does have an easy work-around by adding an explicit `type = "check_fields"` to the conditions in the Vector config file. **Step 4** seems like something we only want to do close to a 1.0 release, as it'll prevent people from just slapping on a `type = "check_fields"` in their config to work around the swapped default introduced in step 2. 
## Problem `check_fields` is the default condition variant if an explicit `type` isn't defined, which means that you can currently write: ```toml [transforms.reduce] type = "reduce" ends_when."message.eq" = "done" ``` If we flipped the default, all existing configs using these conditionals would fail, since the `remap` variant expects a `source` field to exist. It is also heavily used in unit tests that people (and we ourselves) write for their Vector configurations. ## Solution If we want to avoid too much breakage, but still swap the default for people, we could write up some kind of way to auto-detect which default to pick: Given these examples: ```toml [transforms.reduce_1] type = "reduce" ends_when."message.eq" = "done" [transforms.reduce_2] type = "reduce" ends_when.source = """ .message == "done" """ ``` We could check if the condition map (in this case `ends_when`) contains a single field, and its field is named `source`. If it is, we set the type to `remap`, otherwise we keep the default `check_fields`. Since `check_type` fields always have a predicate attached to them, there's no risk of someone writing `ends_when.source`, and expecting it to default to `check_fields`. Of course, this assumes we really care about this being a breaking change. If we don't, we can just document this, flip the switch, and ask people to update their configs. We could even introduce a `check_field` function in remap, to allow people to quickly copy/paste their existing checks into remap: ```js check_field("message.eq", "done") ``` ## Solution, Act 2 Even if we swap the default, there's still a question of syntax. 
Right now, you would write the following: ```toml [transforms.reduce] type = "reduce" ends_when.type = "remap" ends_when.source = """ .message == "done" """ ``` Making `remap` the default feels a bit awkward, as you'd have to write `ends_when.source`: ```toml [transforms.reduce] type = "reduce" ends_when.source = """ .message == "done" """ ``` We _could_ make conditions either a string or a map, in which case a string would mean it's the source of the `remap` condition variant, and a map means it's either the default variant, or an explicitly enabled variant: ```toml [transforms.reduce] type = "reduce" # string, default to `remap`, and use value as remap source ends_when = ".foo == true" # map, default to whatever variant we've set as default (currently `check_fields`, later potentially `remap`) ends_when.source = ".foo == true" # map, with an explicit variant set ends_when.type = "check_fields" ends_when."foo.eq" = true ```
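The auto-detect heuristic proposed in the Solution section (a condition map with exactly one field named `source` defaults to `remap`, anything else keeps the `check_fields` default) can be sketched in a few lines. This mirrors the TOML examples above, not Vector's actual Rust deserialization code.

```python
# Sketch of the proposed default-detection for condition maps: an explicit
# `type` wins, a lone `source` key implies the `remap` variant, and
# everything else falls back to the current `check_fields` default.

def detect_condition_type(condition: dict) -> str:
    if "type" in condition:               # explicitly selected variant
        return condition["type"]
    if set(condition) == {"source"}:      # only a `source` field => remap
        return "remap"
    return "check_fields"                 # current default variant
```

Because `check_fields` predicates always carry a qualifier (e.g. `"message.eq"`), a bare `source` key is unambiguous, which is what makes the heuristic safe.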
1.0
deprecate/remove `check_fields` condition variant - In #4743 a new `remap` condition was introduced, allowing four different condition variants: * `check_fields` * `is_log` * `is_metric` * `remap` Since you can write any condition you could write with `check_fields` using `remap`, and the latter is more expressive, allows more complicated checks, and doesn't involve writing sometimes awkward TOML syntax, the idea is to deprecate `check_fields`. ## Next Steps If we want to move forward with this, there are four steps to consider: 1. Emit a warning when people use `check_fields` (since this variant is the default, it would also warn if they don't define any specific condition variant) 2. Remove mention of `check_fields` in documentation (or discourage usage through added documentation). 3. Swap the default condition variant from `check_fields` to `remap`. 4. Remove all code related to `check_fields`, making it so that people can no longer use this condition variant at all. **Step 1** is easy to introduce, but will introduce quite some noise for many people, given the ubiquity with which this variant is used both in transforms and Vector unit tests. **Step 2** seems fairly straightforward, although simply removing mention of it might confuse people who still use the syntax when writing Vector unit tests for example. **Step 3** is backward-breaking (see **problem** section below), and can potentially hit many people (both for transform configurations, and Vector unit tests), but does have an easy work-around by adding an explicit `type = "check_fields"` to the conditions in the Vector config file. **Step 4** seems like something we only want to do close to a 1.0 release, as it'll prevent people from just slapping on a `type = "check_fields"` in their config to work around the swapped default introduced in step 2. 
## Problem `check_fields` is the default condition variant if an explicit `type` isn't defined, which means that you can currently write: ```toml [transforms.reduce] type = "reduce" ends_when."message.eq" = "done" ``` If we flipped the default, all existing configs using these conditionals would fail, since the `remap` variant expects a `source` field to exist. It is also heavily used in unit tests that people (and we ourselves) write for their Vector configurations. ## Solution If we want to avoid too much breakage, but still swap the default for people, we could write up some kind of way to auto-detect which default to pick: Given these examples: ```toml [transforms.reduce_1] type = "reduce" ends_when."message.eq" = "done" [transforms.reduce_2] type = "reduce" ends_when.source = """ .message == "done" """ ``` We could check if the condition map (in this case `ends_when`) contains a single field, and its field is named `source`. If it is, we set the type to `remap`, otherwise we keep the default `check_fields`. Since `check_type` fields always have a predicate attached to them, there's no risk of someone writing `ends_when.source`, and expecting it to default to `check_fields`. Of course, this assumes we really care about this being a breaking change. If we don't, we can just document this, flip the switch, and ask people to update their configs. We could even introduce a `check_field` function in remap, to allow people to quickly copy/paste their existing checks into remap: ```js check_field("message.eq", "done") ``` ## Solution, Act 2 Even if we swap the default, there's still a question of syntax. 
Right now, you would write the following: ```toml [transforms.reduce] type = "reduce" ends_when.type = "remap" ends_when.source = """ .message == "done" """ ``` Making `remap` the default feels a bit awkward, as you'd have to write `ends_when.source`: ```toml [transforms.reduce] type = "reduce" ends_when.source = """ .message == "done" """ ``` We _could_ make conditions either a string or a map, in which case a string would mean it's the source of the `remap` condition variant, and a map means it's either the default variant, or an explicitly enabled variant: ```toml [transforms.reduce] type = "reduce" # string, default to `remap`, and use value as remap source ends_when = ".foo == true" # map, default to whatever variant we've set as default (currently `check_fields`, later potentially `remap`) ends_when.source = ".foo == true" # map, with an explicit variant set ends_when.type = "check_fields" ends_when."foo.eq" = true ```
process
deprecate remove check fields condition variant in a new remap condition was introduced allowing four different condition variants check fields is log is metric remap since you can write any condition you could write with check fields using remap and the latter is more expressive allows more complicated checks and doesn t involve writing sometimes awkward toml syntax the idea is to deprecate check fields next steps if we want to move forward with this there are four steps to consider emit a warning when people use check fields since this variant is the default it would also warn if they don t define any specific condition variant remove mention of check fields in documentation or discourage usage through added documentation swap the default condition variant from check fields to remap remove all code related to check fields making it so that people can no longer use this condition variant at all step is easy to introduce but will introduce quite some noise for many people given the ubiquity with which this variant is used both in transforms and vector unit tests step seems fairly straightforward although simply removing mention of it might confuse people who still use the syntax when writing vector unit tests for example step is backward breaking see problem section below and can potentially hit many people both for transform configurations and vector unit tests but does have an easy work around by adding an explicit type check fields to the conditions in the vector config file step seems like something we only want to do close to a release as it ll prevent people from just slapping on a type check fields in their config to work around the swapped default introduced in step problem check fields is the default condition variant if an explicit type isn t defined which means that you can currently write toml type reduce ends when message eq done if we flipped the default all existing configs using these conditionals would fail since the remap variant expects a source 
field to exist it is also heavily used in unit tests that people and we ourselves write for their vector configurations solution if we want to avoid too much breakage but still swap the default for people we could write up some kind of way to auto detect which default to pick given these examples toml type reduce ends when message eq done type reduce ends when source message done we could check if the condition map in this case ends when contains a single field and its field is named source if it is we set the type to remap otherwise we keep the default check fields since check type fields always have a predicate attached to them there s no risk of someone writing ends when source and expecting it to default to check fields of course this assumes we really care about this being a breaking change if we don t we can just document this flip the switch and ask people to update their configs we could even introduce a check field function in remap to allow people to quickly copy paste their existing checks into remap js check field message eq done solution act even if we swap the default there s still a question of syntax right now you would write the following toml type reduce ends when type remap ends when source message done making remap the default feels a bit awkward as you d have to write ends when source toml type reduce ends when source message done we could make conditions either a string or a map in which case a string would mean it s the source of the remap condition variant and a map means it s either the default variant or an explicitly enabled variant toml type reduce string default to remap and use value as remap source ends when foo true map default to whatever variant we ve set as default currently check fields later potentially remap ends when source foo true map with an explicit variant set ends when type check fields ends when foo eq true
1
2,767
5,703,576,896
IssuesEvent
2017-04-18 00:27:06
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
List all requests hitting a specific page
duplicate enhancement log-processing
### Problem & Feature Request I'd like to see all traffic hitting a specific url (or page, file, address; whatever you call it) in a list, so I can know who visited the specific page, or practically, did somebody visit the page that I told him to ;-) I mean bot's are annoying AF and making noise, while I just want to see valuable statistics to pages that I actually made for people to visit (e.g. interest analysis, etc). You do have a list of URLs that are being visited, but I'd like to be able to click into the url listing, and see the actual requests w/ ip addresses. So yeah, that's basically my problem. ### Practical solution for now, before the feature is available #### `grep` pre-filtering That's kindda obvious, use grep to filter out requests hitting the url, then hand it over to GoAccess; ***But*** I'm having problems with that -- I'm not an advanced linux user...and I've no clue how to do "tripple piping"...here's a practical example: ``` zcat access.log.*.gz | goaccess access.log access.log.1 - -a -o full-report.html ``` That's the command I currently use to generate report; But adding grep? ``` grep -i 'interesting_page' zcat access.log.*.gz | goaccess access.log access.log.1 - -a -o intro-report.html ``` That didn't work :-(
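The pipeline in the report fails because `grep` is handed `zcat` and the filenames as literal arguments instead of reading the decompressed stream; the filter stage has to sit between `zcat` and `goaccess` (`zcat access.log.*.gz | grep -i 'interesting_page' | goaccess ...`). A pure-Python sketch of the same case-insensitive pre-filter, handling both plain and gzip-compressed logs, might look like this (the function name is hypothetical):

```python
import gzip

def filter_log_lines(pattern, paths):
    """Yield log lines containing `pattern` (case-insensitive) from plain
    or gzip-compressed files; the output can be piped into goaccess on
    stdin just like the grep stage in the shell pipeline."""
    needle = pattern.lower()
    for path in paths:
        opener = gzip.open if path.endswith(".gz") else open
        with opener(path, "rt", errors="replace") as handle:
            for line in handle:
                if needle in line.lower():
                    yield line
```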
1.0
List all requests hitting a specific page - ### Problem & Feature Request I'd like to see all traffic hitting a specific url (or page, file, address; whatever you call it) in a list, so I can know who visited the specific page, or practically, did somebody visit the page that I told him to ;-) I mean bot's are annoying AF and making noise, while I just want to see valuable statistics to pages that I actually made for people to visit (e.g. interest analysis, etc). You do have a list of URLs that are being visited, but I'd like to be able to click into the url listing, and see the actual requests w/ ip addresses. So yeah, that's basically my problem. ### Practical solution for now, before the feature is available #### `grep` pre-filtering That's kindda obvious, use grep to filter out requests hitting the url, then hand it over to GoAccess; ***But*** I'm having problems with that -- I'm not an advanced linux user...and I've no clue how to do "tripple piping"...here's a practical example: ``` zcat access.log.*.gz | goaccess access.log access.log.1 - -a -o full-report.html ``` That's the command I currently use to generate report; But adding grep? ``` grep -i 'interesting_page' zcat access.log.*.gz | goaccess access.log access.log.1 - -a -o intro-report.html ``` That didn't work :-(
process
list all requests hitting a specific page problem feature request i d like to see all traffic hitting a specific url or page file address whatever you call it in a list so i can know who visited the specific page or practically did somebody visit the page that i told him to i mean bot s are annoying af and making noise while i just want to see valuable statistics to pages that i actually made for people to visit e g interest analysis etc you do have a list of urls that are being visited but i d like to be able to click into the url listing and see the actual requests w ip addresses so yeah that s basically my problem practical solution for now before the feature is available grep pre filtering that s kindda obvious use grep to filter out requests hitting the url then hand it over to goaccess but i m having problems with that i m not an advanced linux user and i ve no clue how to do tripple piping here s a practical example zcat access log gz goaccess access log access log a o full report html that s the command i currently use to generate report but adding grep grep i interesting page zcat access log gz goaccess access log access log a o intro report html that didn t work
1
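For the `grep` pre-filtering workaround described in the report above, a working pipeline can be sketched as follows. This is only a sketch under the reporter's own assumptions (the log file names `access.log`, `access.log.1`, `access.log.*.gz` and the hypothetical `interesting_page` URL fragment): the filter sits between the merged log stream and GoAccess, and `cat` joins the plain and decompressed rotated logs first.

```shell
# Merge the current logs with the decompressed rotated ones ("-" makes
# cat read zcat's output from stdin), keep only lines matching the page
# of interest, then hand the filtered stream to GoAccess on its stdin.
# 'interesting_page' and the log file names are the reporter's examples.
zcat access.log.*.gz \
  | cat access.log access.log.1 - \
  | grep -i 'interesting_page' \
  | goaccess - -a -o intro-report.html
```

The original attempt failed because `grep` was handed `zcat` and the file names as literal arguments instead of receiving the log stream through a pipe; with the order above, each stage reads exactly one stdin stream.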
8,867
11,963,455,930
IssuesEvent
2020-04-05 15:57:31
aiidateam/aiida-core
https://api.github.com/repos/aiidateam/aiida-core
closed
Validation of `CalcJob` inputs should not be performed in `CalcJobNode._validate`
priority/nice-to-have topic/calc-jobs topic/processes type/enhancement type/refactoring
For historical reasons, part of the validation of `CalcJob` inputs is still performed in `CalcJobNode._validate`, which is called in the `store` method of the node. However, this validation should really be done by the `ProcessSpec`. For this to be possible, the validator signature as defined by `plumpy` needs to be adapted to also provide a reference to the process instance and not just the port value, as is currently the case. This mirrors `click` validators and callbacks, where the signature is `ctx, param, value` so that the validator has access to the context.
1.0
Validation of `CalcJob` inputs should not be performed in `CalcJobNode._validate` - For historical reasons, part of the validation of `CalcJob` inputs is still performed in `CalcJobNode._validate`, which is called in the `store` method of the node. However, this validation should really be done by the `ProcessSpec`. For this to be possible, the validator signature as defined by `plumpy` needs to be adapted to also provide a reference of the process instance and not just the port value as is currently the case. Just like in `click` validators and callbacks, where the signature is `ctx, param, value` such that the validator has access to the context.
process
validation of calcjob inputs should not be performed in calcjobnode validate for historical reasons part of the validation of calcjob inputs is still performed in calcjobnode validate which is called in the store method of the node however this validation should really be done by the processspec for this to be possible the validator signature as defined by plumpy needs to be adapted to also provide a reference of the process instance and not just the port value as is currently the case just like in click validators and callbacks where the signature is ctx param value such that the validator has access to the context
1
197
2,603,049,421
IssuesEvent
2015-02-24 13:49:31
DynareTeam/dynare
https://api.github.com/repos/DynareTeam/dynare
closed
macroprocessor: add possibility to evaluate expression in @#include
enhancement preprocessor
Currently, ```@#include``` expects a quoted string. Allow the user to also pass a macro expression or at least a macro value to the ```@#include``` expression. Motivated by the following forum question: http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=6532&p=18600
1.0
macroprocessor: add possibility to evaluate expression in @#include - Currently, ```@#include``` expects a quoted string. Allow the user to also pass a macro expression or at least a macro value to the ```@#include``` expression. Motivated by the following forum question: http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=6532&p=18600
process
macroprocessor add possibility to evaluate expression in include currently include expects a quoted string allow the user to also pass a macro expression or at least a macro value to the include expression motivated by the following forum question
1
390,012
26,843,366,365
IssuesEvent
2023-02-03 03:43:09
RalphHightower/RalphHightower
https://api.github.com/repos/RalphHightower/RalphHightower
opened
MurdaughAlex_TimelineWeek2JudgementDay: 2023-02-02 updates
documentation
**What page should this be added to?**<br> MurdaughAlex_TimelineWeek2JudgementDay **What section/heading should this be added to?**<br> Chronological **Include the Markdown text that is to be added below:**<br> | **February 2, 2023** | | [Alex Murdaugh's colleagues detail his alleged financial crimes to an empty jury box](https://www.postandcourier.com/murdaugh-updates/alex-murdaughs-colleagues-detail-his-alleged-financial-crimes-to-an-empty-jury-box/article_ee5fad96-a303-11ed-8e47-6bc90236d387.html) | WALTERBORO — The chief financial officer of Alex Murdaugh's former law firm testified at length Feb. 2 about how she discovered the prominent Hampton attorney had secretly stolen vast sums from his legal clients and law partners over the past decade. Under questioning from a state prosecutor, Jeanne Seckinger detailed the myriad schemes Murdaugh allegedly used to pilfer nearly $9 million from those who trusted him. A dummy bank account. Fake structured settlements. Fraudulent checks and money transfers. Seckinger's testimony is an important piece of the state's murder case against Murdaugh in the killings of his wife and son on June 7, 2021. It remains to be seen, however, whether a Colleton County jury will ever hear about it. Seckinger delivered her testimony with the jury excused from the room. So did two other witnesses who could speak to Murdaugh's purported financial crimes: Michael Gunn, principal of an insurance company Murdaugh is accused of impersonating to steal from his clients; and Chris Wilson, a Bamberg attorney who testified he was tricked into helping Murdaugh plunder some $792,000 from his law firm.| |Their testimony came during a special hearing — a sort of mock trial — borne out of a protracted dispute between prosecutors and defense attorneys. They have been legally jousting over whether jurors should be told about Murdaugh's alleged decade-long spree of thefts and betrayal. The S.C. Attorney General's Office is calling witnesses and presenting exhibits; Murdaugh's lawyers are cross-examining them. But they are delivering their case to an audience of one — Judge Clifton Newman. If Newman sides with prosecutors, these witnesses will return to the stand to repeat their testimony before the jury. If Murdaugh's team prevails, jurors might never hear from them.| |**Hounds at the door**| |Prosecutors hope to show that Murdaugh, 54, was aware his financial crimes were about to be exposed, in part because Seckinger had confronted him on the morning of June 7, 2021, about $792,000 in missing fees from a case he worked with Wilson, the Bamberg attorney. In an act of calculated desperation, prosecutors say, Murdaugh fatally shot his 52-year-old wife, Maggie, and son Paul, 22, that evening. He hoped to engender sympathy for himself and delay Seckinger's questions, among other inquiries, investigators allege. Months later, prosecutors said, Murdaugh attempted a similar scheme. Over Labor Day weekend 2021, they claim he organized a bizarre incident in which he was shot in the head and said an unknown assailant had tried to kill him.| |*"When the hounds are at the door … for Alex Murdaugh, violence happens,"* lead prosecutor Creighton Waters told the judge.| |His tactic initially worked, Waters said. Seckinger backed off. A hearing scheduled for June 10, 2021 — in which Murdaugh might have been forced to turn over details of his finances — was postponed. But it all unraveled in September of that year when Seckinger and her coworkers at the Peters, Murdaugh, Parker, Eltzroth, Detrick law firm resumed their probe and uncovered a trail of thefts. All told, Murdaugh has been charged with nearly 100 crimes in the time since.| |Murdaugh's defense team argued Feb. 2 that the alleged financial crimes are irrelevant to the trial at hand — where their client faces two counts of murder. *"It's all just a theory,"* defense attorney Jim Griffin told the judge. *"There's no facts. Their theory is the best way out is for him to murder his wife and son"* and put himself in the middle of a homicide investigation? Griffin said prosecutors want testimony about the financial allegations admitted because they don't have enough evidence to convict Murdaugh of murder. Instead, they need to smear Murdaugh as a bad guy, he has said. *"They've got a whole lot more evidence about financial misconduct than they do about murder,"* Griffin argued. *"That's what this is all about."* These arguments are not new. The two sides have made and repeated them in legal motions and pretrial hearings. Yet the judge has held off deciding how much — if any — of the financial evidence should be admitted. On Feb. 2, he sent the jury out of the room so he could hear a preview of that aspect of the state's case.| |Seckinger testified that in early September, she was on the verge of discovering a scheme in which Murdaugh allegedly sent millions in client money to a personal bank account. That day, Sept. 2, 2021, Murdaugh's paralegal moved a folder on his desk, and a check from Wilson's law office slipped out, representing part of the fees Seckinger had been searching for. It was proof, she said, that Murdaugh had been stealing from the firm his great-grandfather founded a century earlier.| |Taking the stand for the first time, Chris Wilson said he was misled by Murdaugh, a friend since middle school, and his law school roommate. After a case they worked together, Wilson testified, Murdaugh told him he had received permission from PMPED to put his $792,000 share of the legal fees into annuities. So Wilson sent the money to Murdaugh directly in March 2021, instead of to his law firm as usual. But as the firm continued to question Murdaugh about the missing fees, Murdaugh reportedly told Wilson he wasn't able to buy annuities after all. Wilson testified Murdaugh pledged to send the legal fees back so Wilson could send the entire $792,000 to PMPED. Murdaugh then sent Wilson $600,000, money he came up with by taking out loans, and told him the rest was on the way. Wilson said he agreed to spot Murdaugh $192,000 in the meantime.| |On the stand, Wilson said he didn't see any *"red flags"* at that point. But in August, after the slayings, he was concerned Murdaugh might try to hurt himself, and he decided he needed documentation of the loan. He wrote a short promissory note on a page of lined notebook paper. Murdaugh signed it. Less than three weeks later, PMPED told Wilson that Murdaugh had stolen from the firm and from its clients. Wilson insisted that Murdaugh explain what happened. They met on the porch of Murdaugh's parents' home, where Murdaugh told Wilson he had been addicted to painkillers for more than 20 years and admitted to stealing money. *"He said he had (expletive) a lot of people up,"* Wilson said. The two did not speak again, Wilson said.| |Murdaugh's trial will remain bifurcated for at least another day. The financial hearing will resume at 9:30 a.m. Feb. 3, though the jury will not be present. The judge indicated he wants to hear from at least one more financial crime witness before issuing a ruling. That witness, however, isn't available to testify until Feb. 6.| |Though the fight over financial evidence dominated the day, the jury was able to hear witnesses testify specifically about the investigation into the June 2021 slayings.| |Dylan Hightower, an investigator with the 14th Circuit Solicitor's Office, testified about call logs he pulled separately from Murdaugh's phone and from the telecommunications company Verizon. When comparing the two records, Hightower testified, all but two of Murdaugh's 75 calls from the date of the slayings had been deleted from his phone. Hightower said he couldn't say who deleted them or why. Investigators downloaded the contents of Murdaugh's phone three days after the slayings.| |The state's 21st and latest witness, State Law Enforcement Division agent Katie McAllister, testified she searched every room of the main home on the Murdaughs' 1,770-acre hunting property, known as Moselle, on the day after the slayings. Under cross-examination from defense attorney Dick Harpootlian, she acknowledged she found no blood or bloody clothes in the house that would have indicated the killer cleaned themselves up there afterward.| **Describe alternatives you've considered**<br> Bookmarks in browsers are not portable. **Additional context**<br> Add any other context or screenshots about the feature request here.
1.0
MurdaughAlex_TimelineWeek2JudgementDay: 2023-02-02 updates - **What page should this be added to?**<br> MurdaughAlex_TimelineWeek2JudgementDay **What section/heading should this be added to?**<br> Chronological **Include the Markdown text that is to be added below:**<br> | **February 2, 2023** | | [Alex Murdaugh’s colleagues detail his alleged financial crimes to an empty jury box](https://www.postandcourier.com/murdaugh-updates/alex-murdaughs-colleagues-detail-his-alleged-financial-crimes-to-an-empty-jury-box/article_ee5fad96-a303-11ed-8e47-6bc90236d387.html) | WALTERBORO β€” The chief financial officer of Alex Murdaugh’s former law firm testified at length Feb. 2 about how she discovered the prominent Hampton attorney had secretly stolen vast sums from his legal clients and law partners over the past decade. Under questioning from a state prosecutor, Jeanne Seckinger detailed the myriad schemes Murdaugh allegedly used to pilfer nearly $9 million from those who trusted him. A dummy bank account. Fake structured settlements. Fraudulent checks and money transfers. Seckinger’s testimony is an important piece of the state’s murder case against Murdaugh in the killings of his wife and son on June 7, 2021. It remains to be seen, however, whether a Colleton County jury will ever hear about it. Seckinger delivered her testimony with the jury excused from the room. So did two other witnesses who could speak to Murdaugh’s purported financial crimes: Michael Gunn, principal of an insurance company Murdaugh is accused of impersonating to steal from his clients; and Chris Wilson, a Bamberg attorney who testified he was tricked into helping Murdaugh plunder some $792,000 from his law firm.| |Their testimony came during a special hearing β€” a sort of mock trial β€” borne out of a protracted dispute between prosecutors and defense attorneys. They have been legally jousting over whether jurors should be told about Murdaugh’s alleged decade-long spree of thefts and betrayal. 
The S.C. Attorney General’s Office is calling witnesses and presenting exhibits; Murdaugh’s lawyers are cross-examining them. But they are delivering their case to an audience of one β€” Judge Clifton Newman. If Newman sides with prosecutors, these witnesses will return to the stand to repeat their testimony before the jury. If Murdaugh’s team prevails, jurors might never hear from them.| |**Hounds at the door**| |Prosecutors hope to show that Murdaugh, 54, was aware his financial crimes were about to be exposed, in part because Seckinger had confronted him on the morning of June 7, 2021, about $792,000 in missing fees from a case he worked with Wilson, the Bamberg attorney. In an act of calculated desperation, prosecutors say, Murdaugh fatally shot his 52-year-old wife, Maggie, and son Paul, 22, that evening. He hoped to engender sympathy for himself and delay Seckinger’s questions, among inquiries, investigators allege. Months later, prosecutors said, Murdaugh attempted a similar scheme. Over Labor Day weekend 2021, they claim he organized a bizarre incident in which he was shot in the head and said an unknown assailant had tried to kill him.| |*β€œWhen the hounds are at the door … for Alex Murdaugh, violence happens,”* lead prosecutor Creighton Waters told the judge.| |His tactic initially worked, Waters said. Seckinger backed off. A hearing scheduled for June 10, 2021 β€” in which Murdaugh might have been forced to turn over details of his finances β€” was postponed. But it all unraveled in September of that year when Seckinger and her coworkers at the Peters, Murdaugh, Parker, Eltzroth, Detrick law firm resumed their probe and uncovered a trail of thefts. All told, Murdaugh has been charged with nearly 100 crimes in the time since.| |Murdaugh’s defense team argued Feb. 2 that the alleged financial crimes are irrelevant to the trial at hand β€” where their client faces two counts of murder. 
*β€œIt’s all just a theory,” *defense attorney Jim Griffin told the judge. *β€œThere’s no facts. Their theory is the best way out is for him to murder his wife and son”* and put himself in the middle of a homicide investigation? Griffin said prosecutors want testimony about the financial allegations admitted because they don’t have enough evidence to convict Murdaugh of murder. Instead, they need to smear Murdaugh as a bad guy, he has said. *β€œThey’ve got a whole lot more evidence about financial misconduct than they do about murder,”* Griffin argued. *β€œThat’s what this is all about.”* These arguments are not new. The two sides have made and repeated them in legal motions and pretrial hearings. Yet the judge has held off deciding how much β€” if any β€” of the financial evidence should be admitted. On Feb. 2, he sent the jury out of the room so he could hear a preview of that aspect of the state’s case.| !Seckinger testified that in early September, she was on the verge of discovering a scheme in which Murdaugh allegedly sent millions in client money to a personal bank account. That day, Sept. 2, 2021, Murdaugh’s paralegal moved a folder on his desk, and a check from Wilson’s law office slipped out, representing part of the fees Seckinger had been searching for. It was proof, she said, that Murdaugh had been stealing from the firm his great-grandfather founded a century earlier.| |Taking the stand for the first time, Chris Wilson said he was misled by Murdaugh, a friend since middle school, and his law school roommate. After a case they worked together, Wilson testified, Murdaugh told him he had received permission from PMPED to put his $792,000 share of the legal fees into annuities. So Wilson sent the money to Murdaugh directly in March 2021, instead of to his law firm as usual. But as the firm continued to question Murdaugh about the missing fees, Murdaugh reportedly told Wilson he wasn’t able to buy annuities after all. 
Wilson testified Murdaugh pledged to send the legal fees back so Wilson could send the entire $792,000 to PMPED. Murdaugh then sent Wilson $600,000, money he came up with by taking out loans, and told him the rest was on the way. Wilson said he agreed to spot Murdaugh $192,000 in the meantime.| |On the stand, Wilson said he didn’t see any *β€œred flags”* at that point. But in August, after the slayings, he was concerned Murdaugh might try to hurt himself, and he decided he needed documentation of the loan. He wrote a short promissory note on a page of lined notebook paper. Murdaugh signed it. Less than three weeks later, PMPED told Wilson that Murdaugh had stolen from the firm and from its clients. Wilson insisted that Murdaugh explain what happened. They met on the porch of Murdaugh’s parents’ home, where Murdaugh told Wilson he had been addicted to painkillers for more than 20 years and admitted to stealing money. *β€œHe said he had (expletive) a lot of people up,”* Wilson said. The two did not speak again, Wilson said.| |Murdaugh’s trial will remain bifurcated for at least another day. The financial hearing will resume at 9:30 a.m. Feb. 3, though the jury will not be present. )The judge indicated wants to hear from at least one more financial crime witness before issuing a ruling. That witness, however, isn’t available to testify until Feb. 6.| |Though the fight over financial evidence dominated the day, the jury was able to hear witnesses testify specifically about the investigation into the June 2021 slayings.| |Dylan Hightower, an investigator with the 14th Circuit Solicitor’s Office, testified about call logs he pulled separately from Murdaugh’s phone and from the telecommunications company Verizon. When comparing the two records, Hightower testified, all but two of Murdaugh’s 75 calls from the date of the slayings had been deleted from his phone. Hightower said he couldn’t say who deleted them or why. 
Investigators downloaded the contents of Murdaugh’s phone three days after the slayings.| |The state’s 21st and latest witness, State Law Enforcement Division agent Katie McAllister, testified she searched every room of the main home on the Murdaugh’s 1,770-acre hunting property, known as Moselle, on the day after the slayings. Under cross-examination from defense attorney Dick Harpootlian, she acknowledged she found no blood or bloody clothes in the house that would have indicated the killer cleaned themselves up there afterward.| **Describe alternatives you've considered**<br> Bookmarks in browsers are not portable. **Additional context**<br> Add any other context or screenshots about the feature request here.
non_process
murdaughalex updates what page should this be added to murdaughalex what section heading should this be added to chronological include the markdown text that is to be added below february walterboro β€” the chief financial officer of alex murdaugh’s former law firm testified at length feb about how she discovered the prominent hampton attorney had secretly stolen vast sums from his legal clients and law partners over the past decade under questioning from a state prosecutor jeanne seckinger detailed the myriad schemes murdaugh allegedly used to pilfer nearly million from those who trusted him a dummy bank account fake structured settlements fraudulent checks and money transfers seckinger’s testimony is an important piece of the state’s murder case against murdaugh in the killings of his wife and son on june it remains to be seen however whether a colleton county jury will ever hear about it seckinger delivered her testimony with the jury excused from the room so did two other witnesses who could speak to murdaugh’s purported financial crimes michael gunn principal of an insurance company murdaugh is accused of impersonating to steal from his clients and chris wilson a bamberg attorney who testified he was tricked into helping murdaugh plunder some from his law firm their testimony came during a special hearing β€” a sort of mock trial β€” borne out of a protracted dispute between prosecutors and defense attorneys they have been legally jousting over whether jurors should be told about murdaugh’s alleged decade long spree of thefts and betrayal the s c attorney general’s office is calling witnesses and presenting exhibits murdaugh’s lawyers are cross examining them but they are delivering their case to an audience of one β€” judge clifton newman if newman sides with prosecutors these witnesses will return to the stand to repeat their testimony before the jury if murdaugh’s team prevails jurors might never hear from them hounds at the door prosecutors hope to show 
that murdaugh was aware his financial crimes were about to be exposed in part because seckinger had confronted him on the morning of june about in missing fees from a case he worked with wilson the bamberg attorney in an act of calculated desperation prosecutors say murdaugh fatally shot his year old wife maggie and son paul that evening he hoped to engender sympathy for himself and delay seckinger’s questions among inquiries investigators allege months later prosecutors said murdaugh attempted a similar scheme over labor day weekend they claim he organized a bizarre incident in which he was shot in the head and said an unknown assailant had tried to kill him β€œwhen the hounds are at the door … for alex murdaugh violence happens ” lead prosecutor creighton waters told the judge his tactic initially worked waters said seckinger backed off a hearing scheduled for june β€” in which murdaugh might have been forced to turn over details of his finances β€” was postponed but it all unraveled in september of that year when seckinger and her coworkers at the peters murdaugh parker eltzroth detrick law firm resumed their probe and uncovered a trail of thefts all told murdaugh has been charged with nearly crimes in the time since murdaugh’s defense team argued feb that the alleged financial crimes are irrelevant to the trial at hand β€” where their client faces two counts of murder β€œit’s all just a theory ” defense attorney jim griffin told the judge β€œthere’s no facts their theory is the best way out is for him to murder his wife and son” and put himself in the middle of a homicide investigation griffin said prosecutors want testimony about the financial allegations admitted because they don’t have enough evidence to convict murdaugh of murder instead they need to smear murdaugh as a bad guy he has said β€œthey’ve got a whole lot more evidence about financial misconduct than they do about murder ” griffin argued β€œthat’s what this is all about ” these arguments are not 
new the two sides have made and repeated them in legal motions and pretrial hearings yet the judge has held off deciding how much β€” if any β€” of the financial evidence should be admitted on feb he sent the jury out of the room so he could hear a preview of that aspect of the state’s case seckinger testified that in early september she was on the verge of discovering a scheme in which murdaugh allegedly sent millions in client money to a personal bank account that day sept murdaugh’s paralegal moved a folder on his desk and a check from wilson’s law office slipped out representing part of the fees seckinger had been searching for it was proof she said that murdaugh had been stealing from the firm his great grandfather founded a century earlier taking the stand for the first time chris wilson said he was misled by murdaugh a friend since middle school and his law school roommate after a case they worked together wilson testified murdaugh told him he had received permission from pmped to put his share of the legal fees into annuities so wilson sent the money to murdaugh directly in march instead of to his law firm as usual but as the firm continued to question murdaugh about the missing fees murdaugh reportedly told wilson he wasn’t able to buy annuities after all wilson testified murdaugh pledged to send the legal fees back so wilson could send the entire to pmped murdaugh then sent wilson money he came up with by taking out loans and told him the rest was on the way wilson said he agreed to spot murdaugh in the meantime on the stand wilson said he didn’t see any β€œred flags” at that point but in august after the slayings he was concerned murdaugh might try to hurt himself and he decided he needed documentation of the loan he wrote a short promissory note on a page of lined notebook paper murdaugh signed it less than three weeks later pmped told wilson that murdaugh had stolen from the firm and from its clients wilson insisted that murdaugh explain what happened 
they met on the porch of murdaugh’s parents’ home where murdaugh told wilson he had been addicted to painkillers for more than years and admitted to stealing money β€œhe said he had expletive a lot of people up ” wilson said the two did not speak again wilson said murdaugh’s trial will remain bifurcated for at least another day the financial hearing will resume at a m feb though the jury will not be present the judge indicated wants to hear from at least one more financial crime witness before issuing a ruling that witness however isn’t available to testify until feb though the fight over financial evidence dominated the day the jury was able to hear witnesses testify specifically about the investigation into the june slayings dylan hightower an investigator with the circuit solicitor’s office testified about call logs he pulled separately from murdaugh’s phone and from the telecommunications company verizon when comparing the two records hightower testified all but two of murdaugh’s calls from the date of the slayings had been deleted from his phone hightower said he couldn’t say who deleted them or why investigators downloaded the contents of murdaugh’s phone three days after the slayings the state’s and latest witness state law enforcement division agent katie mcallister testified she searched every room of the main home on the murdaugh’s acre hunting property known as moselle on the day after the slayings under cross examination from defense attorney dick harpootlian she acknowledged she found no blood or bloody clothes in the house that would have indicated the killer cleaned themselves up there afterward describe alternatives you ve considered bookmarks in browsers are not portable additional context add any other context or screenshots about the feature request here
0
733,532
25,310,332,329
IssuesEvent
2022-11-17 16:57:18
COPRS/rs-issues
https://api.github.com/repos/COPRS/rs-issues
closed
[BUG][UWC] Remove ProductionType from filter
bug WERUM dev hmi CCB priority:minor Reconsolidation ops
**Environment:** - Delivery tag: v1.1.0 - Platform: OPS Orange Cloud - Configuration: OPS (UWC 1.4.0 - factory configuration) **Test:** - Name: COPRS-RP-ADST-001178588 - 2.1 - 071/1 (FUN) - Traceability (requirements): DDIP ICD ESA-EOPG-EOPGC-IF-4 **Current Behavior:** Using the UWC filter, a query on ProductionType raised an error (identical to #609): > OK (Status: 500) > Http failure response for https://processing.platform.ops-csc.com/ddip/odata/v1/Products?$format=json&$count=true&$top=15&$filter=contains(Name,%27S1A%27)%20and%20startswith(ProductionType,%27A%27): 500 OK This is a regression from V1. **Expected Behavior:** `ProductionType` shall be removed from the filter list. Note that this is also related to bug #609. **Steps To Reproduce:** Follow test steps (V1.0): 1. Log in to the User-Web-Client GUI -> The home page appears. 2. Start a filter search: carry out the following search: contains(Name,'S1A') -> Some results appear 3. Add a new filter -> The result has been changed with the new filter (contains(Name,'S1A') and startswith(ProductionType,'A')) **Test execution artefacts (i.e. logs, screenshots…)** Filters panel: ![image.png](https://images.zenhubusercontent.com/618e932533b15808a281c31c/15ecd53b-119a-4c6a-a920-435899b102c6) Result panel: ![image.png](https://images.zenhubusercontent.com/618e932533b15808a281c31c/76f11ca3-a2ca-4796-bef5-d2d6f0b5db2f) Note: The test case also raises the bug: COPRS/rs-issues#522 [BUG][UWC] The error message comes out of the dedicated box **Whenever possible, first analysis of the root cause** <!-- A concise description of the first analysis. --> **Bug Generic Definition of Ready (DoR)** - [X] The affected version in which the bug has been found is mentioned - [X] The context and environment of the bug are detailed - [X] The description of the bug is clear and unambiguous - [X] The procedure (steps) to reproduce the bug is clearly detailed - [X] The failed tests are linked to the bug: failed result % expected result - [ ] The tested User Story / features are linked to the bug - [X] Logs are attached if available - [ ] A data set is attached if available - [X] Category label is linked to the bug <!-- infra, mon, pro, perfo, hmi, secu --> **Bug Generic Definition of Done (DoD)** - [ ] The modification implemented (the solution to fix the bug) is described in the bug. - [ ] Unit tests & Continuous integration performed - Test results available - Structural test coverage reported by SONAR - [ ] Code committed in GIT with the right tag or Analysis/Trade-Off documentation up-to-date in the reference-system-documentation repository - [ ] Code is compliant with coding rules (SONAR report as evidence) - [ ] Acceptance criteria of the related User Story are checked and passed
1.0
[BUG][UWC] Remove ProductionType from filter - **Environment:** - Delivery tag: v1.1.0 - Platform: OPS Orange Cloud - Configuration: OPS (UWC 1.4.0 - factory configuration) **Test:** - Name: COPRS-RP-ADST-001178588 - 2.1 - 071/1 (FUN) - Traçability (requirements): DDIP ICD ESA-EOPG-EOPGC-IF-4 **Current Behavior:** Using the filter of UWC, the query on ProductionType raised an error. (identical at #609) > OK (Status: 500) > Http failure response for https://processing.platform.ops-csc.com/ddip/odata/v1/Products?$format=json&$count=true&$top=15&$filter=contains(Name,%27S1A%27)%20and%20startswith(ProductionType,%27A%27): 500 OK This is a regression from V1. **Expected Behavior:** `ProductionType` shall be removed from filter list" Note that is also related to the bug #609. **Steps To Reproduce:** Follow test steps (V1.0): 1. Log in User-Web-Client GUI - > The home page appears. 2. Start a filter research: carry out the following research: contains(Name, 'S1A) -> Some results appears 3. Add a new filter -> The result has been changed with new filter (contains(Name,'S1A') and startswith(ProductionType,'A')) **Test execution artefacts (i.e. logs, screenshots…)** Filters panel: ![image.png](https://images.zenhubusercontent.com/618e932533b15808a281c31c/15ecd53b-119a-4c6a-a920-435899b102c6) Result panel: ![image.png](https://images.zenhubusercontent.com/618e932533b15808a281c31c/76f11ca3-a2ca-4796-bef5-d2d6f0b5db2f) Note: The test case also rises the bug : COPRS/rs-issues#522 [BUG][UWC] The error message comes out the dedicated box **Whenever possible, first analysis of the root cause** <!-- A concise description of the first analysis. 
--> **Bug Generic Definition of Ready (DoR)** - [X] The affect version in which the bug has been found is mentioned - [X] The context and environment of the bug is detailed - [X] The description of the bug is clear and unambiguous - [X] The procedure (steps) to reproduce the bug is clearly detailed - [X] The failed tests is linked to the bug : failed result % expected result - [ ] The tested User Story / features is linked to the bug - [X] Logs are attached if available - [ ] A data set attached if available - [X] Category label is link to the bug <!-- infra, mon, pro, perfo, hmi, secu --> **Bug Generic Definition of Done (DoD)** - [ ] the modification implemented (the solution to fix the bug) is described in the bug. - [ ] Unit tests & Continuous integration performed - Test results available - Structural Test coverage reported by SONAR - [ ] Code committed in GIT with right tag or Analysis/Trade Off documentation up-to-date in reference-system-documentation repository - [ ] Code is compliant with coding rules (SONAR Report as evidence) - [ ] Acceptance criteria of the related User story are checked and Passed
non_process
remove productiontype from filter environment delivery tag platform ops orange cloud configuration ops uwc factory configuration test name coprs rp adst fun traçability requirements ddip icd esa eopg eopgc if current behavior using the filter of uwc the query on productiontype raised an error identical at ok status http failure response for ok this is a regression from expected behavior productiontype shall be removed from filter list note that is also related to the bug steps to reproduce follow test steps log in user web client gui the home page appears start a filter research carry out the following research contains name some results appears add a new filter the result has been changed with new filter contains name and startswith productiontype a test execution artefacts i e logs screenshots… filters panel result panel note the test case also rises the bug coprs rs issues the error message comes out the dedicated box whenever possible first analysis of the root cause bug generic definition of ready dor the affect version in which the bug has been found is mentioned the context and environment of the bug is detailed the description of the bug is clear and unambiguous the procedure steps to reproduce the bug is clearly detailed the failed tests is linked to the bug failed result expected result the tested user story features is linked to the bug logs are attached if available a data set attached if available category label is link to the bug bug generic definition of done dod the modification implemented the solution to fix the bug is described in the bug unit tests continuous integration performed test results available structural test coverage reported by sonar code committed in git with right tag or analysis trade off documentation up to date in reference system documentation repository code is compliant with coding rules sonar report as evidence acceptance criteria of the related user story are checked and passed
0
4,216
2,838,747,976
IssuesEvent
2015-05-27 09:36:43
spring-projects/spring-boot
https://api.github.com/repos/spring-projects/spring-boot
closed
spring.jmx.default-domain and endpoints.jmx.domain not properly documented
documentation
Aside from the inconsistent naming, there is a problem with the `spring.jmx.*` configuration not being documented (it's not using `@ConfigurationProperties` either).
1.0
spring.jmx.default-domain and endpoints.jmx.domain not properly documented - Aside from the inconsistent naming, there is a problem with the `spring.jmx.*` configuration not being documented (it's not using `@ConfigurationProperties` either).
non_process
spring jmx default domain and endpoints jmx domain not properly documented aside from the inconsistent naming there is a problem with the spring jmx configuration not being documented it s not using configurationproperties either
0
624,141
19,687,507,421
IssuesEvent
2022-01-12 00:38:56
flux-framework/flux-accounting
https://api.github.com/repos/flux-framework/flux-accounting
closed
edit-user: edit arguments to subcommand
improvement medium priority
While working on #176, a suggestion was made to keep the same arguments as the `add` subcommand instead of using the `--field` and `--new-value` optional args. The same improvement should probably also made to the `edit-user` subcommand. This will allow for multiple fields to be edited at the same time, allow for resets on optional fields, and will improve the overall usability of the command.
1.0
edit-user: edit arguments to subcommand - While working on #176, a suggestion was made to keep the same arguments as the `add` subcommand instead of using the `--field` and `--new-value` optional args. The same improvement should probably also made to the `edit-user` subcommand. This will allow for multiple fields to be edited at the same time, allow for resets on optional fields, and will improve the overall usability of the command.
non_process
edit user edit arguments to subcommand while working on a suggestion was made to keep the same arguments as the add subcommand instead of using the field and new value optional args the same improvement should probably also made to the edit user subcommand this will allow for multiple fields to be edited at the same time allow for resets on optional fields and will improve the overall usability of the command
0
281,552
30,888,891,122
IssuesEvent
2023-08-04 01:58:32
nidhi7598/linux-4.1.15_CVE-2019-10220
https://api.github.com/repos/nidhi7598/linux-4.1.15_CVE-2019-10220
reopened
CVE-2018-10675 (High) detected in linuxlinux-4.4.302
Mend: dependency security vulnerability
## CVE-2018-10675 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.4.302</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.1.15_CVE-2019-10220/commit/6a0d304d962ca933d73f507ce02157ef2791851c">6a0d304d962ca933d73f507ce02157ef2791851c</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/mm/mempolicy.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The do_get_mempolicy function in mm/mempolicy.c in the Linux kernel before 4.12.9 allows local users to cause a denial of service (use-after-free) or possibly have unspecified other impact via crafted system calls. 
<p>Publish Date: 2018-05-02 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-10675>CVE-2018-10675</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-10675">https://nvd.nist.gov/vuln/detail/CVE-2018-10675</a></p> <p>Release Date: 2018-05-02</p> <p>Fix Resolution: 4.12.9</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-10675 (High) detected in linuxlinux-4.4.302 - ## CVE-2018-10675 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.4.302</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.1.15_CVE-2019-10220/commit/6a0d304d962ca933d73f507ce02157ef2791851c">6a0d304d962ca933d73f507ce02157ef2791851c</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/mm/mempolicy.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The do_get_mempolicy function in mm/mempolicy.c in the Linux kernel before 4.12.9 allows local users to cause a denial of service (use-after-free) or possibly have unspecified other impact via crafted system calls. 
<p>Publish Date: 2018-05-02 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-10675>CVE-2018-10675</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-10675">https://nvd.nist.gov/vuln/detail/CVE-2018-10675</a></p> <p>Release Date: 2018-05-02</p> <p>Fix Resolution: 4.12.9</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files mm mempolicy c vulnerability details the do get mempolicy function in mm mempolicy c in the linux kernel before allows local users to cause a denial of service use after free or possibly have unspecified other impact via crafted system calls publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
207,230
15,798,277,647
IssuesEvent
2021-04-02 18:22:40
ballerina-platform/ballerina-lang
https://api.github.com/repos/ballerina-platform/ballerina-lang
closed
Intermittent test failure in master for :ballerina-cli:test
Priority/Blocker Team/DevTools Type/TestFailure
**Description:** Noticed intermittent `:ballerina-cli:test` failure in master for the last couple of days. Experienced today as well for PR #29351 for 46ff43575043cc2331e4334bbd51d6a233516b69. (https://github.com/ballerina-platform/ballerina-lang/runs/2199038738) But, syncing again with master and letting re-run the jobs get the build passed.
1.0
Intermittent test failure in master for :ballerina-cli:test - **Description:** Noticed intermittent `:ballerina-cli:test` failure in master for the last couple of days. Experienced today as well for PR #29351 for 46ff43575043cc2331e4334bbd51d6a233516b69. (https://github.com/ballerina-platform/ballerina-lang/runs/2199038738) But, syncing again with master and letting re-run the jobs get the build passed.
non_process
intermittent test failure in master for ballerina cli test description noticed intermittent ballerina cli test failure in master for the last couple of days experienced today as well for pr for but syncing again with master and letting re run the jobs get the build passed
0
291,100
8,920,052,380
IssuesEvent
2019-01-21 04:33:05
PolarisSS13/Polaris
https://api.github.com/repos/PolarisSS13/Polaris
closed
Simple bots speak only Gibberish
Bug Priority: CRITICAL
#### Brief description of the issue Medbots, Secbots, etc, all speak in gibberish instead of saying their usual messages #### What you expected to happen Same old bot quotes as always. #### What actually happened It's nonsense. #### Steps to reproduce 1.) Spawn Skrell 2.) Spawn Medbot 3.) Beat up Skrell 4.) Cover ears as Medbot spews red text #### Additional info: - **Server Revision**: Found using the "Show Server Revision" verb under the OOC tab. - **Anything else you may wish to add** (Location if it's a mapping issue, etc)
1.0
Simple bots speak only Gibberish - #### Brief description of the issue Medbots, Secbots, etc, all speak in gibberish instead of saying their usual messages #### What you expected to happen Same old bot quotes as always. #### What actually happened It's nonsense. #### Steps to reproduce 1.) Spawn Skrell 2.) Spawn Medbot 3.) Beat up Skrell 4.) Cover ears as Medbot spews red text #### Additional info: - **Server Revision**: Found using the "Show Server Revision" verb under the OOC tab. - **Anything else you may wish to add** (Location if it's a mapping issue, etc)
non_process
simple bots speak only gibberish brief description of the issue medbots secbots etc all speak in gibberish instead of saying their usual messages what you expected to happen same old bot quotes as always what actually happened it s nonsense steps to reproduce spawn skrell spawn medbot beat up skrell cover ears as medbot spews red text additional info server revision found using the show server revision verb under the ooc tab anything else you may wish to add location if it s a mapping issue etc
0
2,414
5,198,858,293
IssuesEvent
2017-01-23 19:17:37
MobileOrg/mobileorg
https://api.github.com/repos/MobileOrg/mobileorg
closed
UI development using Storyboards
development process
The question has come up in discussion about using iOS Storyboards for UI design. Currently the app makes little to no use of storyboards, with the exception of the newly added Launch screen in the development codebase. Storyboards have been around long enough to make their usage now generally standard I think. Since they are often promoted as the right direction for future app development we should get input from contributors. I am generally in favor of Storyboards, provided we do not create one massive storyboard to rule them all. I think it makes sense to have the Storyboard tell a story, logically grouping view controllers and behaviors but not trying to cover every single view. What other Pros/Cons do you see when compared with just code driven UI or existing .xib files?
1.0
UI development using Storyboards - The question has come up in discussion about using iOS Storyboards for UI design. Currently the app makes little to no use of storyboards, with the exception of the newly added Launch screen in the development codebase. Storyboards have been around long enough to make their usage now generally standard I think. Since they are often promoted as the right direction for future app development we should get input from contributors. I am generally in favor of Storyboards, provided we do not create one massive storyboard to rule them all. I think it makes sense to have the Storyboard tell a story, logically grouping view controllers and behaviors but not trying to cover every single view. What other Pros/Cons do you see when compared with just code driven UI or existing .xib files?
process
ui development using storyboards the question has come up in discussion about using ios storyboards for ui design currently the app makes little to no use of storyboards with the exception of the newly added launch screen in the development codebase storyboards have been around long enough to make their usage now generally standard i think since they are often promoted as the right direction for future app development we should get input from contributors i am generally in favor of storyboards provided we do not create one massive storyboard to rule them all i think it makes sense to have the storyboard tell a story logically grouping view controllers and behaviors but not trying to cover every single view what other pros cons do you see when compared with just code driven ui or existing xib files
1
29,079
13,043,145,977
IssuesEvent
2020-07-29 00:37:00
Azure/azure-rest-api-specs
https://api.github.com/repos/Azure/azure-rest-api-specs
closed
Support for API Management vnet join with multiple locations
API Management Service Attention
As far as I can tell it is not possible to create an APIM using the Rest API that supports both vNet Join and multiple locations. The API seems to only provide a way to join the primary location to a vNet, any additional locations cannot have a subnet ID provided for them to join. Can this be added to the Rest API?
1.0
Support for API Management vnet join with multiple locations - As far as I can tell it is not possible to create an APIM using the Rest API that supports both vNet Join and multiple locations. The API seems to only provide a way to join the primary location to a vNet, any additional locations cannot have a subnet ID provided for them to join. Can this be added to the Rest API?
non_process
support for api management vnet join with multiple locations as far as i can tell it is not possible to create an apim using the rest api that supports both vnet join and multiple locations the api seems to only provide a way to join the primary location to a vnet any additional locations cannot have a subnet id provided for them to join can this be added to the rest api
0
38,339
8,460,130,975
IssuesEvent
2018-10-22 17:56:06
stan-dev/stan
https://api.github.com/repos/stan-dev/stan
opened
fix parser warnings for semicolons in function arguments
bug code cleanup language
#### Summary: The parser is producing a confusing warning for the following ill-formed Stan program: ``` transformed data { real x = atan2(2 ; 3); } ``` #### Current Output: ``` SYNTAX ERROR, MESSAGE(S) FROM PARSER: error in '/Users/carp/temp2/confusing.stan' at line 2, column 17 ------------------------------------------------- 1: transformed data { 2: real x = atan2(2 ; 3); ^ 3: } ------------------------------------------------- PARSER EXPECTED: "(" ``` #### Expected Output: Something like "expected , or )" pointing before the semicolon. #### Current Version: v2.18.0
1.0
fix parser warnings for semicolons in function arguments - #### Summary: The parser is producing a confusing warning for the following ill-formed Stan program: ``` transformed data { real x = atan2(2 ; 3); } ``` #### Current Output: ``` SYNTAX ERROR, MESSAGE(S) FROM PARSER: error in '/Users/carp/temp2/confusing.stan' at line 2, column 17 ------------------------------------------------- 1: transformed data { 2: real x = atan2(2 ; 3); ^ 3: } ------------------------------------------------- PARSER EXPECTED: "(" ``` #### Expected Output: Something like "expected , or )" pointing before the semicolon. #### Current Version: v2.18.0
non_process
fix parser warnings for semicolons in function arguments summary the parser is producing a confusing warning for the following ill formed stan program transformed data real x current output syntax error message s from parser error in users carp confusing stan at line column transformed data real x parser expected expected output something like expected or pointing before the semicolon current version
0
4,902
7,782,290,527
IssuesEvent
2018-06-06 05:44:36
ppy/osu-web
https://api.github.com/repos/ppy/osu-web
closed
Creating empty map submissions bug
beatmap processor overdue
I was trying to upload a map and got the and aerror message that my Tags where too long https://puu.sh/A2Y6K/3637ae722d.png. I tried adjusting it a couple of times till it went through and then i noticed this on my profile https://puu.sh/A39kV/1c5b44ee23.png (see the "(not title)" submissions). I didn't make much out of it until i looked this up https://puu.sh/A398k/a771697b3a.png . As you can see there a 3 shadow maps which come up when i search for my name. (they dont appear the new website doe) i dont know if i shoudl be worried but, yeah just wanted to mention it. <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/57094069-creating-empty-map-submissions-bug?utm_campaign=plugin&utm_content=tracker%2F21853994&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F21853994&utm_medium=issues&utm_source=github). </bountysource-plugin>
1.0
Creating empty map submissions bug - I was trying to upload a map and got the and aerror message that my Tags where too long https://puu.sh/A2Y6K/3637ae722d.png. I tried adjusting it a couple of times till it went through and then i noticed this on my profile https://puu.sh/A39kV/1c5b44ee23.png (see the "(not title)" submissions). I didn't make much out of it until i looked this up https://puu.sh/A398k/a771697b3a.png . As you can see there a 3 shadow maps which come up when i search for my name. (they dont appear the new website doe) i dont know if i shoudl be worried but, yeah just wanted to mention it. <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/57094069-creating-empty-map-submissions-bug?utm_campaign=plugin&utm_content=tracker%2F21853994&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F21853994&utm_medium=issues&utm_source=github). </bountysource-plugin>
process
creating empty map submissions bug i was trying to upload a map and got the and aerror message that my tags where too long i tried adjusting it a couple of times till it went through and then i noticed this on my profile see the not title submissions i didn t make much out of it until i looked this up as you can see there a shadow maps which come up when i search for my name they dont appear the new website doe i dont know if i shoudl be worried but yeah just wanted to mention it want to back this issue we accept bounties via
1
684,418
23,417,569,525
IssuesEvent
2022-08-13 07:06:40
logseq/logseq
https://api.github.com/repos/logseq/logseq
closed
`this._ctx.options.dotConfigRoot` is `null` while unpacking a plugin
priority-A plugins
### What happened? On a fresh install of LogSeq, we were trying to load an unpacked plugin that we're developing and were hit with a typeerror where the root of the issue is that `this._ctx.options.dotConfigRoot` is still null on this line: https://github.com/logseq/logseq/blob/master/libs/src/LSPlugin.core.ts#L1090 That value holding `null` eventually enters `path.normalize` which eventually throws a typeerror trying to call `.charAt` This all prevents the loading of our plugin in development. ### Reproduce the Bug 1. Click load unpacked plugin 2. Select the directory representing the plugin ### Expected Behavior Plugin to load successfully ### Screenshots Could edit the issue later with a loom video as this issue is affecting my colleague but not me. ### Desktop Platform Information Will edit later ### Mobile Platform Information _No response_ ### Additional Context _No response_
1.0
`this._ctx.options.dotConfigRoot` is `null` while unpacking a plugin - ### What happened? On a fresh install of LogSeq, we were trying to load an unpacked plugin that we're developing and were hit with a typeerror where the root of the issue is that `this._ctx.options.dotConfigRoot` is still null on this line: https://github.com/logseq/logseq/blob/master/libs/src/LSPlugin.core.ts#L1090 That value holding `null` eventually enters `path.normalize` which eventually throws a typeerror trying to call `.charAt` This all prevents the loading of our plugin in development. ### Reproduce the Bug 1. Click load unpacked plugin 2. Select the directory representing the plugin ### Expected Behavior Plugin to load successfully ### Screenshots Could edit the issue later with a loom video as this issue is affecting my colleague but not me. ### Desktop Platform Information Will edit later ### Mobile Platform Information _No response_ ### Additional Context _No response_
non_process
this ctx options dotconfigroot is null while unpacking a plugin what happened on a fresh install of logseq we were trying to load an unpacked plugin that we re developing and were hit with a typeerror where the root of the issue is that this ctx options dotconfigroot is still null on this line that value holding null eventually enters path normalize which eventually throws a typeerror trying to call charat this all prevents the loading of our plugin in development reproduce the bug click load unpacked plugin select the directory representing the plugin expected behavior plugin to load successfully screenshots could edit the issue later with a loom video as this issue is affecting my colleague but not me desktop platform information will edit later mobile platform information no response additional context no response
0
13,216
15,687,375,807
IssuesEvent
2021-03-25 13:36:56
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
RuntimeError in multiprocessing/reductions.py when using multiprocessing.Queue with tensors and multiple threads
module: multiprocessing module: multithreading triaged
## πŸ› Bug I have a system in which multiple background processes generate tensors and put them on individual `multiprocessing.Queue`s, one queue per process. In the main process, I have one thread for each of these queues responsible for getting the tensors from the respective queue and processing them further. I occasionally get the following error in one of the consuming threads (full stack trace below): `RuntimeError: dictionary changed size during iteration` in `torch.multiprocessing.reductions.SharedCache.free_dead_references`, specifically [this line](https://github.com/pytorch/pytorch/blob/a0652c8f08f5257735f87106363258d1c14a2b52/torch/multiprocessing/reductions.py#L65) Looking at the implementation of `SharedCache`, I noticed that it makes assumption on the atomicity of certain operations: It assumes that `list(self.items())` in the line I reference above is atomic, so that the dictionary (`self`) cannot change during its execution. However, the `RuntimeError` I'm getting suggests otherwise. I think what happens is that another thread modifies the dictionary via `SharedCache.__setitem__` while `list(self.items())` is executed (note that all threads share the same global instance of `SharedCache` even though they operate on distinct queues). `SharedCache.free_dead_references` already uses a mutex to avoid race conditions when deleting items from the dictionary. However, this mutex does not cover `SharedCache.__setitem__`. To fix this issue, I would suggest to move the mutex to the latter function as follows: ```python def __setitem__(self, key, storage_ref): with self.lock: dict.__setitem__(self, key, storage_ref) if len(self) > self.limit: self.free_dead_references() ``` This would implicitly also provide mutual exclusion for `SharedCache.free_dead_references`. If this makes sense, I'm happy to create a PR for this change. ## To Reproduce Reproducing this issues seems to be tricky because it seems to depend on thread scheduling. 
I haven't had any luck reproducing it with a minimal example. Here is the stack trace from an actual training job: ``` [1,1]<stderr>:Exception in thread Thread-2: [1,1]<stderr>:Traceback (most recent call last): [1,1]<stderr>: File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner [1,1]<stderr>: self.run() [1,1]<stderr>: File "/usr/lib/python3.6/threading.py", line 864, in run [1,1]<stderr>: self._target(*self._args, **self._kwargs) [1,1]<stderr>: File "<removed>", line 1262, in _queue_fetcher_thread_fn [1,1]<stderr>: msg = self._inter_process_queue.get() [1,1]<stderr>: File "/usr/lib/python3.6/multiprocessing/queues.py", line 113, in get [1,1]<stderr>: return _ForkingPickler.loads(res) [1,1]<stderr>: File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/reductions.py", line 300, in rebuild_storage_fd [1,1]<stderr>: shared_cache[fd_id(fd)] = StorageWeakRef(storage) [1,1]<stderr>: File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/reductions.py", line 60, in __setitem__ [1,1]<stderr>: self.free_dead_references() [1,1]<stderr>: File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/reductions.py", line 67, in free_dead_references [1,1]<stderr>: for key, storage_ref in list(self.items()): [1,1]<stderr>:RuntimeError: dictionary changed size during iteration ``` ## Expected behavior PyTorch's custom reduction implementation for sending tensors over `multiprocessing.Queue`s should be thread-safe. The RuntimeError should not occur. 
## Environment PyTorch version: 1.5.1 Is debug build: No CUDA used to build PyTorch: 10.2 OS: Ubuntu 18.04.3 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0 CMake version: Could not collect Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 10.2.89 GPU models and configuration: GPU 0: Tesla V100-SXM2-32GB GPU 1: Tesla V100-SXM2-32GB GPU 2: Tesla V100-SXM2-32GB GPU 3: Tesla V100-SXM2-32GB GPU 4: Tesla V100-SXM2-32GB GPU 5: Tesla V100-SXM2-32GB GPU 6: Tesla V100-SXM2-32GB GPU 7: Tesla V100-SXM2-32GB Nvidia driver version: 450.80.02 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5 Versions of relevant libraries: [pip3] numpy==1.19.0 [pip3] pytorch-crf==0.7.2 [pip3] torch==1.5.1 [pip3] torchvision==0.6.1 [conda] Could not collect ## Additional context I also posted about this issue on the PyTorch Forums [here](https://discuss.pytorch.org/t/runtimeerror-in-pytorchs-reductions-py-when-using-multiprocessing-queues-with-multiple-threads/114246) but haven't received any response.
1.0
RuntimeError in multiprocessing/reductions.py when using multiprocessing.Queue with tensors and multiple threads - ## 🐛 Bug I have a system in which multiple background processes generate tensors and put them on individual `multiprocessing.Queue`s, one queue per process. In the main process, I have one thread for each of these queues responsible for getting the tensors from the respective queue and processing them further. I occasionally get the following error in one of the consuming threads (full stack trace below): `RuntimeError: dictionary changed size during iteration` in `torch.multiprocessing.reductions.SharedCache.free_dead_references`, specifically [this line](https://github.com/pytorch/pytorch/blob/a0652c8f08f5257735f87106363258d1c14a2b52/torch/multiprocessing/reductions.py#L65) Looking at the implementation of `SharedCache`, I noticed that it makes an assumption about the atomicity of certain operations: It assumes that `list(self.items())` in the line I reference above is atomic, so that the dictionary (`self`) cannot change during its execution. However, the `RuntimeError` I'm getting suggests otherwise. I think what happens is that another thread modifies the dictionary via `SharedCache.__setitem__` while `list(self.items())` is executed (note that all threads share the same global instance of `SharedCache` even though they operate on distinct queues). `SharedCache.free_dead_references` already uses a mutex to avoid race conditions when deleting items from the dictionary. However, this mutex does not cover `SharedCache.__setitem__`. To fix this issue, I would suggest moving the mutex to the latter function as follows: ```python def __setitem__(self, key, storage_ref): with self.lock: dict.__setitem__(self, key, storage_ref) if len(self) > self.limit: self.free_dead_references() ``` This would implicitly also provide mutual exclusion for `SharedCache.free_dead_references`. If this makes sense, I'm happy to create a PR for this change. 
## To Reproduce Reproducing this issue is tricky because it appears to depend on thread scheduling. I haven't had any luck reproducing it with a minimal example. Here is the stack trace from an actual training job: ``` [1,1]<stderr>:Exception in thread Thread-2: [1,1]<stderr>:Traceback (most recent call last): [1,1]<stderr>: File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner [1,1]<stderr>: self.run() [1,1]<stderr>: File "/usr/lib/python3.6/threading.py", line 864, in run [1,1]<stderr>: self._target(*self._args, **self._kwargs) [1,1]<stderr>: File "<removed>", line 1262, in _queue_fetcher_thread_fn [1,1]<stderr>: msg = self._inter_process_queue.get() [1,1]<stderr>: File "/usr/lib/python3.6/multiprocessing/queues.py", line 113, in get [1,1]<stderr>: return _ForkingPickler.loads(res) [1,1]<stderr>: File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/reductions.py", line 300, in rebuild_storage_fd [1,1]<stderr>: shared_cache[fd_id(fd)] = StorageWeakRef(storage) [1,1]<stderr>: File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/reductions.py", line 60, in __setitem__ [1,1]<stderr>: self.free_dead_references() [1,1]<stderr>: File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/reductions.py", line 67, in free_dead_references [1,1]<stderr>: for key, storage_ref in list(self.items()): [1,1]<stderr>:RuntimeError: dictionary changed size during iteration ``` ## Expected behavior PyTorch's custom reduction implementation for sending tensors over `multiprocessing.Queue`s should be thread-safe. The RuntimeError should not occur. 
## Environment PyTorch version: 1.5.1 Is debug build: No CUDA used to build PyTorch: 10.2 OS: Ubuntu 18.04.3 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0 CMake version: Could not collect Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 10.2.89 GPU models and configuration: GPU 0: Tesla V100-SXM2-32GB GPU 1: Tesla V100-SXM2-32GB GPU 2: Tesla V100-SXM2-32GB GPU 3: Tesla V100-SXM2-32GB GPU 4: Tesla V100-SXM2-32GB GPU 5: Tesla V100-SXM2-32GB GPU 6: Tesla V100-SXM2-32GB GPU 7: Tesla V100-SXM2-32GB Nvidia driver version: 450.80.02 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5 Versions of relevant libraries: [pip3] numpy==1.19.0 [pip3] pytorch-crf==0.7.2 [pip3] torch==1.5.1 [pip3] torchvision==0.6.1 [conda] Could not collect ## Additional context I also posted about this issue on the PyTorch Forums [here](https://discuss.pytorch.org/t/runtimeerror-in-pytorchs-reductions-py-when-using-multiprocessing-queues-with-multiple-threads/114246) but haven't received any response.
process
runtimeerror in multiprocessing reductions py when using multiprocessing queue with tensors and multiple threads πŸ› bug i have a system in which multiple background processes generate tensors and put them on individual multiprocessing queue s one queue per process in the main process i have one thread for each of these queues responsible for getting the tensors from the respective queue and processing them further i occasionally get the following error in one of the consuming threads full stack trace below runtimeerror dictionary changed size during iteration in torch multiprocessing reductions sharedcache free dead references specifically looking at the implementation of sharedcache i noticed that it makes assumption on the atomicity of certain operations it assumes that list self items in the line i reference above is atomic so that the dictionary self cannot change during its execution however the runtimeerror i m getting suggests otherwise i think what happens is that another thread modifies the dictionary via sharedcache setitem while list self items is executed note that all threads share the same global instance of sharedcache even though they operate on distinct queues sharedcache free dead references already uses a mutex to avoid race conditions when deleting items from the dictionary however this mutex does not cover sharedcache setitem to fix this issue i would suggest to move the mutex to the latter function as follows python def setitem self key storage ref with self lock dict setitem self key storage ref if len self self limit self free dead references this would implicitly also provide mutual exclusion for sharedcache free dead references if this makes sense i m happy to create a pr for this change to reproduce reproducing this issues seems to be tricky because it seems to depend on thread scheduling i haven t had any luck reproducing it with a minimal example here is the stack trace from an actual training job exception in thread thread traceback 
most recent call last file usr lib threading py line in bootstrap inner self run file usr lib threading py line in run self target self args self kwargs file line in queue fetcher thread fn msg self inter process queue get file usr lib multiprocessing queues py line in get return forkingpickler loads res file usr local lib dist packages torch multiprocessing reductions py line in rebuild storage fd shared cache storageweakref storage file usr local lib dist packages torch multiprocessing reductions py line in setitem self free dead references file usr local lib dist packages torch multiprocessing reductions py line in free dead references for key storage ref in list self items runtimeerror dictionary changed size during iteration expected behavior pytorch s custom reduction implementation for sending tensors over multiprocessing queue s should be thread safe the runtimeerror should not occur environment pytorch version is debug build no cuda used to build pytorch os ubuntu lts gcc version ubuntu cmake version could not collect python version is cuda available yes cuda runtime version gpu models and configuration gpu tesla gpu tesla gpu tesla gpu tesla gpu tesla gpu tesla gpu tesla gpu tesla nvidia driver version cudnn version usr lib linux gnu libcudnn so versions of relevant libraries numpy pytorch crf torch torchvision could not collect additional context i also posted about this issue on the pytorch forums but haven t received any response
1
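The mutex-in-`__setitem__` fix proposed in the report above can be sketched in isolation. The class below is an illustrative stand-in for PyTorch's `SharedCache`, not its actual code: a bounded dict of weak references whose writes hold a lock, so the pruning pass can never observe the dict changing size mid-iteration.

```python
import threading
import weakref

class LockedCache(dict):
    """Illustrative stand-in for torch.multiprocessing's SharedCache,
    with the lock moved into __setitem__ as the issue suggests."""

    def __init__(self, limit=3):
        super().__init__()
        self.lock = threading.Lock()
        self.limit = limit

    def __setitem__(self, key, storage_ref):
        # Holding the lock for the whole write means no other thread
        # can resize the dict while the pruning pass iterates over it.
        with self.lock:
            dict.__setitem__(self, key, storage_ref)
            if len(self) > self.limit:
                self._free_dead_references_locked()

    def _free_dead_references_locked(self):
        # Caller already holds self.lock, so iterating here is safe.
        for key, ref in list(self.items()):
            if ref() is None:
                dict.__delitem__(self, key)

class Payload:
    """Dummy object standing in for a tensor's storage."""

cache = LockedCache(limit=3)
live = Payload()                       # kept alive by this strong reference
cache["live"] = weakref.ref(live)
for i in range(10):                    # dead refs get pruned past the limit
    cache[i] = weakref.ref(Payload())  # Payload() dies immediately (CPython)
```

Because every mutation now goes through the same lock, the `RuntimeError: dictionary changed size during iteration` from the stack trace cannot occur in this sketch.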
496,848
14,356,779,024
IssuesEvent
2020-11-30 12:05:10
jenkins-x/jx
https://api.github.com/repos/jenkins-x/jx
closed
build-step-git-merge executes multiple times
area/tekton kind/bug lifecycle/rotten priority/important-soon
The build-step-git-merge step gets executed multiple times when using the meta pipeline. It executes more than once in the meta pipeline, but then also in the execution pipeline. It should only occur once, in the meta pipeline.
1.0
build-step-git-merge executes multiple times - The build-step-git-merge step gets executed multiple times when using the meta pipeline. It executes more than once in the meta pipeline, but then also in the execution pipeline. It should only occur once, in the meta pipeline.
non_process
build step git merge executes multiple times build step git merge step gets executed multiple times with using meta pipeline it gets executed more than once in the meta pipeline but then also in the execution pipeline it should only occur once in the meta pipeline
0
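The intended behavior in the report above, the git-merge step running exactly once, can be illustrated with a small filter. The step names and the flat-list shape are hypothetical and not Jenkins X's actual pipeline model:

```python
def keep_first_occurrence(steps, name="build-step-git-merge"):
    """Drop every occurrence of `name` after the first one.
    Purely illustrative; Jenkins X pipelines are not plain lists."""
    seen = False
    kept = []
    for step in steps:
        if step == name:
            if seen:
                continue  # skip the duplicate executions
            seen = True
        kept.append(step)
    return kept
```

Applied to a hypothetical step list `["build-step-git-merge", "build", "build-step-git-merge"]`, only the first merge step survives.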
9,905
12,908,719,267
IssuesEvent
2020-07-15 07:57:05
ZbayApp/zbay
https://api.github.com/repos/ZbayApp/zbay
closed
Team should receive automated alerts when faucet fails
dev process
When the faucet breaks, it negatively affects the user experience. We should get an automated SMS alert when it goes down, using Pingdom or a similar service.
1.0
Team should receive automated alerts when faucet fails - When the faucet breaks, it negatively affects the user experience. We should get an automated SMS alert when it goes down, using Pingdom or a similar service.
process
team should receive automated alerts when faucet fails when the faucet breaks it negatively affects user experience we should get an automated sms alert when it goes down using pingdom or similar
1
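A minimal sketch of the kind of probe a service like Pingdom runs against the faucet; the health-endpoint URL is hypothetical, and actual SMS delivery would be handled by the monitoring service:

```python
import urllib.request

FAUCET_HEALTH_URL = "https://example.com/faucet/health"  # hypothetical endpoint

def faucet_is_up(url=FAUCET_HEALTH_URL, fetch=urllib.request.urlopen, timeout=5):
    """Return True when the health endpoint answers HTTP 200.
    `fetch` is injectable so the check can be tested without a network."""
    try:
        with fetch(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        # Timeouts, DNS failures, 4xx/5xx (urlopen raises HTTPError), etc.
        return False
```

Returning a plain boolean keeps the probe easy to wire into any alerting backend that pages the team when consecutive checks fail.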
90,313
15,856,104,729
IssuesEvent
2021-04-08 01:31:51
KingdomB/liri-node-app
https://api.github.com/repos/KingdomB/liri-node-app
opened
CVE-2020-15366 (Medium) detected in ajv-6.10.0.tgz
security vulnerability
## CVE-2020-15366 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ajv-6.10.0.tgz</b></p></summary> <p>Another JSON Schema Validator</p> <p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz">https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz</a></p> <p>Path to dependency file: /liri-node-app/package.json</p> <p>Path to vulnerable library: liri-node-app/node_modules/ajv/package.json</p> <p> Dependency Hierarchy: - node-spotify-api-1.1.1.tgz (Root Library) - request-2.88.0.tgz - har-validator-5.1.3.tgz - :x: **ajv-6.10.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.) <p>Publish Date: 2020-07-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p> <p>Release Date: 2020-07-15</p> <p>Fix Resolution: ajv - 6.12.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-15366 (Medium) detected in ajv-6.10.0.tgz - ## CVE-2020-15366 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ajv-6.10.0.tgz</b></p></summary> <p>Another JSON Schema Validator</p> <p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz">https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz</a></p> <p>Path to dependency file: /liri-node-app/package.json</p> <p>Path to vulnerable library: liri-node-app/node_modules/ajv/package.json</p> <p> Dependency Hierarchy: - node-spotify-api-1.1.1.tgz (Root Library) - request-2.88.0.tgz - har-validator-5.1.3.tgz - :x: **ajv-6.10.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.) <p>Publish Date: 2020-07-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p> <p>Release Date: 2020-07-15</p> <p>Fix Resolution: ajv - 6.12.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in ajv tgz cve medium severity vulnerability vulnerable library ajv tgz another json schema validator library home page a href path to dependency file liri node app package json path to vulnerable library liri node app node modules ajv package json dependency hierarchy node spotify api tgz root library request tgz har validator tgz x ajv tgz vulnerable library vulnerability details an issue was discovered in ajv validate in ajv aka another json schema validator a carefully crafted json schema could be provided that allows execution of other code by prototype pollution while untrusted schemas are recommended against the worst case of an untrusted schema should be a denial of service not execution of code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ajv step up your open source security game with whitesource
0
6,242
9,199,554,045
IssuesEvent
2019-03-07 15:11:22
astrolabsoftware/fink-broker
https://api.github.com/repos/astrolabsoftware/fink-broker
opened
Classification: inspect failures
bug processing
While performing early classification (`bin/classify.py`), if the request to the `xMatch` service fails, we set the output to `Fail`. But that is a very poor way of handling it, as the failure can arise for many different reasons. It would be good to investigate in detail the different reasons why the classification fails.
1.0
Classification: inspect failures - While performing early classification (`bin/classify.py`), if the request to the `xMatch` service fails, we set the output to `Fail`. But that is a very poor way of handling it, as the failure can arise for many different reasons. It would be good to investigate in detail the different reasons why the classification fails.
process
classification inspect failures while performing early classification bin classify py if the request to the xmatch service fails we set the output to fail but that is a very poor way of doing as the failure can come from many different reasons it would be good to investigate in details the different reasons for failing the classification
1
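The investigation the issue asks for usually starts by splitting the catch-all `Fail` into coarse reasons. A hedged sketch, assuming the xMatch call raises standard `urllib`/socket exceptions; the category names are made up for illustration:

```python
import socket
import urllib.error

def classify_failure(exc):
    """Map an exception from the xMatch request to a coarse reason
    instead of a single catch-all `Fail`."""
    if isinstance(exc, socket.timeout):
        return "xmatch_timeout"
    if isinstance(exc, urllib.error.HTTPError):  # check before URLError:
        return f"xmatch_http_{exc.code}"         # HTTPError subclasses it
    if isinstance(exc, urllib.error.URLError):
        return "xmatch_unreachable"
    return "xmatch_unknown_failure"
```

Logging these categories per alert would show whether failures are timeouts, server-side errors (with their HTTP codes), or something else entirely.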
20,328
26,968,934,660
IssuesEvent
2023-02-09 02:00:08
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Thu, 9 Feb 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events ### A Dynamic Graph CNN with Cross-Representation Distillation for Event-Based Recognition - **Authors:** Yongjian Deng, Hao Chen, Bochen Xie, Hai Liu, Youfu Li - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.04177 - **Pdf link:** https://arxiv.org/pdf/2302.04177 - **Abstract** It is a popular solution to convert events into dense frame-based representations to use the well-pretrained CNNs in hand. Although with appealing performance, this line of work sacrifices the sparsity/temporal precision of events and usually necessitates heavy-weight models, thereby largely weakening the advantages and real-life application potential of event cameras. A more application-friendly way is to design deep graph models for learning sparse point-based representations from events. Yet, the efficacy of these graph models is far behind the frame-based counterpart with two key limitations: ($i$) simple graph construction strategies without carefully integrating the variant attributes (i.e., semantics, spatial and temporal coordinates) for each vertex, leading to biased graph representation; ($ii$) deficient learning because of the lack of well-pretrained models available. Here we solve the first problem by introducing a new event-based graph CNN (EDGCN), with a dynamic aggregation module to integrate all attributes of vertices adaptively. To alleviate the learning difficulty, we propose to leverage the dense representation counterpart of events as a cross-representation auxiliary to supply additional supervision and prior knowledge for the event graph. To this end, we form a frame-to-graph transfer learning framework with a customized hybrid distillation loss to well respect the varying cross-representation gaps across layers. 
Extensive experiments on multiple vision tasks validate the effectiveness and high generalization ability of our proposed model and distillation strategy (Core components of our codes are submitted with supplementary material and will be made publicly available upon acceptance) ## Keyword: event camera ### EVEN: An Event-Based Framework for Monocular Depth Estimation at Adverse Night Conditions - **Authors:** Peilun Shi, Jiachuan Peng, Jianing Qiu, Xinwei Ju, Frank Po Wen Lo, Benny Lo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.03860 - **Pdf link:** https://arxiv.org/pdf/2302.03860 - **Abstract** Accurate depth estimation under adverse night conditions has practical impact and applications, such as on autonomous driving and rescue robots. In this work, we studied monocular depth estimation at night time in which various adverse weather, light, and different road conditions exist, with data captured in both RGB and event modalities. Event camera can better capture intensity changes by virtue of its high dynamic range (HDR), which is particularly suitable to be applied at adverse night conditions in which the amount of light is limited in the scene. Although event data can retain visual perception that conventional RGB camera may fail to capture, the lack of texture and color information of event data hinders its applicability to accurately estimate depth alone. To tackle this problem, we propose an event-vision based framework that integrates low-light enhancement for the RGB source, and exploits the complementary merits of RGB and event data. A dataset that includes paired RGB and event streams, and ground truth depth maps has been constructed. Comprehensive experiments have been conducted, and the impact of different adverse weather combinations on the performance of framework has also been investigated. 
The results have shown that our proposed framework can better estimate monocular depth at adverse nights than six baselines. ### A Dynamic Graph CNN with Cross-Representation Distillation for Event-Based Recognition - **Authors:** Yongjian Deng, Hao Chen, Bochen Xie, Hai Liu, Youfu Li - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.04177 - **Pdf link:** https://arxiv.org/pdf/2302.04177 - **Abstract** It is a popular solution to convert events into dense frame-based representations to use the well-pretrained CNNs in hand. Although with appealing performance, this line of work sacrifices the sparsity/temporal precision of events and usually necessitates heavy-weight models, thereby largely weakening the advantages and real-life application potential of event cameras. A more application-friendly way is to design deep graph models for learning sparse point-based representations from events. Yet, the efficacy of these graph models is far behind the frame-based counterpart with two key limitations: ($i$) simple graph construction strategies without carefully integrating the variant attributes (i.e., semantics, spatial and temporal coordinates) for each vertex, leading to biased graph representation; ($ii$) deficient learning because of the lack of well-pretrained models available. Here we solve the first problem by introducing a new event-based graph CNN (EDGCN), with a dynamic aggregation module to integrate all attributes of vertices adaptively. To alleviate the learning difficulty, we propose to leverage the dense representation counterpart of events as a cross-representation auxiliary to supply additional supervision and prior knowledge for the event graph. To this end, we form a frame-to-graph transfer learning framework with a customized hybrid distillation loss to well respect the varying cross-representation gaps across layers. 
Extensive experiments on multiple vision tasks validate the effectiveness and high generalization ability of our proposed model and distillation strategy (Core components of our codes are submitted with supplementary material and will be made publicly available upon acceptance) ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP There is no result ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### Hyperspectral Image Compression Using Implicit Neural Representation - **Authors:** Shima Rezasoltani, Faisal Z. Qureshi - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2302.04129 - **Pdf link:** https://arxiv.org/pdf/2302.04129 - **Abstract** Hyperspectral images, which record the electromagnetic spectrum for a pixel in the image of a scene, often store hundreds of channels per pixel and contain an order of magnitude more information than a typical similarly-sized color image. Consequently, concomitant with the decreasing cost of capturing these images, there is a need to develop efficient techniques for storing, transmitting, and analyzing hyperspectral images. This paper develops a method for hyperspectral image compression using implicit neural representations where a multilayer perceptron network $\Phi_\theta$ with sinusoidal activation functions ``learns'' to map pixel locations to pixel intensities for a given hyperspectral image $I$. $\Phi_\theta$ thus acts as a compressed encoding of this image. The original image is reconstructed by evaluating $\Phi_\theta$ at each pixel location. 
We have evaluated our method on four benchmarks -- Indian Pines, Cuprite, Pavia University, and Jasper Ridge -- and we show the proposed method achieves better compression than JPEG, JPEG2000, PCA-DCT, and HVEC at low bitrates. ## Keyword: RAW ### The Devil is in the Wrongly-classified Samples: Towards Unified Open-set Recognition - **Authors:** Jun Cen, Di Luan, Shiwei Zhang, Yixuan Pei, Yingya Zhang, Deli Zhao, Shaojie Shen, Qifeng Chen - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.04002 - **Pdf link:** https://arxiv.org/pdf/2302.04002 - **Abstract** Open-set Recognition (OSR) aims to identify test samples whose classes are not seen during the training process. Recently, Unified Open-set Recognition (UOSR) has been proposed to reject not only unknown samples but also known but wrongly classified samples, which tends to be more practical in real-world applications. The UOSR draws little attention since it is proposed, but we find sometimes it is even more practical than OSR in the real world applications, as evaluation results of known but wrongly classified samples are also wrong like unknown samples. In this paper, we deeply analyze the UOSR task under different training and evaluation settings to shed light on this promising research direction. For this purpose, we first evaluate the UOSR performance of several OSR methods and show a significant finding that the UOSR performance consistently surpasses the OSR performance by a large margin for the same method. We show that the reason lies in the known but wrongly classified samples, as their uncertainty distribution is extremely close to unknown samples rather than known and correctly classified samples. Second, we analyze how the two training settings of OSR (i.e., pre-training and outlier exposure) influence the UOSR. 
We find although they are both beneficial for distinguishing known and correctly classified samples from unknown samples, pre-training is also helpful for identifying known but wrongly classified samples while outlier exposure is not. In addition to different training settings, we also formulate a new evaluation setting for UOSR which is called few-shot UOSR, where only one or five samples per unknown class are available during evaluation to help identify unknown samples. We propose FS-KNNS for the few-shot UOSR to achieve state-of-the-art performance under all settings. ## Keyword: raw image There is no result
2.0
New submissions for Thu, 9 Feb 23 - ## Keyword: events ### A Dynamic Graph CNN with Cross-Representation Distillation for Event-Based Recognition - **Authors:** Yongjian Deng, Hao Chen, Bochen Xie, Hai Liu, Youfu Li - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.04177 - **Pdf link:** https://arxiv.org/pdf/2302.04177 - **Abstract** It is a popular solution to convert events into dense frame-based representations to use the well-pretrained CNNs in hand. Although with appealing performance, this line of work sacrifices the sparsity/temporal precision of events and usually necessitates heavy-weight models, thereby largely weakening the advantages and real-life application potential of event cameras. A more application-friendly way is to design deep graph models for learning sparse point-based representations from events. Yet, the efficacy of these graph models is far behind the frame-based counterpart with two key limitations: ($i$) simple graph construction strategies without carefully integrating the variant attributes (i.e., semantics, spatial and temporal coordinates) for each vertex, leading to biased graph representation; ($ii$) deficient learning because the lack of well pretraining models available. Here we solve the first problem by introducing a new event-based graph CNN (EDGCN), with a dynamic aggregation module to integrate all attributes of vertices adaptively. To alleviate the learning difficulty, we propose to leverage the dense representation counterpart of events as a cross-representation auxiliary to supply additional supervision and prior knowledge for the event graph. To this end, we form a frame-to-graph transfer learning framework with a customized hybrid distillation loss to well respect the varying cross-representation gaps across layers. 
Extensive experiments on multiple vision tasks validate the effectiveness and high generalization ability of our proposed model and distillation strategy (Core components of our codes are submitted with supplementary material and will be made publicly available upon acceptance) ## Keyword: event camera ### EVEN: An Event-Based Framework for Monocular Depth Estimation at Adverse Night Conditions - **Authors:** Peilun Shi, Jiachuan Peng, Jianing Qiu, Xinwei Ju, Frank Po Wen Lo, Benny Lo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.03860 - **Pdf link:** https://arxiv.org/pdf/2302.03860 - **Abstract** Accurate depth estimation under adverse night conditions has practical impact and applications, such as on autonomous driving and rescue robots. In this work, we studied monocular depth estimation at night time in which various adverse weather, light, and different road conditions exist, with data captured in both RGB and event modalities. Event camera can better capture intensity changes by virtue of its high dynamic range (HDR), which is particularly suitable to be applied at adverse night conditions in which the amount of light is limited in the scene. Although event data can retain visual perception that conventional RGB camera may fail to capture, the lack of texture and color information of event data hinders its applicability to accurately estimate depth alone. To tackle this problem, we propose an event-vision based framework that integrates low-light enhancement for the RGB source, and exploits the complementary merits of RGB and event data. A dataset that includes paired RGB and event streams, and ground truth depth maps has been constructed. Comprehensive experiments have been conducted, and the impact of different adverse weather combinations on the performance of framework has also been investigated. 
The results have shown that our proposed framework can better estimate monocular depth at adverse nights than six baselines. ### A Dynamic Graph CNN with Cross-Representation Distillation for Event-Based Recognition - **Authors:** Yongjian Deng, Hao Chen, Bochen Xie, Hai Liu, Youfu Li - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.04177 - **Pdf link:** https://arxiv.org/pdf/2302.04177 - **Abstract** It is a popular solution to convert events into dense frame-based representations to use the well-pretrained CNNs in hand. Although with appealing performance, this line of work sacrifices the sparsity/temporal precision of events and usually necessitates heavy-weight models, thereby largely weakening the advantages and real-life application potential of event cameras. A more application-friendly way is to design deep graph models for learning sparse point-based representations from events. Yet, the efficacy of these graph models is far behind the frame-based counterpart with two key limitations: ($i$) simple graph construction strategies without carefully integrating the variant attributes (i.e., semantics, spatial and temporal coordinates) for each vertex, leading to biased graph representation; ($ii$) deficient learning because the lack of well pretraining models available. Here we solve the first problem by introducing a new event-based graph CNN (EDGCN), with a dynamic aggregation module to integrate all attributes of vertices adaptively. To alleviate the learning difficulty, we propose to leverage the dense representation counterpart of events as a cross-representation auxiliary to supply additional supervision and prior knowledge for the event graph. To this end, we form a frame-to-graph transfer learning framework with a customized hybrid distillation loss to well respect the varying cross-representation gaps across layers. 
Extensive experiments on multiple vision tasks validate the effectiveness and high generalization ability of our proposed model and distillation strategy (Core components of our codes are submitted with supplementary material and will be made publicly available upon acceptance) ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP There is no result ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### Hyperspectral Image Compression Using Implicit Neural Representation - **Authors:** Shima Rezasoltani, Faisal Z. Qureshi - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2302.04129 - **Pdf link:** https://arxiv.org/pdf/2302.04129 - **Abstract** Hyperspectral images, which record the electromagnetic spectrum for a pixel in the image of a scene, often store hundreds of channels per pixel and contain an order of magnitude more information than a typical similarly-sized color image. Consequently, concomitant with the decreasing cost of capturing these images, there is a need to develop efficient techniques for storing, transmitting, and analyzing hyperspectral images. This paper develops a method for hyperspectral image compression using implicit neural representations where a multilayer perceptron network $\Phi_\theta$ with sinusoidal activation functions ``learns'' to map pixel locations to pixel intensities for a given hyperspectral image $I$. $\Phi_\theta$ thus acts as a compressed encoding of this image. The original image is reconstructed by evaluating $\Phi_\theta$ at each pixel location. 
We have evaluated our method on four benchmarks -- Indian Pines, Cuprite, Pavia University, and Jasper Ridge -- and we show the proposed method achieves better compression than JPEG, JPEG2000, PCA-DCT, and HVEC at low bitrates. ## Keyword: RAW ### The Devil is in the Wrongly-classified Samples: Towards Unified Open-set Recognition - **Authors:** Jun Cen, Di Luan, Shiwei Zhang, Yixuan Pei, Yingya Zhang, Deli Zhao, Shaojie Shen, Qifeng Chen - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.04002 - **Pdf link:** https://arxiv.org/pdf/2302.04002 - **Abstract** Open-set Recognition (OSR) aims to identify test samples whose classes are not seen during the training process. Recently, Unified Open-set Recognition (UOSR) has been proposed to reject not only unknown samples but also known but wrongly classified samples, which tends to be more practical in real-world applications. UOSR has drawn little attention since it was proposed, but we find that it is sometimes even more practical than OSR in real-world applications, as evaluation results of known but wrongly classified samples are also wrong, just as for unknown samples. In this paper, we deeply analyze the UOSR task under different training and evaluation settings to shed light on this promising research direction. For this purpose, we first evaluate the UOSR performance of several OSR methods and show a significant finding that the UOSR performance consistently surpasses the OSR performance by a large margin for the same method. We show that the reason lies in the known but wrongly classified samples, as their uncertainty distribution is extremely close to unknown samples rather than known and correctly classified samples. Second, we analyze how the two training settings of OSR (i.e., pre-training and outlier exposure) influence the UOSR. 
We find that although they are both beneficial for distinguishing known and correctly classified samples from unknown samples, pre-training is also helpful for identifying known but wrongly classified samples, while outlier exposure is not. In addition to different training settings, we also formulate a new evaluation setting for UOSR which is called few-shot UOSR, where only one or five samples per unknown class are available during evaluation to help identify unknown samples. We propose FS-KNNS for the few-shot UOSR to achieve state-of-the-art performance under all settings. ## Keyword: raw image There is no result
process
new submissions for thu feb keyword events a dynamic graph cnn with cross representation distillation for event based recognition authors yongjian deng hao chen bochen xie hai liu youfu li subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract it is a popular solution to convert events into dense frame based representations to use the well pretrained cnns in hand although with appealing performance this line of work sacrifices the sparsity temporal precision of events and usually necessitates heavy weight models thereby largely weakening the advantages and real life application potential of event cameras a more application friendly way is to design deep graph models for learning sparse point based representations from events yet the efficacy of these graph models is far behind the frame based counterpart with two key limitations i simple graph construction strategies without carefully integrating the variant attributes i e semantics spatial and temporal coordinates for each vertex leading to biased graph representation ii deficient learning because the lack of well pretraining models available here we solve the first problem by introducing a new event based graph cnn edgcn with a dynamic aggregation module to integrate all attributes of vertices adaptively to alleviate the learning difficulty we propose to leverage the dense representation counterpart of events as a cross representation auxiliary to supply additional supervision and prior knowledge for the event graph to this end we form a frame to graph transfer learning framework with a customized hybrid distillation loss to well respect the varying cross representation gaps across layers extensive experiments on multiple vision tasks validate the effectiveness and high generalization ability of our proposed model and distillation strategy core components of our codes are submitted with supplementary material and will be made publicly available upon acceptance keyword event camera 
even an event based framework for monocular depth estimation at adverse night conditions authors peilun shi jiachuan peng jianing qiu xinwei ju frank po wen lo benny lo subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract accurate depth estimation under adverse night conditions has practical impact and applications such as on autonomous driving and rescue robots in this work we studied monocular depth estimation at night time in which various adverse weather light and different road conditions exist with data captured in both rgb and event modalities event camera can better capture intensity changes by virtue of its high dynamic range hdr which is particularly suitable to be applied at adverse night conditions in which the amount of light is limited in the scene although event data can retain visual perception that conventional rgb camera may fail to capture the lack of texture and color information of event data hinders its applicability to accurately estimate depth alone to tackle this problem we propose an event vision based framework that integrates low light enhancement for the rgb source and exploits the complementary merits of rgb and event data a dataset that includes paired rgb and event streams and ground truth depth maps has been constructed comprehensive experiments have been conducted and the impact of different adverse weather combinations on the performance of framework has also been investigated the results have shown that our proposed framework can better estimate monocular depth at adverse nights than six baselines a dynamic graph cnn with cross representation distillation for event based recognition authors yongjian deng hao chen bochen xie hai liu youfu li subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract it is a popular solution to convert events into dense frame based representations to use the well pretrained cnns in hand although with appealing performance this line of work 
sacrifices the sparsity temporal precision of events and usually necessitates heavy weight models thereby largely weakening the advantages and real life application potential of event cameras a more application friendly way is to design deep graph models for learning sparse point based representations from events yet the efficacy of these graph models is far behind the frame based counterpart with two key limitations i simple graph construction strategies without carefully integrating the variant attributes i e semantics spatial and temporal coordinates for each vertex leading to biased graph representation ii deficient learning because the lack of well pretraining models available here we solve the first problem by introducing a new event based graph cnn edgcn with a dynamic aggregation module to integrate all attributes of vertices adaptively to alleviate the learning difficulty we propose to leverage the dense representation counterpart of events as a cross representation auxiliary to supply additional supervision and prior knowledge for the event graph to this end we form a frame to graph transfer learning framework with a customized hybrid distillation loss to well respect the varying cross representation gaps across layers extensive experiments on multiple vision tasks validate the effectiveness and high generalization ability of our proposed model and distillation strategy core components of our codes are submitted with supplementary material and will be made publicly available upon acceptance keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp there is no result keyword image signal processing there is no result keyword image signal process there is no result keyword compression hyperspectral image compression using implicit neural representation authors shima rezasoltani faisal z qureshi subjects computer vision and pattern recognition cs cv 
image and video processing eess iv arxiv link pdf link abstract hyperspectral images which record the electromagnetic spectrum for a pixel in the image of a scene often store hundreds of channels per pixel and contain an order of magnitude more information than a typical similarly sized color image consequently concomitant with the decreasing cost of capturing these images there is a need to develop efficient techniques for storing transmitting and analyzing hyperspectral images this paper develops a method for hyperspectral image compression using implicit neural representations where a multilayer perceptron network phi theta with sinusoidal activation functions learns to map pixel locations to pixel intensities for a given hyperspectral image i phi theta thus acts as a compressed encoding of this image the original image is reconstructed by evaluating phi theta at each pixel location we have evaluated our method on four benchmarks indian pines cuprite pavia university and jasper ridge and we show the proposed method achieves better compression than jpeg pca dct and hvec at low bitrates keyword raw the devil is in the wrongly classified samples towards unified open set recognition authors jun cen di luan shiwei zhang yixuan pei yingya zhang deli zhao shaojie shen qifeng chen subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract open set recognition osr aims to identify test samples whose classes are not seen during the training process recently unified open set recognition uosr has been proposed to reject not only unknown samples but also known but wrongly classified samples which tends to be more practical in real world applications the uosr draws little attention since it is proposed but we find sometimes it is even more practical than osr in the real world applications as evaluation results of known but wrongly classified samples are also wrong like unknown samples in this paper we deeply analyze the uosr task under different 
training and evaluation settings to shed light on this promising research direction for this purpose we first evaluate the uosr performance of several osr methods and show a significant finding that the uosr performance consistently surpasses the osr performance by a large margin for the same method we show that the reason lies in the known but wrongly classified samples as their uncertainty distribution is extremely close to unknown samples rather than known and correctly classified samples second we analyze how the two training settings of osr i e pre training and outlier exposure influence the uosr we find although they are both beneficial for distinguishing known and correctly classified samples from unknown samples pre training is also helpful for identifying known but wrongly classified samples while outlier exposure is not in addition to different training settings we also formulate a new evaluation setting for uosr which is called few shot uosr where only one or five samples per unknown class are available during evaluation to help identify unknown samples we propose fs knns for the few shot uosr to achieve state of the art performance under all settings keyword raw image there is no result
1
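The hyperspectral-compression abstract above fits a coordinate network with sinusoidal activations to pixel intensities and stores the network weights as the compressed code. A minimal sketch of that idea, using a fixed random sinusoidal feature layer with a least-squares readout in place of the paper's fully trained MLP (the signal, sizes, and frequency scale here are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 1-D signal standing in for pixel intensities.
coords = np.linspace(0.0, 1.0, 200)[:, None]  # pixel locations
signal = np.sin(6 * np.pi * coords[:, 0]) + 0.5 * np.cos(14 * np.pi * coords[:, 0])

# Fixed random sinusoidal features stand in for the first trained layer.
B = rng.normal(scale=20.0, size=(1, 128))  # random frequencies (assumed scale)
features = np.concatenate([np.sin(coords @ B), np.cos(coords @ B)], axis=1)

# Fit the linear readout by least squares; the weights are the "compressed" code.
w, *_ = np.linalg.lstsq(features, signal, rcond=None)

# Reconstruct by evaluating the network at each pixel location.
recon = features @ w
err = float(np.mean((recon - signal) ** 2))
```

Storing the random seed for `B` plus the readout weights `w` instead of all samples is the compression; the actual method trains a full MLP and handles hundreds of spectral channels per pixel.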
5,858
8,681,529,412
IssuesEvent
2018-12-01 21:08:52
nicolas2lee/Big-data-architecture
https://api.github.com/repos/nicolas2lee/Big-data-architecture
opened
Study of spark
processing engine
Study of spark as a big data solution: - spark sql for batch - spark streaming for real time - spark ml for data science
1.0
Study of spark - Study of spark as a big data solution: - spark sql for batch - spark streaming for real time - spark ml for data science
process
study of spark study of spark as a big data solution spark sql for batch spark streaming for real time spark ml for data science
1
260,620
22,635,636,616
IssuesEvent
2022-06-30 18:40:50
broadinstitute/gatk
https://api.github.com/repos/broadinstitute/gatk
closed
The Utils.concat() methods need unit tests
tests
The various overloads of `Utils.concat()` are not covered by unit tests.
1.0
The Utils.concat() methods need unit tests - The various overloads of `Utils.concat()` are not covered by unit tests.
non_process
the utils concat methods need unit tests the various overloads of utils concat are not covered by unit tests
0
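The missing coverage described in the `Utils.concat()` issue above is easy to sketch. The real tests would be JUnit, since GATK is Java; the sketch below uses Python with a stand-in `concat` (its name and behavior are assumptions for illustration, not taken from GATK) just to show the cases such tests typically cover: no inputs, empty inputs, a single input, ordering across inputs, and rejection of null inputs.

```python
def concat(*iterables):
    """Stand-in for Utils.concat(): flatten the inputs into one list."""
    result = []
    for it in iterables:
        if it is None:
            raise ValueError("input may not be null")
        result.extend(it)
    return result

def test_concat_empty():
    assert concat() == []
    assert concat([], []) == []

def test_concat_single():
    assert concat([1, 2, 3]) == [1, 2, 3]

def test_concat_preserves_order():
    assert concat([1, 2], [3], [4, 5]) == [1, 2, 3, 4, 5]

def test_concat_rejects_none():
    try:
        concat([1], None)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for null input")
```

Each overload of the real method would get the same treatment, one test per behavior rather than one catch-all test.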
13,155
15,573,545,114
IssuesEvent
2021-03-17 08:44:22
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Try only running relevant tests in GitHub Actions CI
kind/improvement process/candidate team/migrations topic: internal topic: tests
## Problem GitHub Actions CI runs for every commit / PR all the tests relevant or not. (except if the file changed are in the path ignored). ## Suggested solution We could run CI only on relevant/changed tests and projects in our workflow like with this example: https://github.com/planetscale/vitess-framework-testing/blob/master/.github/workflows/test.yml Might be interesting for us in places like e2e tests, examples but also CLI etc. It calculates what needs to run based on changed files with jitterbit/get-changed-files We can probably adapt this to our needs and try a simple and basic approach first and increase complexity later if needed, and default to running all tests.
1.0
Try only running relevant tests in GitHub Actions CI - ## Problem GitHub Actions CI runs for every commit / PR all the tests relevant or not. (except if the file changed are in the path ignored). ## Suggested solution We could run CI only on relevant/changed tests and projects in our workflow like with this example: https://github.com/planetscale/vitess-framework-testing/blob/master/.github/workflows/test.yml Might be interesting for us in places like e2e tests, examples but also CLI etc. It calculates what needs to run based on changed files with jitterbit/get-changed-files We can probably adapt this to our needs and try a simple and basic approach first and increase complexity later if needed, and default to running all tests.
process
try only running relevant tests in github actions ci problem github actions ci runs for every commit pr all the tests relevant or not except if the file changed are in the path ignored suggested solution we could run ci only on relevant changed tests and projects in our workflow like with this example might be interesting for us in places like tests examples but also cli etc it calculates what needs to run based on changed files with jitterbit get changed files we can probably adapt this to our needs and try a simple and basic approach first and increase complexity later if needed and default to running all tests
1
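The selection logic that CI issue suggests — map changed file paths to the projects whose tests need to run, and default to running everything — can be sketched as below. The directory names and the project-to-suite mapping are made up for illustration; in the real workflow the changed-file list would come from an action such as jitterbit/get-changed-files.

```python
# Map of project directories to the test suites they own (illustrative only).
PROJECT_TESTS = {
    "packages/cli/": "cli-tests",
    "packages/client/": "client-tests",
    "packages/migrate/": "migrate-tests",
}

def suites_to_run(changed_files):
    """Return the test suites affected by the changed files.

    Falls back to running everything when a file matches no known project,
    mirroring the issue's suggestion to default to running all tests.
    """
    suites = set()
    for path in changed_files:
        for prefix, suite in PROJECT_TESTS.items():
            if path.startswith(prefix):
                suites.add(suite)
                break
        else:
            # Unknown file (e.g. shared config): be safe and run all suites.
            return sorted(PROJECT_TESTS.values())
    return sorted(suites)
```

Starting with this simple prefix matching and only later adding real dependency analysis matches the issue's "basic approach first, increase complexity later" plan.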
13,280
15,760,659,601
IssuesEvent
2021-03-31 09:11:48
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
Cannot migrate relational field from optional to required with @default(cuid())
bug/2-confirmed kind/bug process/candidate status/needs-action team/migrations
Database: PostgreSQL (Digital Ocean) Unable to convert an optional field to a required field: ``` model Dentist { id String @default(cuid()) @id name String account Account? } ``` migrate to ``` model Dentist { id String @default(cuid()) @id name String account Account } ``` Lift gives me the following prompt: ```You are about to alter the column `account` on the `Dentist` table, which still contains 3 non-null values. The data in that column will be lost.``` After confirming that this is ok (y), lift attempts to run and throws the following error: ``` thread 'tokio-runtime-worker-1' panicked at 'called `Option::unwrap()` on a `None` value', src/libcore/option.rs:378:21 ```
1.0
Cannot migrate relational field from optional to required with @default(cuid()) - Database: PostgreSQL (Digital Ocean) Unable to convert an optional field to a required field: ``` model Dentist { id String @default(cuid()) @id name String account Account? } ``` migrate to ``` model Dentist { id String @default(cuid()) @id name String account Account } ``` Lift gives me the following prompt: ```You are about to alter the column `account` on the `Dentist` table, which still contains 3 non-null values. The data in that column will be lost.``` After confirming that this is ok (y), lift attempts to run and throws the following error: ``` thread 'tokio-runtime-worker-1' panicked at 'called `Option::unwrap()` on a `None` value', src/libcore/option.rs:378:21 ```
process
cannot migrate relational field from optional to required with default cuid database postgresql digital ocean unable to convert an optional field to a required field model dentist id string default cuid id name string account account migrate to model dentist id string default cuid id name string account account lift gives me the following prompt you are about to alter the column account on the dentist table which still contains non null values the data in that column will be lost after confirming that this is ok y lift attempts to run and throws the following error thread tokio runtime worker panicked at called option unwrap on a none value src libcore option rs
1
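Independent of the panic itself, making an optional column required without losing the existing rows generally takes a backfill before the constraint change. A sketch of the statements such a migration could emit (the table and column names are taken from the issue; the two-step SQL generation is illustrative and not what Prisma's Lift actually produces):

```python
def required_column_migration(table, column, fill_value):
    """Generate SQL to make a nullable column NOT NULL without data loss:
    backfill NULL rows first, then tighten the constraint."""
    return [
        f"UPDATE \"{table}\" SET \"{column}\" = '{fill_value}' WHERE \"{column}\" IS NULL;",
        f"ALTER TABLE \"{table}\" ALTER COLUMN \"{column}\" SET NOT NULL;",
    ]
```

Run against the issue's schema, `required_column_migration("Dentist", "account", ...)` would fill the 3 non-null-violating rows before altering the column, instead of dropping their data.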
15,226
19,098,994,905
IssuesEvent
2021-11-29 20:01:40
pda0004/Case-Studies-Case-2-Team-1
https://api.github.com/repos/pda0004/Case-Studies-Case-2-Team-1
closed
Look into this research paper and write out a general description of how exactly certain factors affects the yield of certain crops
Data Processing/Augmentation
Paper: https://www.researchgate.net/publication/342994002_Factors_Affecting_Yield_of_Crops
1.0
Look into this research paper and write out a general description of how exactly certain factors affects the yield of certain crops - Paper: https://www.researchgate.net/publication/342994002_Factors_Affecting_Yield_of_Crops
process
look into this research paper and write out a general description of how exactly certain factors affects the yield of certain crops paper
1
15,555
19,703,503,229
IssuesEvent
2022-01-12 19:07:59
googleapis/java-private-catalog
https://api.github.com/repos/googleapis/java-private-catalog
opened
Your .repo-metadata.json file has a problem πŸ€’
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan πŸ“ˆ: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'private-catalog' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem πŸ€’ - You have a problem with your .repo-metadata.json file: Result of scan πŸ“ˆ: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'private-catalog' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem πŸ€’ you have a problem with your repo metadata json file result of scan πŸ“ˆ release level must be equal to one of the allowed values in repo metadata json api shortname private catalog invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
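The two lint findings in the `.repo-metadata.json` issue — a `release_level` outside the allowed values and an invalid `api_shortname` — can be checked mechanically. A sketch of such a check (the allowed values and the shortname rule below are assumptions for illustration, not the actual google-cloud lint rules):

```python
import re

ALLOWED_RELEASE_LEVELS = {"stable", "preview"}  # assumed allowed values
SHORTNAME_RE = re.compile(r"^[a-z0-9]+$")       # assumed rule: no hyphens

def lint_repo_metadata(metadata):
    """Return a list of problems found in a parsed .repo-metadata.json dict."""
    problems = []
    if metadata.get("release_level") not in ALLOWED_RELEASE_LEVELS:
        problems.append("release_level must be equal to one of the allowed values")
    shortname = metadata.get("api_shortname", "")
    if not SHORTNAME_RE.match(shortname):
        problems.append(f"api_shortname '{shortname}' invalid")
    return problems
```

Under these assumed rules, the repo's `'private-catalog'` shortname would trip the second check because of the hyphen, matching the report above.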
194,876
15,442,521,169
IssuesEvent
2021-03-08 07:51:37
AirReps/website-content
https://api.github.com/repos/AirReps/website-content
opened
Typo in frequency graph image
documentation
Typo in AirPod Pro frequency graph. "ANC Comparios" --> "ANC Comparison" https://docs.airreps.info/docs/soundQuality-apple#airpods-pro
1.0
Typo in frequency graph image - Typo in AirPod Pro frequency graph. "ANC Comparios" --> "ANC Comparison" https://docs.airreps.info/docs/soundQuality-apple#airpods-pro
non_process
typo in frequency graph image typo in airpod pro frequency graph anc comparios anc comparison
0
10,924
13,725,077,203
IssuesEvent
2020-10-03 16:55:14
Arch666Angel/mods
https://api.github.com/repos/Arch666Angel/mods
opened
bobs electronics and angels industries results in two grey board recipes
Angels Bio Processing Angels Industries Impact: Bug
**Describe the bug** When playing with bobs mods, angels bio processing adds an alternative recipe to obtain wooden boards, however, when angels industries component mode is enabled, these wooden boards get overwritten to grey boards, resulting in two different versions of making grey boards from paper. **Screenshots** ![image](https://user-images.githubusercontent.com/26593477/94997212-f13f4700-05a9-11eb-8353-e41ddb9d7fbf.png) ![image](https://user-images.githubusercontent.com/26593477/94997119-6b230080-05a9-11eb-9ebe-ae63d02dcb18.png) **Additional context** To resolve this, bio processing should only add this alternative wooden board recipe when angels component mode is not active. https://github.com/Arch666Angel/mods/blob/9bf279a59f05b7d485a82881928a72775d1b961d/angelsbioprocessing/prototypes/bio-processing-override.lua#L122-L145
1.0
bobs electronics and angels industries results in two grey board recipes - **Describe the bug** When playing with bobs mods, angels bio processing adds an alternative recipe to obtain wooden boards, however, when angels industries component mode is enabled, these wooden boards get overwritten to grey boards, resulting in two different versions of making grey boards from paper. **Screenshots** ![image](https://user-images.githubusercontent.com/26593477/94997212-f13f4700-05a9-11eb-8353-e41ddb9d7fbf.png) ![image](https://user-images.githubusercontent.com/26593477/94997119-6b230080-05a9-11eb-9ebe-ae63d02dcb18.png) **Additional context** To resolve this, bio processing should only add this alternative wooden board recipe when angels component mode is not active. https://github.com/Arch666Angel/mods/blob/9bf279a59f05b7d485a82881928a72775d1b961d/angelsbioprocessing/prototypes/bio-processing-override.lua#L122-L145
process
bobs electronics and angels industries results in two grey board recipes describe the bug when playing with bobs mods angels bio processing adds an alternative recipe to obtain wooden boards however when angels industries component mode is enabled these wooden boards get overwritten to grey boards resulting in two different versions of making grey boards from paper screenshots additional context to resolve this bio processing should only add this alternative wooden board recipe when angels component mode is not active
1
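The fix proposed in that bug report — add the alternative wooden-board recipe only when Angel's Industries component mode is inactive, so the override never produces a second grey-board recipe — amounts to a guard around the recipe registration. Sketched here in Python for illustration (the mod itself is Lua, and the setting and recipe names below are assumptions):

```python
def register_recipes(settings, recipes):
    """Add the bio-processing wooden-board recipe only when component
    overhaul mode is off, so it is never remapped into a duplicate
    grey-board recipe."""
    if not settings.get("angels-enable-components", False):
        recipes["wooden-board-paper"] = {
            "ingredients": [("paper", 2)],
            "result": "wooden-board",
        }
    return recipes
```

The equivalent Lua change would wrap the override block linked in the issue (`bio-processing-override.lua`, L122-L145) in the same kind of settings check.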