Unnamed: 0
int64
0
832k
id
float64
2.49B
32.1B
type
stringclasses
1 value
created_at
stringlengths
19
19
repo
stringlengths
5
112
repo_url
stringlengths
34
141
action
stringclasses
3 values
title
stringlengths
1
757
labels
stringlengths
4
664
body
stringlengths
3
261k
index
stringclasses
10 values
text_combine
stringlengths
96
261k
label
stringclasses
2 values
text
stringlengths
96
232k
binary_label
int64
0
1
68,155
21,523,148,303
IssuesEvent
2022-04-28 15:49:22
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
-6 Replies in thread
T-Defect
### Steps to reproduce Started a conversation in a room normally. Didn't appear to do anything specia - possibly another user replied to and deleted a message in another thread. Ended up seeing this: ![Screenshot from 2022-04-28 16-31-20](https://user-images.githubusercontent.com/1917473/165789724-3f011ea0-5308-4709-b245-51214f66c8aa.png) ### Outcome #### What did you expect? I don't think this should have been a thread #### What happened instead? It was a -6 message thread :) ### Operating system Ubuntu ### Application version Version: 1.10.10 ### How did you install the app? _No response_ ### Homeserver EMS ### Will you send logs? Yes
1.0
-6 Replies in thread - ### Steps to reproduce Started a conversation in a room normally. Didn't appear to do anything specia - possibly another user replied to and deleted a message in another thread. Ended up seeing this: ![Screenshot from 2022-04-28 16-31-20](https://user-images.githubusercontent.com/1917473/165789724-3f011ea0-5308-4709-b245-51214f66c8aa.png) ### Outcome #### What did you expect? I don't think this should have been a thread #### What happened instead? It was a -6 message thread :) ### Operating system Ubuntu ### Application version Version: 1.10.10 ### How did you install the app? _No response_ ### Homeserver EMS ### Will you send logs? Yes
defect
replies in thread steps to reproduce started a conversation in a room normally didn t appear to do anything specia possibly another user replied to and deleted a message in another thread ended up seeing this outcome what did you expect i don t think this should have been a thread what happened instead it was a message thread operating system ubuntu application version version how did you install the app no response homeserver ems will you send logs yes
1
102,680
16,578,081,156
IssuesEvent
2021-05-31 08:04:31
AlexRogalskiy/weather-time
https://api.github.com/repos/AlexRogalskiy/weather-time
opened
CVE-2021-33502 (High) detected in normalize-url-5.3.0.tgz
security vulnerability
## CVE-2021-33502 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>normalize-url-5.3.0.tgz</b></p></summary> <p>Normalize a URL</p> <p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-5.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-5.3.0.tgz</a></p> <p>Path to dependency file: weather-time/package.json</p> <p>Path to vulnerable library: weather-time/node_modules/normalize-url/package.json</p> <p> Dependency Hierarchy: - npm-7.0.10.tgz (Root Library) - :x: **normalize-url-5.3.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/weather-time/commit/8ff5450ff6ee1f7920e55168365618ced6a9a732">8ff5450ff6ee1f7920e55168365618ced6a9a732</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs. <p>Publish Date: 2021-05-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p> <p>Release Date: 2021-05-24</p> <p>Fix Resolution: normalize-url - 4.5.1, 5.3.1, 6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-33502 (High) detected in normalize-url-5.3.0.tgz - ## CVE-2021-33502 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>normalize-url-5.3.0.tgz</b></p></summary> <p>Normalize a URL</p> <p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-5.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-5.3.0.tgz</a></p> <p>Path to dependency file: weather-time/package.json</p> <p>Path to vulnerable library: weather-time/node_modules/normalize-url/package.json</p> <p> Dependency Hierarchy: - npm-7.0.10.tgz (Root Library) - :x: **normalize-url-5.3.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/weather-time/commit/8ff5450ff6ee1f7920e55168365618ced6a9a732">8ff5450ff6ee1f7920e55168365618ced6a9a732</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs. <p>Publish Date: 2021-05-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p> <p>Release Date: 2021-05-24</p> <p>Fix Resolution: normalize-url - 4.5.1, 5.3.1, 6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in normalize url tgz cve high severity vulnerability vulnerable library normalize url tgz normalize a url library home page a href path to dependency file weather time package json path to vulnerable library weather time node modules normalize url package json dependency hierarchy npm tgz root library x normalize url tgz vulnerable library found in head commit a href vulnerability details the normalize url package before x before and x before for node js has a redos regular expression denial of service issue because it has exponential performance for data urls publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution normalize url step up your open source security game with whitesource
0
16,616
2,920,435,335
IssuesEvent
2015-06-24 18:55:24
ashanbh/chrome-rest-client
https://api.github.com/repos/ashanbh/chrome-rest-client
closed
Payload sometimes doesn't update correctly
auto-migrated Priority-Medium Type-Defect
``` I'm using your REST client to test an API for my code. I had a strange situation where I entered a valid payload and the request failed. I then entered an invalid payload and the request succeeded. I tried this several times, and each time the result was opposite to what I expected. When I examined my API log I could see that the client was sending the opposite of what it appeared to be sending i.e when I edited the payload to be invalid the payload that was actually sent was valid, and vice versa. I solved the problem by changing the payload and then clicking through the payload editor tabs. That seemed to clear the error. It was as if the client was getting stuck. ``` Original issue reported on code.google.com by `Paul.Ne...@gmail.com` on 6 Dec 2012 at 4:19
1.0
Payload sometimes doesn't update correctly - ``` I'm using your REST client to test an API for my code. I had a strange situation where I entered a valid payload and the request failed. I then entered an invalid payload and the request succeeded. I tried this several times, and each time the result was opposite to what I expected. When I examined my API log I could see that the client was sending the opposite of what it appeared to be sending i.e when I edited the payload to be invalid the payload that was actually sent was valid, and vice versa. I solved the problem by changing the payload and then clicking through the payload editor tabs. That seemed to clear the error. It was as if the client was getting stuck. ``` Original issue reported on code.google.com by `Paul.Ne...@gmail.com` on 6 Dec 2012 at 4:19
defect
payload sometimes doesn t update correctly i m using your rest client to test an api for my code i had a strange situation where i entered a valid payload and the request failed i then entered an invalid payload and the request succeeded i tried this several times and each time the result was opposite to what i expected when i examined my api log i could see that the client was sending the opposite of what it appeared to be sending i e when i edited the payload to be invalid the payload that was actually sent was valid and vice versa i solved the problem by changing the payload and then clicking through the payload editor tabs that seemed to clear the error it was as if the client was getting stuck original issue reported on code google com by paul ne gmail com on dec at
1
288,134
31,857,032,880
IssuesEvent
2023-09-15 08:14:02
nidhi7598/linux-4.19.72_CVE-2022-3564
https://api.github.com/repos/nidhi7598/linux-4.19.72_CVE-2022-3564
closed
CVE-2019-19462 (Medium) detected in linuxlinux-4.19.294 - autoclosed
Mend: dependency security vulnerability
## CVE-2019-19462 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72_CVE-2022-3564/commit/454c7dacf6fa9a6de86d4067f5a08f25cffa519b">454c7dacf6fa9a6de86d4067f5a08f25cffa519b</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/relay.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/relay.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> relay_open in kernel/relay.c in the Linux kernel through 5.4.1 allows local users to cause a denial of service (such as relay blockage) by triggering a NULL alloc_percpu result. <p>Publish Date: 2019-11-30 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19462>CVE-2019-19462</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19462">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19462</a></p> <p>Release Date: 2019-11-30</p> <p>Fix Resolution: v5.8-rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-19462 (Medium) detected in linuxlinux-4.19.294 - autoclosed - ## CVE-2019-19462 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72_CVE-2022-3564/commit/454c7dacf6fa9a6de86d4067f5a08f25cffa519b">454c7dacf6fa9a6de86d4067f5a08f25cffa519b</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/relay.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/relay.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> relay_open in kernel/relay.c in the Linux kernel through 5.4.1 allows local users to cause a denial of service (such as relay blockage) by triggering a NULL alloc_percpu result. <p>Publish Date: 2019-11-30 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19462>CVE-2019-19462</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19462">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19462</a></p> <p>Release Date: 2019-11-30</p> <p>Fix Resolution: v5.8-rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files kernel relay c kernel relay c vulnerability details relay open in kernel relay c in the linux kernel through allows local users to cause a denial of service such as relay blockage by triggering a null alloc percpu result publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
329,572
28,290,794,481
IssuesEvent
2023-04-09 07:01:17
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
closed
Fix linalg.test_tensorflow_slogdet
TensorFlow Frontend Sub Task Failing Test
| | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4614073491/jobs/8156699920" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/4614073491/jobs/8156699920" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4614073491/jobs/8156699920" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/4614073491/jobs/8156699920" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> <details> <summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_linalg.py::test_tensorflow_slogdet[cpu-ivy.functional.backends.numpy-False-False]</summary> 2023-04-05T02:10:21.0056938Z E AssertionError: the results from backend numpy and ground truth framework tensorflow do not match 2023-04-05T02:10:21.0057706Z E -80.40653991699219!=-80.40507507324219 2023-04-05T02:10:21.0057957Z E 2023-04-05T02:10:21.0058172Z E 2023-04-05T02:10:21.0058585Z E Falsifying example: test_tensorflow_slogdet( 2023-04-05T02:10:21.0059730Z E dtype_and_x=(['float32'], [array([[-1.7632415e-38, -6.0185311e-36], 2023-04-05T02:10:21.0060242Z E [-2.0000000e+00, -1.0000000e+00]], dtype=float32)]), 2023-04-05T02:10:21.0060602Z E test_flags=FrontendFunctionTestFlags( 2023-04-05T02:10:21.0060930Z E num_positional_args=0, 2023-04-05T02:10:21.0061211Z E with_out=False, 2023-04-05T02:10:21.0061481Z E inplace=False, 2023-04-05T02:10:21.0061760Z E as_variable=[False], 2023-04-05T02:10:21.0062046Z E native_arrays=[False], 2023-04-05T02:10:21.0062605Z E generate_frontend_arrays=False, 2023-04-05T02:10:21.0062889Z E ), 2023-04-05T02:10:21.0063626Z E fn_tree='ivy.functional.frontends.tensorflow.linalg.slogdet', 2023-04-05T02:10:21.0064074Z E frontend='tensorflow', 2023-04-05T02:10:21.0064387Z E on_device='cpu', 2023-04-05T02:10:21.0064622Z E ) 2023-04-05T02:10:21.0065321Z E 2023-04-05T02:10:21.0066111Z E You can reproduce this example by temporarily adding @reproduce_failure('6.70.2', b'AXicY2BkAALG4xcYwDSY7cAAA4xQmgmJDaYBcXUCqA==') as a decorator on your test case </details>
1.0
Fix linalg.test_tensorflow_slogdet - | | | |---|---| |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4614073491/jobs/8156699920" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/4614073491/jobs/8156699920" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4614073491/jobs/8156699920" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/4614073491/jobs/8156699920" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a> <details> <summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_linalg.py::test_tensorflow_slogdet[cpu-ivy.functional.backends.numpy-False-False]</summary> 2023-04-05T02:10:21.0056938Z E AssertionError: the results from backend numpy and ground truth framework tensorflow do not match 2023-04-05T02:10:21.0057706Z E -80.40653991699219!=-80.40507507324219 2023-04-05T02:10:21.0057957Z E 2023-04-05T02:10:21.0058172Z E 2023-04-05T02:10:21.0058585Z E Falsifying example: test_tensorflow_slogdet( 2023-04-05T02:10:21.0059730Z E dtype_and_x=(['float32'], [array([[-1.7632415e-38, -6.0185311e-36], 2023-04-05T02:10:21.0060242Z E [-2.0000000e+00, -1.0000000e+00]], dtype=float32)]), 2023-04-05T02:10:21.0060602Z E test_flags=FrontendFunctionTestFlags( 2023-04-05T02:10:21.0060930Z E num_positional_args=0, 2023-04-05T02:10:21.0061211Z E with_out=False, 2023-04-05T02:10:21.0061481Z E inplace=False, 2023-04-05T02:10:21.0061760Z E as_variable=[False], 2023-04-05T02:10:21.0062046Z E native_arrays=[False], 2023-04-05T02:10:21.0062605Z E generate_frontend_arrays=False, 2023-04-05T02:10:21.0062889Z E ), 2023-04-05T02:10:21.0063626Z E fn_tree='ivy.functional.frontends.tensorflow.linalg.slogdet', 2023-04-05T02:10:21.0064074Z E frontend='tensorflow', 2023-04-05T02:10:21.0064387Z E on_device='cpu', 2023-04-05T02:10:21.0064622Z E ) 2023-04-05T02:10:21.0065321Z E 2023-04-05T02:10:21.0066111Z E You can reproduce this example by temporarily adding @reproduce_failure('6.70.2', b'AXicY2BkAALG4xcYwDSY7cAAA4xQmgmJDaYBcXUCqA==') as a decorator on your test case </details>
non_defect
fix linalg test tensorflow slogdet tensorflow img src torch img src numpy img src jax img src failed ivy tests test ivy test frontends test tensorflow test linalg py test tensorflow slogdet e assertionerror the results from backend numpy and ground truth framework tensorflow do not match e e e e falsifying example test tensorflow slogdet e dtype and x e dtype e test flags frontendfunctiontestflags e num positional args e with out false e inplace false e as variable e native arrays e generate frontend arrays false e e fn tree ivy functional frontends tensorflow linalg slogdet e frontend tensorflow e on device cpu e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case
0
71,025
23,415,122,474
IssuesEvent
2022-08-12 23:06:09
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
closed
zfs is reporting dataset as mounted even though mountpoint is set to none
Type: Defect Status: Stale
### System information $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=18.04 DISTRIB_CODENAME=bionic DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS" Commands to find ZFS/SPL versions: $ modinfo zfs | grep -iw version version: 0.7.12-1ubuntu5 k8s@k8s-node2:~$ modinfo spl | grep -iw version version: 0.7.12-1ubuntu3 ### Describe the problem you're observing zfs is reporting dataset as mounted even though mountpoint is set to none ### Describe how to reproduce the problem I am the developer and maintainer of the zfs-loalpv project (https://github.com/openebs/zfs-localpv). It is a k8s CSI driver which provisions the volumes on a ZFS storage. While doing scale testing, I restarted around 200 pods and out of all, one pod was having issue. ``` $ sudo zfs get all zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 | grep mount zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 mounted yes - zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 mountpoint none local zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 canmount on default ``` here I see that mounted as yes but mountpoint is set to none. How is that possible? the system is reporting that it is mounted ``` $ sudo mount | grep pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 on /var/lib/kubelet/pods/3dbd0a03-c7e5-44f1-9f94-407f6ac96316/volumes/kubernetes.io~csi/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765/mount type zfs (rw,xattr,noacl) ``` The ZFS_LocalPV uses `zfs set mountpoint=none` to umount the ZFS dataset. But in the above case, the mountpoint was set to none but still dataset was mounted? When I set the mountpoint to none again for the same volume, the dataset is getting umounted and everything is normal. ``` $ sudo zfs set mountpoint=none zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 k8s@k8s-node2:~$ sudo zfs get all zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 | grep mount zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 mounted no - zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 mountpoint none local zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 canmount on default ``` Please note that I have ~200 volumes and pods are using them. I deleted all the pods at the same time and only one pod is showing this issue, all other pods are fine. As a part the tremination of the pod the driver will unmount the dataset via zfs set mountpoint=none.
1.0
zfs is reporting dataset as mounted even though mountpoint is set to none - ### System information $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=18.04 DISTRIB_CODENAME=bionic DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS" Commands to find ZFS/SPL versions: $ modinfo zfs | grep -iw version version: 0.7.12-1ubuntu5 k8s@k8s-node2:~$ modinfo spl | grep -iw version version: 0.7.12-1ubuntu3 ### Describe the problem you're observing zfs is reporting dataset as mounted even though mountpoint is set to none ### Describe how to reproduce the problem I am the developer and maintainer of the zfs-loalpv project (https://github.com/openebs/zfs-localpv). It is a k8s CSI driver which provisions the volumes on a ZFS storage. While doing scale testing, I restarted around 200 pods and out of all, one pod was having issue. ``` $ sudo zfs get all zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 | grep mount zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 mounted yes - zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 mountpoint none local zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 canmount on default ``` here I see that mounted as yes but mountpoint is set to none. How is that possible? the system is reporting that it is mounted ``` $ sudo mount | grep pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 on /var/lib/kubelet/pods/3dbd0a03-c7e5-44f1-9f94-407f6ac96316/volumes/kubernetes.io~csi/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765/mount type zfs (rw,xattr,noacl) ``` The ZFS_LocalPV uses `zfs set mountpoint=none` to umount the ZFS dataset. But in the above case, the mountpoint was set to none but still dataset was mounted? When I set the mountpoint to none again for the same volume, the dataset is getting umounted and everything is normal. ``` $ sudo zfs set mountpoint=none zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 k8s@k8s-node2:~$ sudo zfs get all zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 | grep mount zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 mounted no - zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 mountpoint none local zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 canmount on default ``` Please note that I have ~200 volumes and pods are using them. I deleted all the pods at the same time and only one pod is showing this issue, all other pods are fine. As a part the tremination of the pod the driver will unmount the dataset via zfs set mountpoint=none.
defect
zfs is reporting dataset as mounted even though mountpoint is set to none system information cat etc lsb release distrib id ubuntu distrib release distrib codename bionic distrib description ubuntu lts commands to find zfs spl versions modinfo zfs grep iw version version modinfo spl grep iw version version describe the problem you re observing zfs is reporting dataset as mounted even though mountpoint is set to none describe how to reproduce the problem i am the developer and maintainer of the zfs loalpv project it is a csi driver which provisions the volumes on a zfs storage while doing scale testing i restarted around pods and out of all one pod was having issue sudo zfs get all zfspv pool pvc grep mount zfspv pool pvc mounted yes zfspv pool pvc mountpoint none local zfspv pool pvc canmount on default here i see that mounted as yes but mountpoint is set to none how is that possible the system is reporting that it is mounted sudo mount grep pvc zfspv pool pvc on var lib kubelet pods volumes kubernetes io csi pvc mount type zfs rw xattr noacl the zfs localpv uses zfs set mountpoint none to umount the zfs dataset but in the above case the mountpoint was set to none but still dataset was mounted when i set the mountpoint to none again for the same volume the dataset is getting umounted and everything is normal sudo zfs set mountpoint none zfspv pool pvc sudo zfs get all zfspv pool pvc grep mount zfspv pool pvc mounted no zfspv pool pvc mountpoint none local zfspv pool pvc canmount on default please note that i have volumes and pods are using them i deleted all the pods at the same time and only one pod is showing this issue all other pods are fine as a part the tremination of the pod the driver will unmount the dataset via zfs set mountpoint none
1
342
2,533,092,322
IssuesEvent
2015-01-23 20:38:55
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
improve accuracy of scipy.stats.rayleigh distribution for large x
defect easy-fix scipy.stats
Currently the sf-isf round trip for large x return wrong values as shown here: In [85]: rayleigh.isf(rayleigh.sf(9,1),1) Out[85]: 9.0000752648062754 By replacing exp(x)-1 with expm1(x) and log(1+x) with log1p(x) one can improve the accuracy of the rayleigh distribution (_cdf, _sf, and _ppf) to machine precision for large x. Replacing _cdf and _ppf with def _cdf(self, r): return - expm1(-r * r / 2.0) def _sf(self, r): return exp(-r * r / 2.0) def _isf(self, q): return sqrt(-2 * log(q)) def _ppf(self, q): return sqrt(-2 * log1p(-q)) one acheives In [86]: rayleigh.isf(rayleigh.sf(9,1),1) Out[86]: 9.0
1.0
improve accuracy of scipy.stats.rayleigh distribution for large x - Currently the sf-isf round trip for large x return wrong values as shown here: In [85]: rayleigh.isf(rayleigh.sf(9,1),1) Out[85]: 9.0000752648062754 By replacing exp(x)-1 with expm1(x) and log(1+x) with log1p(x) one can improve the accuracy of the rayleigh distribution (_cdf, _sf, and _ppf) to machine precision for large x. Replacing _cdf and _ppf with def _cdf(self, r): return - expm1(-r * r / 2.0) def _sf(self, r): return exp(-r * r / 2.0) def _isf(self, q): return sqrt(-2 * log(q)) def _ppf(self, q): return sqrt(-2 * log1p(-q)) one acheives In [86]: rayleigh.isf(rayleigh.sf(9,1),1) Out[86]: 9.0
defect
improve accuracy of scipy stats rayleigh distribution for large x currently the sf isf round trip for large x return wrong values as shown here in rayleigh isf rayleigh sf out by replacing exp x with x and log x with x one can improve the accuracy of the rayleigh distribution cdf sf and ppf to machine precision for large x replacing cdf and ppf with def cdf self r return r r def sf self r return exp r r def isf self q return sqrt log q def ppf self q return sqrt q one acheives in rayleigh isf rayleigh sf out
1
76,747
26,575,799,878
IssuesEvent
2023-01-21 20:02:33
dkfans/keeperfx
https://api.github.com/repos/dkfans/keeperfx
closed
quick_objective in multiplayer shows up just for red
Type-Defect
If you add a QUICK_OBJECTIVE to a multiplayer map, only red gets the ?-button. All players should see it.
1.0
quick_objective in multiplayer shows up just for red - If you add a QUICK_OBJECTIVE to a multiplayer map, only red gets the ?-button. All players should see it.
defect
quick objective in multiplayer shows up just for red if you add a quick objective to a multiplayer map only red gets the button all players should see it
1
538,776
15,778,132,286
IssuesEvent
2021-04-01 07:17:52
ubuntu/yaru
https://api.github.com/repos/ubuntu/yaru
closed
Keep track of upstream adwaita-icon-theme
Area: GitHub Actions Priority: Enhancement
Since Yaru is now syncing with upstream adwaita-gtk and gnome-shell theme, I think it would be a good idea to also keep an eye on upstream adwaita-icon-theme. adwaita-icon-theme mostly contains symbolic and action icons that correspond to code changes in gnome-shell, gnome-control-center and the rest of gnome apps. In every gnome development cycle, adwaita-icon-theme adds, fixes, or enhances its symbolic icons in order to reflect the code changes in the gnome stack. adwaita-icon-theme does not contain app icons since gnome apps ship their own icons. **An example to elaborate:** In the 3.33 development cycle, gnome added new battery icons in order to better represent the battery charge states: https://gitlab.gnome.org/GNOME/adwaita-icon-theme/issues/6 That has resulted in: - 21 new battery icons and moving the legacy battery icons to a legacy folder: https://gitlab.gnome.org/GNOME/adwaita-icon-theme/commit/377044f755c3ec994f661d787f50e633487b929d - Code changes in gnome-shell to use the new icons, if available, and fall back to the existing icon names for compatibility with older icon themes: https://gitlab.gnome.org/GNOME/gnome-shell/commit/bd18313d125aa1873c21b9cce9bbf81a335e48b0 Without reflecting the new change in Yaru¹, gnome-shell would still be falling back to the legacy icons, and not showing a better charge state as a result. ¹ https://github.com/ubuntu/yaru/issues/1482 **Also, a question:** Does an app (or Yaru itself) fall back to adwaita-icon-theme if a requested icon is not found in Yaru? **Here is the gnome-stencils.svg as a reference:** https://gitlab.gnome.org/GNOME/adwaita-icon-theme/blob/master/src/symbolic/gnome-stencils.svg ![grafik](https://user-images.githubusercontent.com/45657128/65373219-ee153980-dc7a-11e9-86d9-679eb8f40c27.png)
1.0
Keep track of upstream adwaita-icon-theme - Since Yaru is now syncing with upstream adwaita-gtk and gnome-shell theme, I think it would be a good idea to also keep an eye on upstream adwaita-icon-theme. adwaita-icon-theme mostly contains symbolic and action icons that correspond to code changes in gnome-shell, gnome-control-center and the rest of gnome apps. In every gnome development cycle, adwaita-icon-theme adds, fixes, or enhances its symbolic icons in order to reflect the code changes in the gnome stack. adwaita-icon-theme does not contain app icons since gnome apps ship their own icons. **An example to elaborate:** In the 3.33 development cycle, gnome added new battery icons in order to better represent the battery charge states: https://gitlab.gnome.org/GNOME/adwaita-icon-theme/issues/6 That has resulted in: - 21 new battery icons and moving the legacy battery icons to a legacy folder: https://gitlab.gnome.org/GNOME/adwaita-icon-theme/commit/377044f755c3ec994f661d787f50e633487b929d - Code changes in gnome-shell to use the new icons, if available, and fall back to the existing icon names for compatibility with older icon themes: https://gitlab.gnome.org/GNOME/gnome-shell/commit/bd18313d125aa1873c21b9cce9bbf81a335e48b0 Without reflecting the new change in Yaru¹, gnome-shell would still be falling back to the legacy icons, and not showing a better charge state as a result. ¹ https://github.com/ubuntu/yaru/issues/1482 **Also, a question:** Does an app (or Yaru itself) fall back to adwaita-icon-theme if a requested icon is not found in Yaru? **Here is the gnome-stencils.svg as a reference:** https://gitlab.gnome.org/GNOME/adwaita-icon-theme/blob/master/src/symbolic/gnome-stencils.svg ![grafik](https://user-images.githubusercontent.com/45657128/65373219-ee153980-dc7a-11e9-86d9-679eb8f40c27.png)
non_defect
keep track of upstream adwaita icon theme since yaru is now syncing with upstream adwaita gtk and gnome shell theme i think it would be a good idea to also keep an eye on upstream adwaita icon theme adwaita icon theme mostly contains symbolic and action icons that correspond to code changes in gnome shell gnome control center and the rest of gnome apps in every gnome development cycle adwaita icon theme adds fixes or enhances its symbolic icons in order to reflect the code changes in the gnome stack adwaita icon theme does not contain app icons since gnome apps ship their own icons an example to elaborate in the development cycle gnome added new battery icons in order to better represent the battery charge states that has resulted in new battery icons and moving the legacy battery icons to a legacy folder code changes in gnome shell to use the new icons if available and fall back to the existing icon names for compatibility with older icon themes without reflecting the new change in yaru¹ gnome shell would still be falling back to the legacy icons and not showing a better charge state as a result ¹ also a question does an app or yaru itself fall back to adwaita icon theme if a requested icon is not found in yaru here is the gnome stencils svg as a reference
0
28,750
5,348,389,292
IssuesEvent
2017-02-18 04:23:27
amitdholiya/vqmod
https://api.github.com/repos/amitdholiya/vqmod
reopened
error
auto-migrated Priority-Medium Type-Defect
``` Hi I have added an add-on on opencart. it shows following error: Fatal error: Uncaught exception 'ErrorException' with message 'Error: Duplicate column name 'global_group_discount'<br />Error No: 1060<br />ALTER TABLE `product` ADD global_group_discount INT(1) NOT NULL DEFAULT '1'' in /home/foxyeve/public_html/system/database/mysqli.php:41 Stack trace: #0 /home/foxyeve/public_html/vqmod/vqcache/vq2-system_library_db.php(20): DBMySQLi->query('ALTER TABLE `pr...') #1 /home/foxyeve/public_html/vqmod/vqcache/vq2-admin_view_template_common_home.tpl( 38): DB->query('ALTER TABLE `pr...') #2 /home/foxyeve/public_html/vqmod/vqcache/vq2-system_engine_controller.php(82): require('/home/foxyeve/p...') #3 /home/foxyeve/public_html/admin/controller/common/home.php(202): Controller->render() #4 [internal function]: ControllerCommonHome->index() #5 /home/foxyeve/public_html/vqmod/vqcache/vq2-system_engine_front.php(42): call_user_func_array(Array, Array) #6 /home/foxyeve/public_html/vqmod/vqcache/vq2-system_engine_front.php(29): Front->execute(Object(Action)) #7 /home/foxyeve/public_html/admin/index.ph in /home/foxyeve/public_html/system/database/mysqli.php on line 41 developer said this issue is because of vQmod is not installed properly, is it?? Even I have added so many add-ons and they are working fine on this. Kindly tell me so that I can connect to developer regarding this issue. ``` Original issue reported on code.google.com by `nitashan...@gmail.com` on 31 Jul 2014 at 6:54
1.0
error - ``` Hi I have added an add-on on opencart. it shows following error: Fatal error: Uncaught exception 'ErrorException' with message 'Error: Duplicate column name 'global_group_discount'<br />Error No: 1060<br />ALTER TABLE `product` ADD global_group_discount INT(1) NOT NULL DEFAULT '1'' in /home/foxyeve/public_html/system/database/mysqli.php:41 Stack trace: #0 /home/foxyeve/public_html/vqmod/vqcache/vq2-system_library_db.php(20): DBMySQLi->query('ALTER TABLE `pr...') #1 /home/foxyeve/public_html/vqmod/vqcache/vq2-admin_view_template_common_home.tpl( 38): DB->query('ALTER TABLE `pr...') #2 /home/foxyeve/public_html/vqmod/vqcache/vq2-system_engine_controller.php(82): require('/home/foxyeve/p...') #3 /home/foxyeve/public_html/admin/controller/common/home.php(202): Controller->render() #4 [internal function]: ControllerCommonHome->index() #5 /home/foxyeve/public_html/vqmod/vqcache/vq2-system_engine_front.php(42): call_user_func_array(Array, Array) #6 /home/foxyeve/public_html/vqmod/vqcache/vq2-system_engine_front.php(29): Front->execute(Object(Action)) #7 /home/foxyeve/public_html/admin/index.ph in /home/foxyeve/public_html/system/database/mysqli.php on line 41 developer said this issue is because of vQmod is not installed properly, is it?? Even I have added so many add-ons and they are working fine on this. Kindly tell me so that I can connect to developer regarding this issue. ``` Original issue reported on code.google.com by `nitashan...@gmail.com` on 31 Jul 2014 at 6:54
defect
error hi i have added an add on on opencart it shows following error fatal error uncaught exception errorexception with message error duplicate column name global group discount error no alter table product add global group discount int not null default in home foxyeve public html system database mysqli php stack trace home foxyeve public html vqmod vqcache system library db php dbmysqli query alter table pr home foxyeve public html vqmod vqcache admin view template common home tpl db query alter table pr home foxyeve public html vqmod vqcache system engine controller php require home foxyeve p home foxyeve public html admin controller common home php controller render controllercommonhome index home foxyeve public html vqmod vqcache system engine front php call user func array array array home foxyeve public html vqmod vqcache system engine front php front execute object action home foxyeve public html admin index ph in home foxyeve public html system database mysqli php on line developer said this issue is because of vqmod is not installed properly is it even i have added so many add ons and they are working fine on this kindly tell me so that i can connect to developer regarding this issue original issue reported on code google com by nitashan gmail com on jul at
1
317
2,525,206,251
IssuesEvent
2015-01-20 22:54:10
idaholab/moose
https://api.github.com/repos/idaholab/moose
closed
NearestNodeTransfer Bug
C: MOOSE P: normal T: defect
There's a small logic bug in MultiAppNearestNodeTransfer when transferring "from" a multiapp.
1.0
NearestNodeTransfer Bug - There's a small logic bug in MultiAppNearestNodeTransfer when transferring "from" a multiapp.
defect
nearestnodetransfer bug there s a small logic bug in multiappnearestnodetransfer when transferring from a multiapp
1
57,242
15,727,387,676
IssuesEvent
2021-03-29 12:37:10
danmar/testissues
https://api.github.com/repos/danmar/testissues
opened
1.29 reports 1.28 in help (Trac #146)
Incomplete Migration Migrated from Trac Other defect noone
Migrated from https://trac.cppcheck.net/ticket/146 ```json { "status": "closed", "changetime": "2009-03-08T15:30:27", "description": "\"cppcheck -h\" yields:\n\nCppcheck 1.28 \n \nA tool for static C/C++ code analysis ", "reporter": "boogachamp", "cc": "", "resolution": "fixed", "_ts": "1236526227000000", "component": "Other", "summary": "1.29 reports 1.28 in help", "priority": "", "keywords": "", "time": "2009-03-08T12:41:49", "milestone": "1.30", "owner": "noone", "type": "defect" } ```
1.0
1.29 reports 1.28 in help (Trac #146) - Migrated from https://trac.cppcheck.net/ticket/146 ```json { "status": "closed", "changetime": "2009-03-08T15:30:27", "description": "\"cppcheck -h\" yields:\n\nCppcheck 1.28 \n \nA tool for static C/C++ code analysis ", "reporter": "boogachamp", "cc": "", "resolution": "fixed", "_ts": "1236526227000000", "component": "Other", "summary": "1.29 reports 1.28 in help", "priority": "", "keywords": "", "time": "2009-03-08T12:41:49", "milestone": "1.30", "owner": "noone", "type": "defect" } ```
defect
reports in help trac migrated from json status closed changetime description cppcheck h yields n ncppcheck n na tool for static c c code analysis reporter boogachamp cc resolution fixed ts component other summary reports in help priority keywords time milestone owner noone type defect
1
788,683
27,761,646,848
IssuesEvent
2023-03-16 08:43:58
rancher/dashboard
https://api.github.com/repos/rancher/dashboard
closed
Performance: Make API requests from within the web worker (step 3)
[zube]: Review priority/0 status/dev-validate kind/enhancement area/performance QA/None
Placeholder, to fill out **Is your feature request related to a problem? Please describe.** <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> **Describe the solution you'd like** - Make requests to the cluster API from with the web worker **Describe alternatives you've considered** <!-- A clear and concise description of any alternative solutions or features you've considered. --> **Additional context** - This is change number 3 of 4 ending in https://github.com/rancher/dashboard/issues/6541 - Step 1 is https://github.com/rancher/dashboard/issues/7894 - Step 2 is https://github.com/rancher/dashboard/issues/7895
1.0
Performance: Make API requests from within the web worker (step 3) - Placeholder, to fill out **Is your feature request related to a problem? Please describe.** <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> **Describe the solution you'd like** - Make requests to the cluster API from with the web worker **Describe alternatives you've considered** <!-- A clear and concise description of any alternative solutions or features you've considered. --> **Additional context** - This is change number 3 of 4 ending in https://github.com/rancher/dashboard/issues/6541 - Step 1 is https://github.com/rancher/dashboard/issues/7894 - Step 2 is https://github.com/rancher/dashboard/issues/7895
non_defect
performance make api requests from within the web worker step placeholder to fill out is your feature request related to a problem please describe describe the solution you d like make requests to the cluster api from with the web worker describe alternatives you ve considered additional context this is change number of ending in step is step is
0
11,547
2,657,950,676
IssuesEvent
2015-03-18 12:57:14
mondain/red5
https://api.github.com/repos/mondain/red5
closed
netstream.send('@setDataFrame','onCuePoint', param) failed to write CuePoint
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1.netstream.send('@setDataFrame','onCuePoint', param) What is the expected output? What do you see instead? write CuePoint to flv file. failed to write CuePoint What version of the product are you using? On what operating system? 0.9.1 final and 1.0 final windows Please provide any additional information below. ``` Original issue reported on code.google.com by `yf_r...@live.cn` on 13 Jan 2013 at 10:11
1.0
netstream.send('@setDataFrame','onCuePoint', param) failed to write CuePoint - ``` What steps will reproduce the problem? 1.netstream.send('@setDataFrame','onCuePoint', param) What is the expected output? What do you see instead? write CuePoint to flv file. failed to write CuePoint What version of the product are you using? On what operating system? 0.9.1 final and 1.0 final windows Please provide any additional information below. ``` Original issue reported on code.google.com by `yf_r...@live.cn` on 13 Jan 2013 at 10:11
defect
netstream send setdataframe oncuepoint param failed to write cuepoint what steps will reproduce the problem netstream send setdataframe oncuepoint param what is the expected output what do you see instead write cuepoint to flv file failed to write cuepoint what version of the product are you using on what operating system final and final windows please provide any additional information below original issue reported on code google com by yf r live cn on jan at
1
31,299
6,493,308,594
IssuesEvent
2017-08-21 16:29:05
buildo/react-components
https://api.github.com/repos/buildo/react-components
closed
Icon: should pass fontSize directly to paths
defect waiting for merge
## description `fontSize` style is passed to the the `<i />` but not to its `.path` children -> it can easily be overwritten by some extraneous CSS selector (as it's happening on AlinityPRO) ## how to reproduce - add this CSS: `span { font-size: 10px }` - pass `style: { fontSize: 15 }` - the icon is 10px 💥 ## specs add explicit `font-size: inherit` to paths in the SASS ## misc {optional: other useful info}
1.0
Icon: should pass fontSize directly to paths - ## description `fontSize` style is passed to the the `<i />` but not to its `.path` children -> it can easily be overwritten by some extraneous CSS selector (as it's happening on AlinityPRO) ## how to reproduce - add this CSS: `span { font-size: 10px }` - pass `style: { fontSize: 15 }` - the icon is 10px 💥 ## specs add explicit `font-size: inherit` to paths in the SASS ## misc {optional: other useful info}
defect
icon should pass fontsize directly to paths description fontsize style is passed to the the but not to its path children it can easily be overwritten by some extraneous css selector as it s happening on alinitypro how to reproduce add this css span font size pass style fontsize the icon is 💥 specs add explicit font size inherit to paths in the sass misc optional other useful info
1
75,755
26,029,654,889
IssuesEvent
2022-12-21 19:47:15
ontop/ontop
https://api.github.com/repos/ontop/ontop
closed
Rdb2RdfTest Problem with Querying Postgresql with many-to-many relations (INT treated as TEXT)
type: defect topic: bootstrapping status: fixed
<!-- Do you want to ask a question? Are you looking for support? We have also a mailing list https://groups.google.com/d/forum/ontop4obda Have a look at our guidelines on how to submit a bug report https://ontop-vkg.org/community/contributing/bug-report --> ### Description I am running the [RDB2RDFTest.java](https://github.com/ontop/ontop/blob/c3baf4b1ff34b52d2423c86ddc48c0f3569a1c3f/test/rdb2rdf-compliance/src/test/java/RDB2RDFTest.java) tests on different DBMS to validate functionality. In this particular case, I'm running against docker `postgresql:13` (it also is reproducible with Ontop's `ontop/ontop-pgsql` docker image, based on `postgres:9` ). I've revised the H2-based test case to use Postgresql: [RDB2RDFTestPostgres.java.txt](https://github.com/ontop/ontop/files/9823009/RDB2RDFTestPostgres.java.txt) https://github.com/thomasjtaylor/ontop/commit/259ba1d258b4436ea01c49356bc0a5e13a60382a Several of the Rdb2Rdf tests fail for various reasons (e.g., problems comparing XSD.DOUBLE, timezones, bnodes, etc.). For reference, `dg` prefixes refer to rdb2rdf direct mapping tests, while `tc` prefixes` refer to custom r2rml mappings. ``` FAILED 29 ["tc0003a", "dg0005", "dg0005-modified", "tc0005a", "tc0005a-modified", "tc0005b", "tc0005b-modified", "dg0011", "dg0012", "dg0012-modified", "tc0012a", "tc0012a-modified", "tc0012e", "tc0012e-modified", "dg0014", "dg0016", "tc0016a", "tc0016b", "tc0016b-modified", "tc0016c", "tc0016d", "tc0016e", "dg0018", "tc0019a", "dg0021", "dg0022", "dg0023", "dg0024", "dg0025"] ``` **JUnit Test Results:** [RDB2RDFTestPostgres-all-tests-20221019-140353.xml.txt](https://github.com/ontop/ontop/files/9823121/RDB2RDFTestPostgres-all-tests-20221019-140353.xml.txt) For this issue, there is a problem with `INTEGER` columns in complex foreign keys. `"dg0011","dg0014","dg0021","dg022","dg0023","dg0024","dg0025"` **JUnit Test Results:** [RDB2RDFTestPostgres-foreignkeys-20221019-141052.xml.txt](https://github.com/ontop/ontop/files/9823124/RDB2RDFTestPostgres-foreignkeys-20221019-141052.xml.txt) The simplest test appears to be `D0011: Database with many to many relations`: [D0011/create.sql](https://github.com/ontop/ontop/blob/c3baf4b1ff34b52d2423c86ddc48c0f3569a1c3f/test/rdb2rdf-compliance/src/test/resources/D011/create.sql) ``` CREATE TABLE "Student" ( "ID" integer PRIMARY KEY, "FirstName" varchar(50), "LastName" varchar(50) ); CREATE TABLE "Sport" ( "ID" integer PRIMARY KEY, "Description" varchar(50) ); CREATE TABLE "Student_Sport" ( "ID_Student" integer, "ID_Sport" integer, PRIMARY KEY ("ID_Student","ID_Sport"), FOREIGN KEY ("ID_Student") REFERENCES "Student"("ID"), FOREIGN KEY ("ID_Sport") REFERENCES "Sport"("ID") ); INSERT INTO "Student" ("ID","FirstName","LastName") VALUES (10,'Venus', 'Williams'); INSERT INTO "Student" ("ID","FirstName","LastName") VALUES (11,'Fernando', 'Alonso'); INSERT INTO "Student" ("ID","FirstName","LastName") VALUES (12,'David', 'Villa'); INSERT INTO "Sport" ("ID", "Description") VALUES (110,'Tennis'); INSERT INTO "Sport" ("ID", "Description") VALUES (111,'Football'); INSERT INTO "Sport" ("ID", "Description") VALUES (112,'Formula1'); INSERT INTO "Student_Sport" ("ID_Student", "ID_Sport") VALUES (10,110); INSERT INTO "Student_Sport" ("ID_Student", "ID_Sport") VALUES (11,111); INSERT INTO "Student_Sport" ("ID_Student", "ID_Sport") VALUES (11,112); INSERT INTO "Student_Sport" ("ID_Student", "ID_Sport") VALUES (12,111); ``` Running the JUnit Test results in the following stack trace: ``` 4:11:07.301 [Thread-7] ERROR i.u.i.o.a.c.impl.QuestStatement - ERROR: function replace(integer, unknown, unknown) does not exist Hint: No function matches the given name and argument types. You might need to add explicit type casts. Position: 475 it.unibz.inf.ontop.exception.OntopQueryEvaluationException: ERROR: function replace(integer, unknown, unknown) does not exist Hint: No function matches the given name and argument types. You might need to add explicit type casts. Position: 475 at it.unibz.inf.ontop.answering.connection.impl.SQLQuestStatement.executeConstructQuery(SQLQuestStatement.java:234) at it.unibz.inf.ontop.answering.connection.impl.QuestStatement.executeConstructQuery(QuestStatement.java:152) at it.unibz.inf.ontop.answering.connection.impl.QuestStatement.executeConstructQuery(QuestStatement.java:144) at it.unibz.inf.ontop.answering.connection.impl.QuestStatement$QueryExecutionThread.run(QuestStatement.java:97) ``` Turning on debug logging, shows that ultimately the `INTEGER` `Student.ID` is being assumed to be `TEXT`: [RDB2RDFTestPostgres-D0011-debug.txt](https://github.com/ontop/ontop/files/9823211/RDB2RDFTestPostgres-D0011-debug.txt) ``` SELECT ('http://example.com/base/Student/ID=' || REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(v1."ID", '%', '%25'), ' ', '%20'), '!', '%21'), '"', '%22'), '#', '%23'), '$', '%24'), '&', '%26'), '''', '%27'), '(', '%28'), ')', '%29'), '*', '%2A'), '+', '%2B'), ',', '%2C'), '/', '%2F'), ':', '%3A'), ';', '%3B'), '<', '%3C'), '=', '%3D'), '>', '%3E'), '?', '%3F'), '@', '%40'), '[', '%5B'), '\', '%5C'), ']', '%5D'), '^', '%5E'), '`', '%60'), '{', '%7B'), '|', '%7C'), '}', '%7D')) AS "v10", 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AS "v19", 'http://example.com/base/Student' AS "v38", 0 AS "v8" FROM "Student" v1 ``` If I try to re-run this query fragment directly in `psql`, I get the same error. ### Potential Workarounds However, I can get this to work in any one of the following ways: 1. Use Postgres `::text` conversion `REPLACE(v1."ID"::text, '%', '%25')` 2. Use `CAST` `REPLACE(CAST(v1."ID" AS TEXT), '%', '%25')` 3. Remove the `REPLACE` operators. `v1."ID"` ``` SELECT ('http://example.com/base/Student/ID=' || v1."ID") AS "v10", 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AS "v19", 'http://example.com/base/Student' AS "v38", 0 AS "v8" FROM "Student" v1 ``` 4. Remove the `FOREIGN KEY` statements from the `Student_Sport` table. ``` CREATE TABLE "Student_Sport" ( "ID_Student" integer, "ID_Sport" integer, PRIMARY KEY ("ID_Student","ID_Sport"), --FOREIGN KEY ("ID_Student") REFERENCES "Student"("ID"), --FOREIGN KEY ("ID_Sport") REFERENCES "Sport"("ID") ); ``` Perhaps the `INTEGERToTEXT` operation could be updated to fix this issue? ### Steps to Reproduce 1. Start Postgres Database: `docker run --name ontop_postgres_running -p 7777:5432 -e POSTGRES_PASSWORD=postgres2 -d ontop/ontop-pgsql` (Note: the ontop-pgsql database takes a long time to start/initialize, prefer postgres:13) 2. Make sure Postgres JDBC Driver is on classpath (default RDB2RDFTest only includes H2) 3. Run the `RDB2RDFTestPostgrs.java` Unit Test **Expected behavior:** Rdb2Rdf Test Succeeds [D011/directGraph.ttl](https://github.com/ontop/ontop/blob/c3baf4b1ff34b52d2423c86ddc48c0f3569a1c3f/test/rdb2rdf-compliance/src/test/resources/D011/directGraph.ttl) **Actual behavior:** Rdb2Rdf Test Fails **Reproduces how often:** Always ### Versions Ontop: `4.3.0-SNAPSHOT` (source); also occurs with Ontop `4.2.1` (maven central) Postgresql: docker `postgres:13`; also occurs with `ontop/ontop-pgsql` Postgres Driver: `42.5.0`; (also occurs with earlier drivers) ### Additional Information I've committed this test case to my fork: https://github.com/thomasjtaylor/ontop/commit/259ba1d258b4436ea01c49356bc0a5e13a60382a
1.0
Rdb2RdfTest Problem with Querying Postgresql with many-to-many relations (INT treated as TEXT) - <!-- Do you want to ask a question? Are you looking for support? We have also a mailing list https://groups.google.com/d/forum/ontop4obda Have a look at our guidelines on how to submit a bug report https://ontop-vkg.org/community/contributing/bug-report --> ### Description I am running the [RDB2RDFTest.java](https://github.com/ontop/ontop/blob/c3baf4b1ff34b52d2423c86ddc48c0f3569a1c3f/test/rdb2rdf-compliance/src/test/java/RDB2RDFTest.java) tests on different DBMS to validate functionality. In this particular case, I'm running against docker `postgresql:13` (it also is reproducible with Ontop's `ontop/ontop-pgsql` docker image, based on `postgres:9` ). I've revised the H2-based test case to use Postgresql: [RDB2RDFTestPostgres.java.txt](https://github.com/ontop/ontop/files/9823009/RDB2RDFTestPostgres.java.txt) https://github.com/thomasjtaylor/ontop/commit/259ba1d258b4436ea01c49356bc0a5e13a60382a Several of the Rdb2Rdf tests fail for various reasons (e.g., problems comparing XSD.DOUBLE, timezones, bnodes, etc.). For reference, `dg` prefixes refer to rdb2rdf direct mapping tests, while `tc` prefixes` refer to custom r2rml mappings. ``` FAILED 29 ["tc0003a", "dg0005", "dg0005-modified", "tc0005a", "tc0005a-modified", "tc0005b", "tc0005b-modified", "dg0011", "dg0012", "dg0012-modified", "tc0012a", "tc0012a-modified", "tc0012e", "tc0012e-modified", "dg0014", "dg0016", "tc0016a", "tc0016b", "tc0016b-modified", "tc0016c", "tc0016d", "tc0016e", "dg0018", "tc0019a", "dg0021", "dg0022", "dg0023", "dg0024", "dg0025"] ``` **JUnit Test Results:** [RDB2RDFTestPostgres-all-tests-20221019-140353.xml.txt](https://github.com/ontop/ontop/files/9823121/RDB2RDFTestPostgres-all-tests-20221019-140353.xml.txt) For this issue, there is a problem with `INTEGER` columns in complex foreign keys. `"dg0011","dg0014","dg0021","dg022","dg0023","dg0024","dg0025"` **JUnit Test Results:** [RDB2RDFTestPostgres-foreignkeys-20221019-141052.xml.txt](https://github.com/ontop/ontop/files/9823124/RDB2RDFTestPostgres-foreignkeys-20221019-141052.xml.txt) The simplest test appears to be `D0011: Database with many to many relations`: [D0011/create.sql](https://github.com/ontop/ontop/blob/c3baf4b1ff34b52d2423c86ddc48c0f3569a1c3f/test/rdb2rdf-compliance/src/test/resources/D011/create.sql) ``` CREATE TABLE "Student" ( "ID" integer PRIMARY KEY, "FirstName" varchar(50), "LastName" varchar(50) ); CREATE TABLE "Sport" ( "ID" integer PRIMARY KEY, "Description" varchar(50) ); CREATE TABLE "Student_Sport" ( "ID_Student" integer, "ID_Sport" integer, PRIMARY KEY ("ID_Student","ID_Sport"), FOREIGN KEY ("ID_Student") REFERENCES "Student"("ID"), FOREIGN KEY ("ID_Sport") REFERENCES "Sport"("ID") ); INSERT INTO "Student" ("ID","FirstName","LastName") VALUES (10,'Venus', 'Williams'); INSERT INTO "Student" ("ID","FirstName","LastName") VALUES (11,'Fernando', 'Alonso'); INSERT INTO "Student" ("ID","FirstName","LastName") VALUES (12,'David', 'Villa'); INSERT INTO "Sport" ("ID", "Description") VALUES (110,'Tennis'); INSERT INTO "Sport" ("ID", "Description") VALUES (111,'Football'); INSERT INTO "Sport" ("ID", "Description") VALUES (112,'Formula1'); INSERT INTO "Student_Sport" ("ID_Student", "ID_Sport") VALUES (10,110); INSERT INTO "Student_Sport" ("ID_Student", "ID_Sport") VALUES (11,111); INSERT INTO "Student_Sport" ("ID_Student", "ID_Sport") VALUES (11,112); INSERT INTO "Student_Sport" ("ID_Student", "ID_Sport") VALUES (12,111); ``` Running the JUnit Test results in the following stack trace: ``` 4:11:07.301 [Thread-7] ERROR i.u.i.o.a.c.impl.QuestStatement - ERROR: function replace(integer, unknown, unknown) does not exist Hint: No function matches the given name and argument types. You might need to add explicit type casts. Position: 475 it.unibz.inf.ontop.exception.OntopQueryEvaluationException: ERROR: function replace(integer, unknown, unknown) does not exist Hint: No function matches the given name and argument types. You might need to add explicit type casts. Position: 475 at it.unibz.inf.ontop.answering.connection.impl.SQLQuestStatement.executeConstructQuery(SQLQuestStatement.java:234) at it.unibz.inf.ontop.answering.connection.impl.QuestStatement.executeConstructQuery(QuestStatement.java:152) at it.unibz.inf.ontop.answering.connection.impl.QuestStatement.executeConstructQuery(QuestStatement.java:144) at it.unibz.inf.ontop.answering.connection.impl.QuestStatement$QueryExecutionThread.run(QuestStatement.java:97) ``` Turning on debug logging, shows that ultimately the `INTEGER` `Student.ID` is being assumed to be `TEXT`: [RDB2RDFTestPostgres-D0011-debug.txt](https://github.com/ontop/ontop/files/9823211/RDB2RDFTestPostgres-D0011-debug.txt) ``` SELECT ('http://example.com/base/Student/ID=' || REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(v1."ID", '%', '%25'), ' ', '%20'), '!', '%21'), '"', '%22'), '#', '%23'), '$', '%24'), '&', '%26'), '''', '%27'), '(', '%28'), ')', '%29'), '*', '%2A'), '+', '%2B'), ',', '%2C'), '/', '%2F'), ':', '%3A'), ';', '%3B'), '<', '%3C'), '=', '%3D'), '>', '%3E'), '?', '%3F'), '@', '%40'), '[', '%5B'), '\', '%5C'), ']', '%5D'), '^', '%5E'), '`', '%60'), '{', '%7B'), '|', '%7C'), '}', '%7D')) AS "v10", 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AS "v19", 'http://example.com/base/Student' AS "v38", 0 AS "v8" FROM "Student" v1 ``` If I try to re-run this query fragment directly in `psql`, I get the same error. ### Potential Workarounds However, I can get this to work in any one of the following ways: 1. Use Postgres `::text` conversion `REPLACE(v1."ID"::text, '%', '%25')` 2. Use `CAST` `REPLACE(CAST(v1."ID" AS TEXT), '%', '%25')` 3. Remove the `REPLACE` operators. `v1."ID"` ``` SELECT ('http://example.com/base/Student/ID=' || v1."ID") AS "v10", 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AS "v19", 'http://example.com/base/Student' AS "v38", 0 AS "v8" FROM "Student" v1 ``` 4. Remove the `FOREIGN KEY` statements from the `Student_Sport` table. ``` CREATE TABLE "Student_Sport" ( "ID_Student" integer, "ID_Sport" integer, PRIMARY KEY ("ID_Student","ID_Sport"), --FOREIGN KEY ("ID_Student") REFERENCES "Student"("ID"), --FOREIGN KEY ("ID_Sport") REFERENCES "Sport"("ID") ); ``` Perhaps the `INTEGERToTEXT` operation could be updated to fix this issue? ### Steps to Reproduce 1. Start Postgres Database: `docker run --name ontop_postgres_running -p 7777:5432 -e POSTGRES_PASSWORD=postgres2 -d ontop/ontop-pgsql` (Note: the ontop-pgsql database takes a long time to start/initialize, prefer postgres:13) 2. Make sure Postgres JDBC Driver is on classpath (default RDB2RDFTest only includes H2) 3. Run the `RDB2RDFTestPostgrs.java` Unit Test **Expected behavior:** Rdb2Rdf Test Succeeds [D011/directGraph.ttl](https://github.com/ontop/ontop/blob/c3baf4b1ff34b52d2423c86ddc48c0f3569a1c3f/test/rdb2rdf-compliance/src/test/resources/D011/directGraph.ttl) **Actual behavior:** Rdb2Rdf Test Fails **Reproduces how often:** Always ### Versions Ontop: `4.3.0-SNAPSHOT` (source); also occurs with Ontop `4.2.1` (maven central) Postgresql: docker `postgres:13`; also occurs with `ontop/ontop-pgsql` Postgres Driver: `42.5.0`; (also occurs with earlier drivers) ### Additional Information I've committed this test case to my fork: https://github.com/thomasjtaylor/ontop/commit/259ba1d258b4436ea01c49356bc0a5e13a60382a
defect
problem with querying postgresql with many to many relations int treated as text do you want to ask a question are you looking for support we have also a mailing list have a look at our guidelines on how to submit a bug report description i am running the tests on different dbms to validate functionality in this particular case i m running against docker postgresql it also is reproducible with ontop s ontop ontop pgsql docker image based on postgres i ve revised the based test case to use postgresql several of the tests fail for various reasons e g problems comparing xsd double timezones bnodes etc for reference dg prefixes refer to direct mapping tests while tc prefixes refer to custom mappings failed modified modified modified modified modified modified modified junit test results for this issue there is a problem with integer columns in complex foreign keys junit test results the simplest test appears to be database with many to many relations create table student id integer primary key firstname varchar lastname varchar create table sport id integer primary key description varchar create table student sport id student integer id sport integer primary key id student id sport foreign key id student references student id foreign key id sport references sport id insert into student id firstname lastname values venus williams insert into student id firstname lastname values fernando alonso insert into student id firstname lastname values david villa insert into sport id description values tennis insert into sport id description values football insert into sport id description values insert into student sport id student id sport values insert into student sport id student id sport values insert into student sport id student id sport values insert into student sport id student id sport values running the junit test results in the following stack trace error i u i o a c impl queststatement error function replace integer unknown unknown does not exist hint no function matches the given name and argument types you might need to add explicit type casts position it unibz inf ontop exception ontopqueryevaluationexception error function replace integer unknown unknown does not exist hint no function matches the given name and argument types you might need to add explicit type casts position at it unibz inf ontop answering connection impl sqlqueststatement executeconstructquery sqlqueststatement java at it unibz inf ontop answering connection impl queststatement executeconstructquery queststatement java at it unibz inf ontop answering connection impl queststatement executeconstructquery queststatement java at it unibz inf ontop answering connection impl queststatement queryexecutionthread run queststatement java turning on debug logging shows that ultimately the integer student id is being assumed to be text select replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace replace id as as as as from student if i try to re run this query fragment directly in psql i get the same error potential workarounds however i can get this to work in any one of the following ways use postgres text conversion replace id text use cast replace cast id as text remove the replace operators id select id as as as as from student remove the foreign key statements from the student sport table create table student sport id student integer id sport integer primary key id student id sport foreign key id student references student id foreign key id sport references sport id perhaps the integertotext operation could be updated to fix this issue steps to reproduce start postgres database docker run name ontop postgres running p e postgres password d ontop ontop pgsql note the ontop pgsql database takes a long time to start initialize prefer postgres make sure postgres jdbc driver is on classpath default only includes run the java unit test expected behavior test succeeds actual behavior test fails reproduces how often always versions ontop snapshot source also occurs with ontop maven central postgresql docker postgres also occurs with ontop ontop pgsql postgres driver also occurs with earlier drivers additional information i ve committed this test case to my fork
1
342,462
30,623,419,603
IssuesEvent
2023-07-24 09:48:13
giantswarm/roadmap
https://api.github.com/repos/giantswarm/roadmap
closed
Regularly validate CNCF conformance of CAPI clusters
topic/testing team/tinkerers
Acceptance criteria: - [ ] CNCF tests are being regularly ran for all providers against latest state of cluster and default apps app - [ ] Teams can easily find and access results - [ ] There is a notification mechanism for when tests fail
1.0
Regularly validate CNCF conformance of CAPI clusters - Acceptance criteria: - [ ] CNCF tests are being regularly ran for all providers against latest state of cluster and default apps app - [ ] Teams can easily find and access results - [ ] There is a notification mechanism for when tests fail
non_defect
regularly validate cncf conformance of capi clusters acceptance criteria cncf tests are being regularly ran for all providers against latest state of cluster and default apps app teams can easily find and access results there is a notification mechanism for when tests fail
0
31,708
4,286,986,308
IssuesEvent
2016-07-16 13:03:29
StockSharp/StockSharp
https://api.github.com/repos/StockSharp/StockSharp
closed
Missing projects from solution
by design
Hi, I am trying to use Samples/Testing/SampleHistoryTestingsample from 4.3.14.1 but the stocksharp project is not complete: for example projects like Algo.HistoryPublic or Xaml or Xaml.ChartingPublic are missing. I noticed that all the missing projects are present in the 'Terminal' branch of stocksharp. Which is the correct way to go? Use Terrminal branch as starting point and update the files with the ones from the latest projects? Thanks
1.0
Missing projects from solution - Hi, I am trying to use Samples/Testing/SampleHistoryTestingsample from 4.3.14.1 but the stocksharp project is not complete: for example projects like Algo.HistoryPublic or Xaml or Xaml.ChartingPublic are missing. I noticed that all the missing projects are present in the 'Terminal' branch of stocksharp. Which is the correct way to go? Use Terrminal branch as starting point and update the files with the ones from the latest projects? Thanks
non_defect
missing projects from solution hi i am trying to use samples testing samplehistorytestingsample from but the stocksharp project is not complete for example projects like algo historypublic or xaml or xaml chartingpublic are missing i noticed that all the missing projects are present in the terminal branch of stocksharp which is the correct way to go use terrminal branch as starting point and update the files with the ones from the latest projects thanks
0
101,134
8,779,930,959
IssuesEvent
2018-12-19 16:00:28
EasyRPG/Player
https://api.github.com/repos/EasyRPG/Player
closed
Showing last Picture with different transparency colour property does not apply current setting
Patch available Testcase available
When you show a particular Picture with certain transparency colour setting, then later, you show the same one with different transparency colour property, it doesn't apply that setting and instead it uses the last image transparency colour property and so until you load a different Picture. **This bug happens in:** EasyRPG Player Nightly (as Dec 16th). EasyRPG Player 0.5.4 **Test case:** I've made a test case, as follows: https://drive.google.com/file/d/1CYas3dx7ZGVWyOcW8sTsjaKo7aBZzg4h/view?usp=sharing ![image](https://user-images.githubusercontent.com/1494687/50062645-f3db6d00-0186-11e9-82f6-88bcf0f61558.png) **The test case is the lower side of the map, labeled "2" Ingore "1"** (\*). Web: https://easyrpg.org/play/master/?game=issue-1575&engine=rpg2k3e There's a row of events. - The first event shows a Picture with transparency colour enabled. - The second event shows the same picture, with transparency colour disabled. - The third event deletes the image. - The fourth image shows a different image than the first two events. For testing it, talk with event 1 and 2 and you will notice the problem. If you need to change the image for refresh, you can use event 4. **Expected behaivour (RPG_RT):** The Picture will show with transparency colour enabled / disabled. **Current behaivour:** The Picture shows with the last transparency color setted, without changing it until another picture loads. (\*) The other side of the map if for another issue. That's beacuse I discovered this issue when I was making that another the test case for another one: #1575
1.0
Showing last Picture with different transparency colour property does not apply current setting - When you show a particular Picture with certain transparency colour setting, then later, you show the same one with different transparency colour property, it doesn't apply that setting and instead it uses the last image transparency colour property and so until you load a different Picture. **This bug happens in:** EasyRPG Player Nightly (as Dec 16th). EasyRPG Player 0.5.4 **Test case:** I've made a test case, as follows: https://drive.google.com/file/d/1CYas3dx7ZGVWyOcW8sTsjaKo7aBZzg4h/view?usp=sharing ![image](https://user-images.githubusercontent.com/1494687/50062645-f3db6d00-0186-11e9-82f6-88bcf0f61558.png) **The test case is the lower side of the map, labeled "2" Ingore "1"** (\*). Web: https://easyrpg.org/play/master/?game=issue-1575&engine=rpg2k3e There's a row of events. - The first event shows a Picture with transparency colour enabled. - The second event shows the same picture, with transparency colour disabled. - The third event deletes the image. - The fourth image shows a different image than the first two events. For testing it, talk with event 1 and 2 and you will notice the problem. If you need to change the image for refresh, you can use event 4. **Expected behaivour (RPG_RT):** The Picture will show with transparency colour enabled / disabled. **Current behaivour:** The Picture shows with the last transparency color setted, without changing it until another picture loads. (\*) The other side of the map if for another issue. That's beacuse I discovered this issue when I was making that another the test case for another one: #1575
non_defect
showing last picture with different transparency colour property does not apply current setting when you show a particular picture with certain transparency colour setting then later you show the same one with different transparency colour property it doesn t apply that setting and instead it uses the last image transparency colour property and so until you load a different picture this bug happens in easyrpg player nightly as dec easyrpg player test case i ve made a test case as follows the test case is the lower side of the map labeled ingore web there s a row of events the first event shows a picture with transparency colour enabled the second event shows the same picture with transparency colour disabled the third event deletes the image the fourth image shows a different image than the first two events for testing it talk with event and and you will notice the problem if you need to change the image for refresh you can use event expected behaivour rpg rt the picture will show with transparency colour enabled disabled current behaivour the picture shows with the last transparency color setted without changing it until another picture loads the other side of the map if for another issue that s beacuse i discovered this issue when i was making that another the test case for another one
0
354,958
10,575,079,383
IssuesEvent
2019-10-07 15:06:29
threefoldfoundation/tfchain
https://api.github.com/repos/threefoldfoundation/tfchain
closed
tfchaind stops syncing at block height 277851
priority_major type_bug
from the consensuslog: `2019/08/02 11:21:45.382466 diffs.go:166: WARN: block 01c243ea7d996f898d2e01d4f1417fe2751c340be33f3a7112d720e7ac633c58 cannot be applied: tx 4fb1c2ffc57abb7e6b29f778b5deb8b59418005d6ee526c4d15b394e3c66b1fb is invalid: transaction does not specify any miner fees` from the client: ``` $ ./tfchainc Synced: No Height: 277851 Progress (estimated): 82.17% ``` I deleted the complete `standard` folder and restarted, same problem commit: b297811d30f233b36d578c553aa18f5b08884eb2
1.0
tfchaind stops syncing at block height 277851 - from the consensuslog: `2019/08/02 11:21:45.382466 diffs.go:166: WARN: block 01c243ea7d996f898d2e01d4f1417fe2751c340be33f3a7112d720e7ac633c58 cannot be applied: tx 4fb1c2ffc57abb7e6b29f778b5deb8b59418005d6ee526c4d15b394e3c66b1fb is invalid: transaction does not specify any miner fees` from the client: ``` $ ./tfchainc Synced: No Height: 277851 Progress (estimated): 82.17% ``` I deleted the complete `standard` folder and restarted, same problem commit: b297811d30f233b36d578c553aa18f5b08884eb2
non_defect
tfchaind stops syncing at block height from the consensuslog diffs go warn block cannot be applied tx is invalid transaction does not specify any miner fees from the client tfchainc synced no height progress estimated i deleted the complete standard folder and restarted same problem commit
0
19,558
3,226,125,437
IssuesEvent
2015-10-10 01:57:22
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
opened
dump info missing "parent" pointers for closures
Area-Dart2JS Type-Defect
dump info missing "parent" pointers for closures, so it's not easy to walk up the tree to figure out what library they belong to.
1.0
dump info missing "parent" pointers for closures - dump info missing "parent" pointers for closures, so it's not easy to walk up the tree to figure out what library they belong to.
defect
dump info missing parent pointers for closures dump info missing parent pointers for closures so it s not easy to walk up the tree to figure out what library they belong to
1
22,514
3,662,781,853
IssuesEvent
2016-02-19 01:05:56
bowdidge/switchlist
https://api.github.com/repos/bowdidge/switchlist
closed
Trains with long list of car types overflows unattractively
auto-migrated Priority-Medium Type-Defect
``` Create a layout with many car types, and create a train that uses most but not all. The car type field overflows. We ought to either truncate, add an explicit "and 14 more car types", or do something else more attractive. ``` ----- Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 27 Jul 2014 at 4:51
1.0
Trains with long list of car types overflows unattractively - ``` Create a layout with many car types, and create a train that uses most but not all. The car type field overflows. We ought to either truncate, add an explicit "and 14 more car types", or do something else more attractive. ``` ----- Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 27 Jul 2014 at 4:51
defect
trains with long list of car types overflows unattractively create a layout with many car types and create a train that uses most but not all the car type field overflows we ought to either truncate add an explicit and more car types or do something else more attractive original issue reported on code google com by rwbowdi gmail com on jul at
1
72,611
24,198,690,546
IssuesEvent
2022-09-24 08:29:46
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
closed
PANIC: rpool: blkptr at ... DVA 0 has invalid OFFSET 18388167655883276288
Type: Defect Status: Stale Status: Triage Needed
### System information Proxmox | 6.3-6/2184247e --- | --- Distribution Name | Proxmox Distribution Version | 6.3-6/2184247e Linux Kernel | 5.4.106-1-pve Architecture | x86_64 ZFS Version | 2.0.4-pve1 SPL Version | 2.0.4-pve1 ### Describe the problem you're observing System partially stop responding during normal workload (periodic snapshots, send/receive). Kernel reports PANIC but system all local FS (on ZFS) operation hangs. So all services already in RAM are working, but I cannot remotely login using SSH (because it wants to access FS). **Now node is running (all services migrated from this node). So I can do any tests / experiments. I'm ready for your suggestion how to clean up this error.** Questions: * It looks for permanent inconsistency on ZFS (I saw this error also one month before, but after this I put ECC RAM to computer and I thought problem gone). How to find what file / volume is connected with blk number shown in panic ? * There is no problem on HDDs (scanned for bad blocks, SMART also doesn't report anything). * Why kernel stuck and cannot reboot automatically even if kernel cmdline `panic=30` is set? * Can be related to ZIL and CACHE on NVM? ``` config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 ata-ST2000VN004-2E4164_Z529HN5G-part3 ONLINE 0 0 0 ata-ST2000VN004-2E4164_Z529KW0A-part3 ONLINE 0 0 0 logs a5005d25-97e3-4541-a60b-16384c03a7bc ONLINE 0 0 0 cache nvme0n1p5 ONLINE 0 0 0 ``` ### Describe how to reproduce the problem ``` # zdb -bcsvL rpool ... zdb_blkptr_cb: Got error 52 reading <617, 1, 0, 3d103> DVA[0]=<0:188145da000:2000> [L0 zvol object] fletcher4 uncompressed unencrypted LE contiguous unique single size=2000L/2000P birth=6373518L/6373518P fill=1 cksum=26fe304c267:9830389637801:1918c2a69fc3597b:47d95a551770185 -- skipping zdb_blkptr_cb: Got error 52 reading <617, 1, 0, 3d108> DVA[0]=<0:188145e0000:2000> [L0 zvol object] fletcher4 uncompressed unencrypted LE contiguous unique single size=2000L/2000P birth=6373518L/6373518P fill=1 cksum=273614ce481:9c98ab35fb26a:1a562f4282dd3e4c:289de3f37292b2a6 -- skipping zdb_blkptr_cb: Got error 52 reading <617, 1, 0, 3d106> DVA[0]=<0:188145e2000:2000> [L0 zvol object] fletcher4 uncompressed unencrypted LE contiguous unique single size=2000L/2000P birth=6373518L/6373518P fill=1 cksum=26ab83c199c:9d04818f3c9fa:1a7fa0dfca3401be:75d5f1ccc5ef4ff1 -- skipping zdb_blkptr_cb: Got error 52 reading <617, 1, 0, 3d10b> DVA[0]=<0:188145ef000:2000> [L0 zvol object] fletcher4 uncompressed unencrypted LE contiguous unique single size=2000L/2000P birth=6373518L/6373518P fill=1 cksum=26ca9b459f8:9ae99bb2b8456:19d1cf6bfb63436e:b3d24edd01d842bf -- skipping zdb_blkptr_cb: Got error 52 reading <617, 1, 0, 3d10e> DVA[0]=<0:188145f5000:2000> [L0 zvol object] fletcher4 uncompressed unencrypted LE contiguous unique single size=2000L/2000P birth=6373518L/6373518P fill=1 cksum=273c6e561f0:9c1723f11ef67:1a01b288e13086c0:1ff3a858920d3669 -- skipping ... error: rpool: blkptr at 0x7f684e2d4000 DVA 0 has invalid OFFSET 18388167655883276288 ``` ### Include any warning/errors/backtraces from the system logs ``` [Sun May 9 14:34:11 2021] PANIC: rpool: blkptr at 00000000a44c5bb3 DVA 0 has invalid OFFSET 18388167655883276288 [Sun May 9 14:34:11 2021] Showing stack for process 4882 [Sun May 9 14:34:11 2021] CPU: 2 PID: 4882 Comm: txg_sync Tainted: P O 5.4.106-1-pve #1 [Sun May 9 14:34:11 2021] Hardware name: System manufacturer System Product Name/TUF GAMING X570-PLUS, BIOS 3603 03/20/2021 [Sun May 9 14:34:11 2021] Call Trace: [Sun May 9 14:34:11 2021] dump_stack+0x6d/0x8b [Sun May 9 14:34:11 2021] spl_dumpstack+0x29/0x2b [spl] [Sun May 9 14:34:11 2021] vcmn_err.cold.1+0x60/0x94 [spl] [Sun May 9 14:34:11 2021] ? find_busiest_group+0x47/0x530 [Sun May 9 14:34:11 2021] ? spl_kmem_cache_alloc+0x7c/0x770 [spl] [Sun May 9 14:34:11 2021] ? put_dec+0x93/0xa0 [Sun May 9 14:34:11 2021] ? number+0x31f/0x360 [Sun May 9 14:34:11 2021] zfs_panic_recover+0x6f/0x90 [zfs] [Sun May 9 14:34:11 2021] zfs_blkptr_verify_log+0x94/0x100 [zfs] [Sun May 9 14:34:11 2021] ? newidle_balance+0x233/0x3c0 [Sun May 9 14:34:11 2021] ? vdev_default_asize+0x5f/0x90 [zfs] [Sun May 9 14:34:11 2021] zfs_blkptr_verify+0x3ab/0x460 [zfs] [Sun May 9 14:34:11 2021] zio_read+0x47/0xc0 [zfs] [Sun May 9 14:34:11 2021] ? dsl_scan_prefetch_thread+0x290/0x290 [zfs] [Sun May 9 14:34:11 2021] scan_exec_io+0x167/0x230 [zfs] [Sun May 9 14:34:11 2021] ? scan_io_queue_insert_impl+0xd7/0xe0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_scrub_cb+0x70e/0x770 [zfs] [Sun May 9 14:34:11 2021] ? spl_kmem_alloc+0xdc/0x130 [spl] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x68d/0xcd0 [zfs] [Sun May 9 14:34:11 2021] ? arc_hdr_set_compress+0x50/0x50 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x5b6/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x826/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visit_rootbp+0xe8/0x150 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitds+0x1ae/0x510 [zfs] [Sun May 9 14:34:11 2021] ? dnode_rele_and_unlock+0xb6/0xe0 [zfs] [Sun May 9 14:34:11 2021] ? dnode_rele+0x3b/0x40 [zfs] [Sun May 9 14:34:11 2021] ? dbuf_rele_and_unlock+0x306/0x6a0 [zfs] [Sun May 9 14:34:11 2021] ? dsl_dataset_hold_obj+0x68c/0x9e0 [zfs] [Sun May 9 14:34:11 2021] ? dbuf_rele+0x3b/0x40 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_sync+0x90e/0x1320 [zfs] [Sun May 9 14:34:11 2021] spa_sync+0x610/0xfe0 [zfs] [Sun May 9 14:34:11 2021] ? mutex_lock+0x12/0x30 [Sun May 9 14:34:11 2021] ? spa_txg_history_init_io+0x104/0x110 [zfs] [Sun May 9 14:34:11 2021] txg_sync_thread+0x2e1/0x4a0 [zfs] [Sun May 9 14:34:11 2021] ? txg_thread_exit.isra.13+0x60/0x60 [zfs] [Sun May 9 14:34:11 2021] thread_generic_wrapper+0x74/0x90 [spl] [Sun May 9 14:34:11 2021] kthread+0x120/0x140 [Sun May 9 14:34:11 2021] ? __thread_exit+0x20/0x20 [spl] [Sun May 9 14:34:11 2021] ? kthread_park+0x90/0x90 [Sun May 9 14:34:11 2021] ret_from_fork+0x1f/0x40 ``` Similar error over one month ago (probably with OpenZFS v0.8.4): ``` [ 541.783776] PANIC: rpool: blkptr at 000000003a3d1018 DVA 0 has invalid OFFSET 18388167655883276288 [ 541.783871] Showing stack for process 630328 [ 541.783940] CPU: 8 PID: 630328 Comm: zvol Tainted: P O 5.4.106-1-pve #1 [ 541.784027] Hardware name: System manufacturer System Product Name/TUF GAMING X570-PLUS, BIOS 3603 03/20/2021 [ 541.784124] Call Trace: [ 541.784195] dump_stack+0x6d/0x8b [ 541.784267] spl_dumpstack+0x29/0x2b [spl] [ 541.784337] vcmn_err.cold.1+0x60/0x94 [spl] [ 541.784407] ? prep_new_page+0x129/0x160 [ 541.784471] ? put_dec+0x93/0xa0 [ 541.784533] ? number+0x31f/0x360 [ 541.784659] zfs_panic_recover+0x6f/0x90 [zfs] [ 541.784787] zfs_blkptr_verify_log+0x94/0x100 [zfs] [ 541.784859] ? __sg_alloc_table+0x70/0x170 [ 541.784927] ? sg_alloc_table+0x23/0x50 [ 541.784997] ? _cond_resched+0x19/0x30 [ 541.785068] ? mutex_lock+0x12/0x30 [ 541.785168] ? aggsum_add+0x188/0x1b0 [zfs] [ 541.785288] ? vdev_default_asize+0x5f/0x90 [zfs] [ 541.785411] zfs_blkptr_verify+0x3ab/0x460 [zfs] [ 541.785528] zio_read+0x47/0xc0 [zfs] [ 541.785627] ? arc_read+0x1210/0x1210 [zfs] [ 541.785734] arc_read+0xb9d/0x1210 [zfs] [ 541.785854] ? dbuf_rele_and_unlock+0x6a0/0x6a0 [zfs] [ 541.785922] ? _cond_resched+0x19/0x30 [ 541.786023] dbuf_read_impl.constprop.33+0x2a8/0x6e0 [zfs] [ 541.786136] dbuf_read+0x1b2/0x510 [zfs] [ 541.786241] dmu_buf_hold_array_by_dnode+0x10c/0x4a0 [zfs] [ 541.786352] dmu_read_uio_dnode+0x49/0xf0 [zfs] [ 541.786464] zvol_read+0x102/0x300 [zfs] [ 541.786527] taskq_thread+0x2f7/0x4e0 [spl] [ 541.786584] ? wake_up_q+0x80/0x80 [ 541.786687] ? zvol_write+0x4e0/0x4e0 [zfs] [ 541.786745] kthread+0x120/0x140 [ 541.786805] ? task_done+0xb0/0xb0 [spl] [ 541.786863] ? kthread_park+0x90/0x90 [ 541.786922] ret_from_fork+0x1f/0x40 ```
1.0
PANIC: rpool: blkptr at ... DVA 0 has invalid OFFSET 18388167655883276288 - ### System information Proxmox | 6.3-6/2184247e --- | --- Distribution Name | Proxmox Distribution Version | 6.3-6/2184247e Linux Kernel | 5.4.106-1-pve Architecture | x86_64 ZFS Version | 2.0.4-pve1 SPL Version | 2.0.4-pve1 ### Describe the problem you're observing System partially stop responding during normal workload (periodic snapshots, send/receive). Kernel reports PANIC but system all local FS (on ZFS) operation hangs. So all services already in RAM are working, but I cannot remotely login using SSH (because it wants to access FS). **Now node is running (all services migrated from this node). So I can do any tests / experiments. I'm ready for your suggestion how to clean up this error.** Questions: * It looks for permanent inconsistency on ZFS (I saw this error also one month before, but after this I put ECC RAM to computer and I thought problem gone). How to find what file / volume is connected with blk number shown in panic ? * There is no problem on HDDs (scanned for bad blocks, SMART also doesn't report anything). * Why kernel stuck and cannot reboot automatically even if kernel cmdline `panic=30` is set? * Can be related to ZIL and CACHE on NVM? ``` config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 ata-ST2000VN004-2E4164_Z529HN5G-part3 ONLINE 0 0 0 ata-ST2000VN004-2E4164_Z529KW0A-part3 ONLINE 0 0 0 logs a5005d25-97e3-4541-a60b-16384c03a7bc ONLINE 0 0 0 cache nvme0n1p5 ONLINE 0 0 0 ``` ### Describe how to reproduce the problem ``` # zdb -bcsvL rpool ... zdb_blkptr_cb: Got error 52 reading <617, 1, 0, 3d103> DVA[0]=<0:188145da000:2000> [L0 zvol object] fletcher4 uncompressed unencrypted LE contiguous unique single size=2000L/2000P birth=6373518L/6373518P fill=1 cksum=26fe304c267:9830389637801:1918c2a69fc3597b:47d95a551770185 -- skipping zdb_blkptr_cb: Got error 52 reading <617, 1, 0, 3d108> DVA[0]=<0:188145e0000:2000> [L0 zvol object] fletcher4 uncompressed unencrypted LE contiguous unique single size=2000L/2000P birth=6373518L/6373518P fill=1 cksum=273614ce481:9c98ab35fb26a:1a562f4282dd3e4c:289de3f37292b2a6 -- skipping zdb_blkptr_cb: Got error 52 reading <617, 1, 0, 3d106> DVA[0]=<0:188145e2000:2000> [L0 zvol object] fletcher4 uncompressed unencrypted LE contiguous unique single size=2000L/2000P birth=6373518L/6373518P fill=1 cksum=26ab83c199c:9d04818f3c9fa:1a7fa0dfca3401be:75d5f1ccc5ef4ff1 -- skipping zdb_blkptr_cb: Got error 52 reading <617, 1, 0, 3d10b> DVA[0]=<0:188145ef000:2000> [L0 zvol object] fletcher4 uncompressed unencrypted LE contiguous unique single size=2000L/2000P birth=6373518L/6373518P fill=1 cksum=26ca9b459f8:9ae99bb2b8456:19d1cf6bfb63436e:b3d24edd01d842bf -- skipping zdb_blkptr_cb: Got error 52 reading <617, 1, 0, 3d10e> DVA[0]=<0:188145f5000:2000> [L0 zvol object] fletcher4 uncompressed unencrypted LE contiguous unique single size=2000L/2000P birth=6373518L/6373518P fill=1 cksum=273c6e561f0:9c1723f11ef67:1a01b288e13086c0:1ff3a858920d3669 -- skipping ... error: rpool: blkptr at 0x7f684e2d4000 DVA 0 has invalid OFFSET 18388167655883276288 ``` ### Include any warning/errors/backtraces from the system logs ``` [Sun May 9 14:34:11 2021] PANIC: rpool: blkptr at 00000000a44c5bb3 DVA 0 has invalid OFFSET 18388167655883276288 [Sun May 9 14:34:11 2021] Showing stack for process 4882 [Sun May 9 14:34:11 2021] CPU: 2 PID: 4882 Comm: txg_sync Tainted: P O 5.4.106-1-pve #1 [Sun May 9 14:34:11 2021] Hardware name: System manufacturer System Product Name/TUF GAMING X570-PLUS, BIOS 3603 03/20/2021 [Sun May 9 14:34:11 2021] Call Trace: [Sun May 9 14:34:11 2021] dump_stack+0x6d/0x8b [Sun May 9 14:34:11 2021] spl_dumpstack+0x29/0x2b [spl] [Sun May 9 14:34:11 2021] vcmn_err.cold.1+0x60/0x94 [spl] [Sun May 9 14:34:11 2021] ? find_busiest_group+0x47/0x530 [Sun May 9 14:34:11 2021] ? spl_kmem_cache_alloc+0x7c/0x770 [spl] [Sun May 9 14:34:11 2021] ? put_dec+0x93/0xa0 [Sun May 9 14:34:11 2021] ? number+0x31f/0x360 [Sun May 9 14:34:11 2021] zfs_panic_recover+0x6f/0x90 [zfs] [Sun May 9 14:34:11 2021] zfs_blkptr_verify_log+0x94/0x100 [zfs] [Sun May 9 14:34:11 2021] ? newidle_balance+0x233/0x3c0 [Sun May 9 14:34:11 2021] ? vdev_default_asize+0x5f/0x90 [zfs] [Sun May 9 14:34:11 2021] zfs_blkptr_verify+0x3ab/0x460 [zfs] [Sun May 9 14:34:11 2021] zio_read+0x47/0xc0 [zfs] [Sun May 9 14:34:11 2021] ? dsl_scan_prefetch_thread+0x290/0x290 [zfs] [Sun May 9 14:34:11 2021] scan_exec_io+0x167/0x230 [zfs] [Sun May 9 14:34:11 2021] ? scan_io_queue_insert_impl+0xd7/0xe0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_scrub_cb+0x70e/0x770 [zfs] [Sun May 9 14:34:11 2021] ? spl_kmem_alloc+0xdc/0x130 [spl] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x68d/0xcd0 [zfs] [Sun May 9 14:34:11 2021] ? arc_hdr_set_compress+0x50/0x50 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x5b6/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x2ac/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitbp+0x826/0xcd0 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visit_rootbp+0xe8/0x150 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_visitds+0x1ae/0x510 [zfs] [Sun May 9 14:34:11 2021] ? dnode_rele_and_unlock+0xb6/0xe0 [zfs] [Sun May 9 14:34:11 2021] ? dnode_rele+0x3b/0x40 [zfs] [Sun May 9 14:34:11 2021] ? dbuf_rele_and_unlock+0x306/0x6a0 [zfs] [Sun May 9 14:34:11 2021] ? dsl_dataset_hold_obj+0x68c/0x9e0 [zfs] [Sun May 9 14:34:11 2021] ? dbuf_rele+0x3b/0x40 [zfs] [Sun May 9 14:34:11 2021] dsl_scan_sync+0x90e/0x1320 [zfs] [Sun May 9 14:34:11 2021] spa_sync+0x610/0xfe0 [zfs] [Sun May 9 14:34:11 2021] ? mutex_lock+0x12/0x30 [Sun May 9 14:34:11 2021] ? spa_txg_history_init_io+0x104/0x110 [zfs] [Sun May 9 14:34:11 2021] txg_sync_thread+0x2e1/0x4a0 [zfs] [Sun May 9 14:34:11 2021] ? txg_thread_exit.isra.13+0x60/0x60 [zfs] [Sun May 9 14:34:11 2021] thread_generic_wrapper+0x74/0x90 [spl] [Sun May 9 14:34:11 2021] kthread+0x120/0x140 [Sun May 9 14:34:11 2021] ? __thread_exit+0x20/0x20 [spl] [Sun May 9 14:34:11 2021] ? kthread_park+0x90/0x90 [Sun May 9 14:34:11 2021] ret_from_fork+0x1f/0x40 ``` Similar error over one month ago (probably with OpenZFS v0.8.4): ``` [ 541.783776] PANIC: rpool: blkptr at 000000003a3d1018 DVA 0 has invalid OFFSET 18388167655883276288 [ 541.783871] Showing stack for process 630328 [ 541.783940] CPU: 8 PID: 630328 Comm: zvol Tainted: P O 5.4.106-1-pve #1 [ 541.784027] Hardware name: System manufacturer System Product Name/TUF GAMING X570-PLUS, BIOS 3603 03/20/2021 [ 541.784124] Call Trace: [ 541.784195] dump_stack+0x6d/0x8b [ 541.784267] spl_dumpstack+0x29/0x2b [spl] [ 541.784337] vcmn_err.cold.1+0x60/0x94 [spl] [ 541.784407] ? prep_new_page+0x129/0x160 [ 541.784471] ? put_dec+0x93/0xa0 [ 541.784533] ? number+0x31f/0x360 [ 541.784659] zfs_panic_recover+0x6f/0x90 [zfs] [ 541.784787] zfs_blkptr_verify_log+0x94/0x100 [zfs] [ 541.784859] ? __sg_alloc_table+0x70/0x170 [ 541.784927] ? sg_alloc_table+0x23/0x50 [ 541.784997] ? _cond_resched+0x19/0x30 [ 541.785068] ? mutex_lock+0x12/0x30 [ 541.785168] ? aggsum_add+0x188/0x1b0 [zfs] [ 541.785288] ? vdev_default_asize+0x5f/0x90 [zfs] [ 541.785411] zfs_blkptr_verify+0x3ab/0x460 [zfs] [ 541.785528] zio_read+0x47/0xc0 [zfs] [ 541.785627] ? arc_read+0x1210/0x1210 [zfs] [ 541.785734] arc_read+0xb9d/0x1210 [zfs] [ 541.785854] ? dbuf_rele_and_unlock+0x6a0/0x6a0 [zfs] [ 541.785922] ? _cond_resched+0x19/0x30 [ 541.786023] dbuf_read_impl.constprop.33+0x2a8/0x6e0 [zfs] [ 541.786136] dbuf_read+0x1b2/0x510 [zfs] [ 541.786241] dmu_buf_hold_array_by_dnode+0x10c/0x4a0 [zfs] [ 541.786352] dmu_read_uio_dnode+0x49/0xf0 [zfs] [ 541.786464] zvol_read+0x102/0x300 [zfs] [ 541.786527] taskq_thread+0x2f7/0x4e0 [spl] [ 541.786584] ? wake_up_q+0x80/0x80 [ 541.786687] ? zvol_write+0x4e0/0x4e0 [zfs] [ 541.786745] kthread+0x120/0x140 [ 541.786805] ? task_done+0xb0/0xb0 [spl] [ 541.786863] ? kthread_park+0x90/0x90 [ 541.786922] ret_from_fork+0x1f/0x40 ```
defect
panic rpool blkptr at dva has invalid offset system information proxmox distribution name proxmox distribution version linux kernel pve architecture zfs version spl version describe the problem you re observing system partially stop responding during normal workload periodic snapshots send receive kernel reports panic but system all local fs on zfs operation hangs so all services already in ram are working but i cannot remotely login using ssh because it wants to access fs now node is running all services migrated from this node so i can do any tests experiments i m ready for your suggestion how to clean up this error questions it looks for permanent inconsistency on zfs i saw this error also one month before but after this i put ecc ram to computer and i thought problem gone how to find what file volume is connected with blk number shown in panic there is no problem on hdds scanned for bad blocks smart also doesn t report anything why kernel stuck and cannot reboot automatically even if kernel cmdline panic is set can be related to zil and cache on nvm config name state read write cksum rpool online mirror online ata online ata online logs online cache online describe how to reproduce the problem zdb bcsvl rpool zdb blkptr cb got error reading dva uncompressed unencrypted le contiguous unique single size birth fill cksum skipping zdb blkptr cb got error reading dva uncompressed unencrypted le contiguous unique single size birth fill cksum skipping zdb blkptr cb got error reading dva uncompressed unencrypted le contiguous unique single size birth fill cksum skipping zdb blkptr cb got error reading dva uncompressed unencrypted le contiguous unique single size birth fill cksum skipping zdb blkptr cb got error reading dva uncompressed unencrypted le contiguous unique single size birth fill cksum skipping error rpool blkptr at dva has invalid offset include any warning errors backtraces from the system logs panic rpool blkptr at dva has invalid offset showing stack for process cpu pid comm txg sync tainted p o pve hardware name system manufacturer system product name tuf gaming plus bios call trace dump stack spl dumpstack vcmn err cold find busiest group spl kmem cache alloc put dec number zfs panic recover zfs blkptr verify log newidle balance vdev default asize zfs blkptr verify zio read dsl scan prefetch thread scan exec io scan io queue insert impl dsl scan scrub cb spl kmem alloc dsl scan visitbp arc hdr set compress dsl scan visitbp dsl scan visitbp dsl scan visitbp dsl scan visitbp dsl scan visitbp dsl scan visitbp dsl scan visitbp dsl scan visitbp dsl scan visitbp dsl scan visitbp dsl scan visit rootbp dsl scan visitds dnode rele and unlock dnode rele dbuf rele and unlock dsl dataset hold obj dbuf rele dsl scan sync spa sync mutex lock spa txg history init io txg sync thread txg thread exit isra thread generic wrapper kthread thread exit kthread park ret from fork similar error over one month ago probably with openzfs panic rpool blkptr at dva has invalid offset showing stack for process cpu pid comm zvol tainted p o pve hardware name system manufacturer system product name tuf gaming plus bios call trace dump stack spl dumpstack vcmn err cold prep new page put dec number zfs panic recover zfs blkptr verify log sg alloc table sg alloc table cond resched mutex lock aggsum add vdev default asize zfs blkptr verify zio read arc read arc read dbuf rele and unlock cond resched dbuf read impl constprop dbuf read dmu buf hold array by dnode dmu read uio dnode zvol read taskq thread wake up q zvol write kthread task done kthread park ret from fork
1
19,555
3,223,158,798
IssuesEvent
2015-10-09 08:15:01
ox-it/ords
https://api.github.com/repos/ox-it/ords
closed
Can't view or edit data
Priority-Critical Type-Defect
To reproduce, go to the database page for an ORDS database on Dev, and click 'View, Edit, and Query Data'. This produces an error message saying that the page doesn't exist.
1.0
Can't view or edit data - To reproduce, go to the database page for an ORDS database on Dev, and click 'View, Edit, and Query Data'. This produces an error message saying that the page doesn't exist.
defect
can t view or edit data to reproduce go to the database page for an ords database on dev and click view edit and query data this produces an error message saying that the page doesn t exist
1
68,273
21,577,035,822
IssuesEvent
2022-05-02 14:43:26
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Soft crash when switching rooms with the thread panel open
T-Defect S-Major Z-Rageshake Z-Soft-Crash O-Uncommon A-Threads
### Steps to reproduce Unsure, but: 1. Click a permalink to a thread root in another room 2. Get redirected to that room, where the thread panel will now be open 3. Switch back to the original room ### Outcome #### What did you expect? No soft crash #### What happened instead? A soft crash (see rageshake) ### Operating system NixOS unstable ### Browser information Firefox 99.0.1 ### URL for webapp develop.element.io ### Application version Element version: 850a250cea91-react-3e31fdb6a71f-js-d190cdc307f1 Olm version: 3.2.8 ### Homeserver matrix.org ### Will you send logs? No
1.0
Soft crash when switching rooms with the thread panel open - ### Steps to reproduce Unsure, but: 1. Click a permalink to a thread root in another room 2. Get redirected to that room, where the thread panel will now be open 3. Switch back to the original room ### Outcome #### What did you expect? No soft crash #### What happened instead? A soft crash (see rageshake) ### Operating system NixOS unstable ### Browser information Firefox 99.0.1 ### URL for webapp develop.element.io ### Application version Element version: 850a250cea91-react-3e31fdb6a71f-js-d190cdc307f1 Olm version: 3.2.8 ### Homeserver matrix.org ### Will you send logs? No
defect
soft crash when switching rooms with the thread panel open steps to reproduce unsure but click a permalink to a thread root in another room get redirected to that room where the thread panel will now be open switch back to the original room outcome what did you expect no soft crash what happened instead a soft crash see rageshake operating system nixos unstable browser information firefox url for webapp develop element io application version element version react js olm version homeserver matrix org will you send logs no
1
71,247
23,506,879,235
IssuesEvent
2022-08-18 13:21:49
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
closed
[🐛 Bug]: Unable to create "select" class object , "NoMethodError: undefined method `element_dom_attribute' for nil:NilClass
R-awaiting answer I-defect I-issue-template
### What happened? We have upgraded selenium-webdriver to 4.3.0 version, then we are not able to uses Select class. Selenium::WebDriver::Support::Select.new(webelement) NoMethodError: undefined method `element_dom_attribute' for nil:NilClass from /Users/**/.rvm/gems/ruby-2.7.4/gems/selenium-webdriver-4.3.0/lib/selenium/webdriver/common/element.rb:146:in `dom_attribute' ### How can we reproduce the issue? ```shell Try to create select class object with selenium webdriver version 4.3.0. Selenium::WebDriver::Support::Select.new(element) ``` ### Relevant log output ```shell NoMethodError: undefined method `element_dom_attribute' for nil:NilClass from /Users/**/.rvm/gems/ruby-2.7.4/gems/selenium-webdriver-4.3.0/lib/selenium/webdriver/common/element.rb:146:in `dom_attribute' ``` ### Operating System macOS Monterey ### Selenium version Ruby 4.3.0 ### What are the browser(s) and version(s) where you see this issue? chrome ### What are the browser driver(s) and version(s) where you see this issue? chromedriver ### Are you using Selenium Grid? _No response_
1.0
[🐛 Bug]: Unable to create "select" class object , "NoMethodError: undefined method `element_dom_attribute' for nil:NilClass - ### What happened? We have upgraded selenium-webdriver to 4.3.0 version, then we are not able to uses Select class. Selenium::WebDriver::Support::Select.new(webelement) NoMethodError: undefined method `element_dom_attribute' for nil:NilClass from /Users/**/.rvm/gems/ruby-2.7.4/gems/selenium-webdriver-4.3.0/lib/selenium/webdriver/common/element.rb:146:in `dom_attribute' ### How can we reproduce the issue? ```shell Try to create select class object with selenium webdriver version 4.3.0. Selenium::WebDriver::Support::Select.new(element) ``` ### Relevant log output ```shell NoMethodError: undefined method `element_dom_attribute' for nil:NilClass from /Users/**/.rvm/gems/ruby-2.7.4/gems/selenium-webdriver-4.3.0/lib/selenium/webdriver/common/element.rb:146:in `dom_attribute' ``` ### Operating System macOS Monterey ### Selenium version Ruby 4.3.0 ### What are the browser(s) and version(s) where you see this issue? chrome ### What are the browser driver(s) and version(s) where you see this issue? chromedriver ### Are you using Selenium Grid? _No response_
defect
unable to create select class object nomethoderror undefined method element dom attribute for nil nilclass what happened we have upgraded selenium webdriver to version then we are not able to uses select class selenium webdriver support select new webelement nomethoderror undefined method element dom attribute for nil nilclass from users rvm gems ruby gems selenium webdriver lib selenium webdriver common element rb in dom attribute how can we reproduce the issue shell try to create select class object with selenium webdriver version selenium webdriver support select new element relevant log output shell nomethoderror undefined method element dom attribute for nil nilclass from users rvm gems ruby gems selenium webdriver lib selenium webdriver common element rb in dom attribute operating system macos monterey selenium version ruby what are the browser s and version s where you see this issue chrome what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid no response
1
72,900
24,350,451,784
IssuesEvent
2022-10-02 21:49:57
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
`Something went wrong!` on joining another another room on a space
T-Defect
### Steps to reproduce 1. Where are you starting? What can you see? I joined the OpenStreetMap space by clicking on https://matrix.to/#/#osm-space:matrix.org, that way I joined the `osm-fr` room then I explored the list of rooms of this space to join `OSM General`. 3. What do you click? However when clicking on the `Join` button next to the `OSM General` room I got a `Something went wrong!` while actually joining `OSM General` room... 4. More steps… Here is a video I recorded showing this issue: https://user-images.githubusercontent.com/12752145/193477794-22fed614-4b93-48f3-8944-0db3546acb9e.mp4 ### Outcome #### What did you expect? Graphically join the wanted room on a space (or at least have a confirmation). #### What happened instead? Got a `Something went wrong` (but the room in the space is correctly joined). ### Operating system Ubuntu 22.04.1 LTS ### Application version Element version: 1.11.8 Olm version: 3.2.12 ### How did you install the app? https://element.io/get-started#linux-details (instructions followed after Mon Apr 25 13:58:07 2022) ### Homeserver matrix.org ### Will you send logs? Yes
1.0
`Something went wrong!` on joining another another room on a space - ### Steps to reproduce 1. Where are you starting? What can you see? I joined the OpenStreetMap space by clicking on https://matrix.to/#/#osm-space:matrix.org, that way I joined the `osm-fr` room then I explored the list of rooms of this space to join `OSM General`. 3. What do you click? However when clicking on the `Join` button next to the `OSM General` room I got a `Something went wrong!` while actually joining `OSM General` room... 4. More steps… Here is a video I recorded showing this issue: https://user-images.githubusercontent.com/12752145/193477794-22fed614-4b93-48f3-8944-0db3546acb9e.mp4 ### Outcome #### What did you expect? Graphically join the wanted room on a space (or at least have a confirmation). #### What happened instead? Got a `Something went wrong` (but the room in the space is correctly joined). ### Operating system Ubuntu 22.04.1 LTS ### Application version Element version: 1.11.8 Olm version: 3.2.12 ### How did you install the app? https://element.io/get-started#linux-details (instructions followed after Mon Apr 25 13:58:07 2022) ### Homeserver matrix.org ### Will you send logs? Yes
defect
something went wrong on joining another another room on a space steps to reproduce where are you starting what can you see i joined the openstreetmap space by clicking on that way i joined the osm fr room then i explored the list of rooms of this space to join osm general what do you click however when clicking on the join button next to the osm general room i got a something went wrong while actually joining osm general room more steps… here is a video i recorded showing this issue outcome what did you expect graphically join the wanted room on a space or at least have a confirmation what happened instead got a something went wrong but the room in the space is correctly joined operating system ubuntu lts application version element version olm version how did you install the app instructions followed after mon apr homeserver matrix org will you send logs yes
1
250,423
27,086,698,647
IssuesEvent
2023-02-14 17:30:25
sharad16j/sharad16j.github.io
https://api.github.com/repos/sharad16j/sharad16j.github.io
opened
CVE-2020-11023 (Medium) detected in jquery-3.3.1.min.js, jquery-1.11.3.min.js
security vulnerability
## CVE-2020-11023 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.min.js</b>, <b>jquery-1.11.3.min.js</b></p></summary> <p> <details><summary><b>jquery-3.3.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js</a></p> <p>Path to vulnerable library: /js/libs/jquery-3.3.1.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-3.3.1.min.js** (Vulnerable Library) </details> <details><summary><b>jquery-1.11.3.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js</a></p> <p>Path to dependency file: /index.html</p> <p>Path to vulnerable library: /index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.11.3.min.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/sharad16j/sharad16j.github.io/commit/d8cb57e2f62b9467e3f5334af04eaf318de29b49">d8cb57e2f62b9467e3f5334af04eaf318de29b49</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. <p>Publish Date: 2020-04-29 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11023>CVE-2020-11023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-11023 (Medium) detected in jquery-3.3.1.min.js, jquery-1.11.3.min.js - ## CVE-2020-11023 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.min.js</b>, <b>jquery-1.11.3.min.js</b></p></summary> <p> <details><summary><b>jquery-3.3.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js</a></p> <p>Path to vulnerable library: /js/libs/jquery-3.3.1.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-3.3.1.min.js** (Vulnerable Library) </details> <details><summary><b>jquery-1.11.3.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js</a></p> <p>Path to dependency file: /index.html</p> <p>Path to vulnerable library: /index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.11.3.min.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/sharad16j/sharad16j.github.io/commit/d8cb57e2f62b9467e3f5334af04eaf318de29b49">d8cb57e2f62b9467e3f5334af04eaf318de29b49</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. <p>Publish Date: 2020-04-29 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11023>CVE-2020-11023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in jquery min js jquery min js cve medium severity vulnerability vulnerable libraries jquery min js jquery min js jquery min js javascript library for dom operations library home page a href path to vulnerable library js libs jquery min js dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file index html path to vulnerable library index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery jquery rails step up your open source security game with mend
0
77,385
26,959,003,787
IssuesEvent
2023-02-08 16:46:45
jlaffaye/ftp
https://api.github.com/repos/jlaffaye/ftp
closed
Data connections with ExplicitTLS hang forever
defect
**Describe the bug** Data connections with TLS Explicit mode seem to be broken **To Reproduce** ```go package main import ( "crypto/tls" "log" "os" "time" "github.com/jlaffaye/ftp" ) func main() { if len(os.Args) != 4 { log.Fatalf("Expecting %s ftp.example.org username password", os.Args[0]) } host, username, password := os.Args[1], os.Args[2], os.Args[3] port := "21" var tlsConf = &tls.Config{ ServerName: host, } c, err := ftp.Dial(host+":"+port, ftp.DialWithExplicitTLS(tlsConf), ftp.DialWithTimeout(20*time.Second), ftp.DialWithDebugOutput(os.Stdout), ) if err != nil { log.Fatal(err) } err = c.Login(username, password) if err != nil { log.Fatal(err) } // Make a listing entries, err := c.List(".") if err != nil { log.Fatal(err) } for _, entry := range entries { log.Printf("%#v", entry) } if err := c.Quit(); err != nil { log.Fatal(err) } } ``` Using this program to connect hangs on the listing **Expected behavior** I expected to see a listing **FTP server** This is connecting to a Hetzner storage box. I've also had reports of the same problem with pureftpd (See https://github.com/rclone/rclone/issues/6426 ) **Debug output** ``` 220 ProFTPD Server (Hetzner Backup) [::ffff:78.47.22.109] AUTH TLS 234 AUTH TLS successful USER uXXXXXX 331 Password required for uXXXXXX PASS supersecretpassword 230 User uXXXXXX logged in FEAT 211-Features: AUTH TLS CCC CLNT EPRT EPSV HOST LANG fr-FR.UTF-8;fr-FR;en-US.UTF-8;en-US;it-IT.UTF-8;it-IT;es-ES.UTF-8;es-ES;bg-BG.UTF-8;bg-BG;ko-KR.UTF-8;ko-KR;zh-TW.UTF-8;zh-TW;ru-RU.UTF-8;ru-RU;ja-JP.UTF-8;ja-JP;zh-CN.UTF-8;zh-CN MDTM MFF modify;UNIX.group;UNIX.mode; MFMT MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.groupname*;UNIX.mode*;UNIX.owner*;UNIX.ownername*; PBSZ PROT RANG STREAM REST STREAM SIZE SSCN TVFS UTF8 211 End TYPE I 200 Type set to I OPTS UTF8 ON 200 UTF8 set to on PBSZ 0 200 PBSZ 0 successful PROT P 200 Protection set to Private EPSV 229 Entering Extended Passive Mode (|||59900|) 2022/09/16 13:19:41 context deadline exceeded ``` **Additional context** I bisected the problem to this commit 212daf295f0e6ae44131ea12ee353a13fca71091 ``` commit 212daf295f0e6ae44131ea12ee353a13fca71091 Author: Julien Laffaye <jlaffaye@freebsd.org> Date: Mon Feb 28 20:43:42 2022 -0500 Use tls.DialWithDialer which does the handshake tls.DialWithDialer also better handle special error cases ftp.go | 26 +++----------------------- 1 file changed, 3 insertions(+), 23 deletions(-) ``` What appears to be happening is that the connection hangs in the tls Handshake. If I make this small patch to HEAD everything works fine ```patch --- a/ftp.go +++ b/ftp.go @@ -559,7 +559,12 @@ func (c *ServerConn) openDataConn() (net.Conn, error) { } if c.options.tlsConfig != nil { - return tls.DialWithDialer(&c.options.dialer, "tcp", addr, c.options.tlsConfig) + conn, err := c.options.dialer.Dial("tcp", addr) + if err != nil { + return nil, err + } + tlsConn := tls.Client(conn, c.options.tlsConfig) + return tlsConn, nil } return c.options.dialer.Dial("tcp", addr) ``` BUT if I do a tls.Handshake as well (which is what `tls.DialWithDialer`) does then it hangs up ```patch --- a/ftp.go +++ b/ftp.go @@ -559,7 +559,16 @@ func (c *ServerConn) openDataConn() (net.Conn, error) { } if c.options.tlsConfig != nil { - return tls.DialWithDialer(&c.options.dialer, "tcp", addr, c.options.tlsConfig) + conn, err := c.options.dialer.Dial("tcp", addr) + if err != nil { + return nil, err + } + tlsConn := tls.Client(conn, c.options.tlsConfig) + err = tlsConn.Handshake() + if err != nil { + return nil, err + } + return tlsConn, nil } return c.options.dialer.Dial("tcp", addr) ``` I can only think this is either a bug in Go TLS or a bug in openSSL as used by proftpd and pureftpd, but I'm not sure and I'd appreciate any help!
1.0
Data connections with ExplicitTLS hang forever - **Describe the bug** Data connections with TLS Explicit mode seem to be broken **To Reproduce** ```go package main import ( "crypto/tls" "log" "os" "time" "github.com/jlaffaye/ftp" ) func main() { if len(os.Args) != 4 { log.Fatalf("Expecting %s ftp.example.org username password", os.Args[0]) } host, username, password := os.Args[1], os.Args[2], os.Args[3] port := "21" var tlsConf = &tls.Config{ ServerName: host, } c, err := ftp.Dial(host+":"+port, ftp.DialWithExplicitTLS(tlsConf), ftp.DialWithTimeout(20*time.Second), ftp.DialWithDebugOutput(os.Stdout), ) if err != nil { log.Fatal(err) } err = c.Login(username, password) if err != nil { log.Fatal(err) } // Make a listing entries, err := c.List(".") if err != nil { log.Fatal(err) } for _, entry := range entries { log.Printf("%#v", entry) } if err := c.Quit(); err != nil { log.Fatal(err) } } ``` Using this program to connect hangs on the listing **Expected behavior** I expected to see a listing **FTP server** This is connecting to a Hetzner storage box. I've also had reports of the same problem with pureftpd (See https://github.com/rclone/rclone/issues/6426 ) **Debug output** ``` 220 ProFTPD Server (Hetzner Backup) [::ffff:78.47.22.109] AUTH TLS 234 AUTH TLS successful USER uXXXXXX 331 Password required for uXXXXXX PASS supersecretpassword 230 User uXXXXXX logged in FEAT 211-Features: AUTH TLS CCC CLNT EPRT EPSV HOST LANG fr-FR.UTF-8;fr-FR;en-US.UTF-8;en-US;it-IT.UTF-8;it-IT;es-ES.UTF-8;es-ES;bg-BG.UTF-8;bg-BG;ko-KR.UTF-8;ko-KR;zh-TW.UTF-8;zh-TW;ru-RU.UTF-8;ru-RU;ja-JP.UTF-8;ja-JP;zh-CN.UTF-8;zh-CN MDTM MFF modify;UNIX.group;UNIX.mode; MFMT MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.groupname*;UNIX.mode*;UNIX.owner*;UNIX.ownername*; PBSZ PROT RANG STREAM REST STREAM SIZE SSCN TVFS UTF8 211 End TYPE I 200 Type set to I OPTS UTF8 ON 200 UTF8 set to on PBSZ 0 200 PBSZ 0 successful PROT P 200 Protection set to Private EPSV 229 Entering Extended Passive Mode (|||59900|) 2022/09/16 13:19:41 context deadline exceeded ``` **Additional context** I bisected the problem to this commit 212daf295f0e6ae44131ea12ee353a13fca71091 ``` commit 212daf295f0e6ae44131ea12ee353a13fca71091 Author: Julien Laffaye <jlaffaye@freebsd.org> Date: Mon Feb 28 20:43:42 2022 -0500 Use tls.DialWithDialer which does the handshake tls.DialWithDialer also better handle special error cases ftp.go | 26 +++----------------------- 1 file changed, 3 insertions(+), 23 deletions(-) ``` What appears to be happening is that the connection hangs in the tls Handshake. If I make this small patch to HEAD everything works fine ```patch --- a/ftp.go +++ b/ftp.go @@ -559,7 +559,12 @@ func (c *ServerConn) openDataConn() (net.Conn, error) { } if c.options.tlsConfig != nil { - return tls.DialWithDialer(&c.options.dialer, "tcp", addr, c.options.tlsConfig) + conn, err := c.options.dialer.Dial("tcp", addr) + if err != nil { + return nil, err + } + tlsConn := tls.Client(conn, c.options.tlsConfig) + return tlsConn, nil } return c.options.dialer.Dial("tcp", addr) ``` BUT if I do a tls.Handshake as well (which is what `tls.DialWithDialer`) does then it hangs up ```patch --- a/ftp.go +++ b/ftp.go @@ -559,7 +559,16 @@ func (c *ServerConn) openDataConn() (net.Conn, error) { } if c.options.tlsConfig != nil { - return tls.DialWithDialer(&c.options.dialer, "tcp", addr, c.options.tlsConfig) + conn, err := c.options.dialer.Dial("tcp", addr) + if err != nil { + return nil, err + } + tlsConn := tls.Client(conn, c.options.tlsConfig) + err = tlsConn.Handshake() + if err != nil { + return nil, err + } + return tlsConn, nil } return c.options.dialer.Dial("tcp", addr) ``` I can only think this is either a bug in Go TLS or a bug in openSSL as used by proftpd and pureftpd, but I'm not sure and I'd appreciate any help!
defect
data connections with explicittls hang forever describe the bug data connections with tls explicit mode seem to be broken to reproduce go package main import crypto tls log os time github com jlaffaye ftp func main if len os args log fatalf expecting s ftp example org username password os args host username password os args os args os args port var tlsconf tls config servername host c err ftp dial host port ftp dialwithexplicittls tlsconf ftp dialwithtimeout time second ftp dialwithdebugoutput os stdout if err nil log fatal err err c login username password if err nil log fatal err make a listing entries err c list if err nil log fatal err for entry range entries log printf v entry if err c quit err nil log fatal err using this program to connect hangs on the listing expected behavior i expected to see a listing ftp server this is connecting to a hetzner storage box i ve also had reports of the same problem with pureftpd see debug output proftpd server hetzner backup auth tls auth tls successful user uxxxxxx password required for uxxxxxx pass supersecretpassword user uxxxxxx logged in feat features auth tls ccc clnt eprt epsv host lang fr fr utf fr fr en us utf en us it it utf it it es es utf es es bg bg utf bg bg ko kr utf ko kr zh tw utf zh tw ru ru utf ru ru ja jp utf ja jp zh cn utf zh cn mdtm mff modify unix group unix mode mfmt mlst modify perm size type unique unix group unix groupname unix mode unix owner unix ownername pbsz prot rang stream rest stream size sscn tvfs end type i type set to i opts on set to on pbsz pbsz successful prot p protection set to private epsv entering extended passive mode context deadline exceeded additional context i bisected the problem to this commit commit author julien laffaye date mon feb use tls dialwithdialer which does the handshake tls dialwithdialer also better handle special error cases ftp go file changed insertions deletions what appears to be happening is that the connection hangs in the tls handshake if i make this small patch to head everything works fine patch a ftp go b ftp go func c serverconn opendataconn net conn error if c options tlsconfig nil return tls dialwithdialer c options dialer tcp addr c options tlsconfig conn err c options dialer dial tcp addr if err nil return nil err tlsconn tls client conn c options tlsconfig return tlsconn nil return c options dialer dial tcp addr but if i do a tls handshake as well which is what tls dialwithdialer does then it hangs up patch a ftp go b ftp go func c serverconn opendataconn net conn error if c options tlsconfig nil return tls dialwithdialer c options dialer tcp addr c options tlsconfig conn err c options dialer dial tcp addr if err nil return nil err tlsconn tls client conn c options tlsconfig err tlsconn handshake if err nil return nil err return tlsconn nil return c options dialer dial tcp addr i can only think this is either a bug in go tls or a bug in openssl as used by proftpd and pureftpd but i m not sure and i d appreciate any help
1
53,911
13,262,506,375
IssuesEvent
2020-08-20 21:56:33
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
Multi-Core template jobs request lots of RAM (Trac #2324)
Migrated from Trac csky defect
Calculating template sensitivity causes jobs to request ~9GB per cpu instead of 9 total, this immediatly eats up all the RAM available on a cobalt cluster. This happens when the sensitivity trials start, not creating the PDFs or anything. So far have only tested on cobalts <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2324">https://code.icecube.wisc.edu/projects/icecube/ticket/2324</a>, reported by steve.sclafaniand owned by steve.sclafani</em></summary> <p> ```json { "status": "closed", "changetime": "2019-06-11T16:07:37", "_ts": "1560269257226754", "description": "Calculating template sensitivity causes jobs to request ~9GB per cpu instead of 9 total, this immediatly eats up all the RAM available on a cobalt cluster. This happens when the sensitivity trials start, not creating the PDFs or anything. So far have only tested on cobalts", "reporter": "steve.sclafani", "cc": "", "resolution": "fixed", "time": "2019-06-10T18:37:17", "component": "csky", "summary": "Multi-Core template jobs request lots of RAM", "priority": "normal", "keywords": "", "milestone": "", "owner": "steve.sclafani", "type": "defect" } ``` </p> </details>
1.0
Multi-Core template jobs request lots of RAM (Trac #2324) - Calculating template sensitivity causes jobs to request ~9GB per cpu instead of 9 total, this immediatly eats up all the RAM available on a cobalt cluster. This happens when the sensitivity trials start, not creating the PDFs or anything. So far have only tested on cobalts <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2324">https://code.icecube.wisc.edu/projects/icecube/ticket/2324</a>, reported by steve.sclafaniand owned by steve.sclafani</em></summary> <p> ```json { "status": "closed", "changetime": "2019-06-11T16:07:37", "_ts": "1560269257226754", "description": "Calculating template sensitivity causes jobs to request ~9GB per cpu instead of 9 total, this immediatly eats up all the RAM available on a cobalt cluster. This happens when the sensitivity trials start, not creating the PDFs or anything. So far have only tested on cobalts", "reporter": "steve.sclafani", "cc": "", "resolution": "fixed", "time": "2019-06-10T18:37:17", "component": "csky", "summary": "Multi-Core template jobs request lots of RAM", "priority": "normal", "keywords": "", "milestone": "", "owner": "steve.sclafani", "type": "defect" } ``` </p> </details>
defect
multi core template jobs request lots of ram trac calculating template sensitivity causes jobs to request per cpu instead of total this immediatly eats up all the ram available on a cobalt cluster this happens when the sensitivity trials start not creating the pdfs or anything so far have only tested on cobalts migrated from json status closed changetime ts description calculating template sensitivity causes jobs to request per cpu instead of total this immediatly eats up all the ram available on a cobalt cluster this happens when the sensitivity trials start not creating the pdfs or anything so far have only tested on cobalts reporter steve sclafani cc resolution fixed time component csky summary multi core template jobs request lots of ram priority normal keywords milestone owner steve sclafani type defect
1
140,747
5,415,572,670
IssuesEvent
2017-03-01 21:56:00
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
[k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}
kind/flake priority/backlog sig/node
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13693/ Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite} ``` /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:227 Expected error: <*errors.StatusError | 0xc82207de00>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: ""}, Status: "Failure", Message: "the server does not allow access to the requested resource (get pods pod-hostip-b86aa67e-7e13-11e6-a0e9-0242ac110003)", Reason: "Forbidden", Details: { Name: "pod-hostip-b86aa67e-7e13-11e6-a0e9-0242ac110003", Group: "", Kind: "pods", Causes: [ { Type: "UnexpectedServerResponse", Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-pods-3hps0/pods/pod-hostip-b86aa67e-7e13-11e6-a0e9-0242ac110003\"", Field: "", }, ], RetryAfterSeconds: 0, }, Code: 403, }, } the server does not allow access to the requested resource (get pods pod-hostip-b86aa67e-7e13-11e6-a0e9-0242ac110003) not to have occurred ```
1.0
[k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite} - https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13693/ Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite} ``` /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:227 Expected error: <*errors.StatusError | 0xc82207de00>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: ""}, Status: "Failure", Message: "the server does not allow access to the requested resource (get pods pod-hostip-b86aa67e-7e13-11e6-a0e9-0242ac110003)", Reason: "Forbidden", Details: { Name: "pod-hostip-b86aa67e-7e13-11e6-a0e9-0242ac110003", Group: "", Kind: "pods", Causes: [ { Type: "UnexpectedServerResponse", Message: "Forbidden: \"/api/v1/namespaces/e2e-tests-pods-3hps0/pods/pod-hostip-b86aa67e-7e13-11e6-a0e9-0242ac110003\"", Field: "", }, ], RetryAfterSeconds: 0, }, Code: 403, }, } the server does not allow access to the requested resource (get pods pod-hostip-b86aa67e-7e13-11e6-a0e9-0242ac110003) not to have occurred ```
non_defect
pods should get a host ip kubernetes suite failed pods should get a host ip kubernetes suite go src io kubernetes output dockerized go src io kubernetes test pods go expected error errstatus typemeta kind apiversion listmeta selflink resourceversion status failure message the server does not allow access to the requested resource get pods pod hostip reason forbidden details name pod hostip group kind pods causes type unexpectedserverresponse message forbidden api namespaces tests pods pods pod hostip field retryafterseconds code the server does not allow access to the requested resource get pods pod hostip not to have occurred
0
76,909
26,664,028,940
IssuesEvent
2023-01-26 00:37:51
damen-dotcms/issue-test
https://api.github.com/repos/damen-dotcms/issue-test
closed
Remove all the colons
Type : Defect
[![Screenshot_2022-12-05_21-27-36.png](https://mrkr.io/s/638e62480e5e8b45720722f0/2)](https://mrkr.io/s/638e62480e5e8b45720722f0/0) --- **Reported by:** Melissa Rojas Rodríguez (melissa.rojas@dotcms.com) **Source URL:** [https://demo.dotcms.com/dotAdmin/#/c/c_Blog-Entries](https://demo.dotcms.com/dotAdmin/#/c/c_Blog-Entries) **Issue details:** [Open in Marker.io](https://app.marker.io/i/638e62480e5e8b45720722f3_d688338ddee4174f?advanced=1) <table><tr><td><strong>Device type</strong></td><td>desktop</td></tr><tr><td><strong>Browser</strong></td><td>Chrome 107.0.0.0</td></tr><tr><td><strong>Screen Size</strong></td><td>1440 x 900</td></tr><tr><td><strong>OS</strong></td><td>OS X 10.14.6</td></tr><tr><td><strong>Viewport Size</strong></td><td>1440 x 821</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@&#8203;2x</td></tr></table>
1.0
Remove all the colons - [![Screenshot_2022-12-05_21-27-36.png](https://mrkr.io/s/638e62480e5e8b45720722f0/2)](https://mrkr.io/s/638e62480e5e8b45720722f0/0) --- **Reported by:** Melissa Rojas Rodríguez (melissa.rojas@dotcms.com) **Source URL:** [https://demo.dotcms.com/dotAdmin/#/c/c_Blog-Entries](https://demo.dotcms.com/dotAdmin/#/c/c_Blog-Entries) **Issue details:** [Open in Marker.io](https://app.marker.io/i/638e62480e5e8b45720722f3_d688338ddee4174f?advanced=1) <table><tr><td><strong>Device type</strong></td><td>desktop</td></tr><tr><td><strong>Browser</strong></td><td>Chrome 107.0.0.0</td></tr><tr><td><strong>Screen Size</strong></td><td>1440 x 900</td></tr><tr><td><strong>OS</strong></td><td>OS X 10.14.6</td></tr><tr><td><strong>Viewport Size</strong></td><td>1440 x 821</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@&#8203;2x</td></tr></table>
defect
remove all the colons reported by melissa rojas rodríguez melissa rojas dotcms com source url issue details device type desktop browser chrome screen size x os os x viewport size x zoom level pixel ratio
1
16,309
2,889,333,173
IssuesEvent
2015-06-13 09:54:57
kuribot/boilerpipe
https://api.github.com/repos/kuribot/boilerpipe
closed
How to debug the result?
auto-migrated Priority-Medium Type-Defect
``` I just interested to know if a block has been removed, what's the reason? As I see in the source code, each block is labelled for different conditions. How to easily represent it, for example in the tooltip of that block when viewing the highlighted html. Is it possible? ``` Original issue reported on code.google.com by `jadidine...@gmail.com` on 24 Jan 2015 at 2:27
1.0
How to debug the result? - ``` I just interested to know if a block has been removed, what's the reason? As I see in the source code, each block is labelled for different conditions. How to easily represent it, for example in the tooltip of that block when viewing the highlighted html. Is it possible? ``` Original issue reported on code.google.com by `jadidine...@gmail.com` on 24 Jan 2015 at 2:27
defect
how to debug the result i just interested to know if a block has been removed what s the reason as i see in the source code each block is labelled for different conditions how to easily represent it for example in the tooltip of that block when viewing the highlighted html is it possible original issue reported on code google com by jadidine gmail com on jan at
1
4,020
2,610,085,991
IssuesEvent
2015-02-26 18:26:08
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳长青春痘怎么祛
auto-migrated Priority-Medium Type-Defect
``` 深圳长青春痘怎么祛【深圳韩方科颜全国热线400-869-1818,24小 时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:08
1.0
深圳长青春痘怎么祛 - ``` 深圳长青春痘怎么祛【深圳韩方科颜全国热线400-869-1818,24小 时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:08
defect
深圳长青春痘怎么祛 深圳长青春痘怎么祛【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 original issue reported on code google com by szft com on may at
1
216,653
16,794,398,360
IssuesEvent
2021-06-16 00:04:08
microsoft/appcenter
https://api.github.com/repos/microsoft/appcenter
closed
Support for mobile browsers or WebView context
feature request test
**Describe the solution you'd like** Since a lot of our mobile apps are using web login, we would like to be able to run automation on mobile browser. Currently, App Center cannot detect web element. <br/> We are aware that currently this feature is not available and stated in the documentation. <img width="527" alt="Screenshot_12" src="https://user-images.githubusercontent.com/26863119/110426093-6e826100-80e0-11eb-808c-08ec716a4338.png"> <br/> <img width="720" alt="Screenshot_8" src="https://user-images.githubusercontent.com/26863119/110425931-28c59880-80e0-11eb-9c56-79e5216ad868.png">
1.0
Support for mobile browsers or WebView context - **Describe the solution you'd like** Since a lot of our mobile apps are using web login, we would like to be able to run automation on mobile browser. Currently, App Center cannot detect web element. <br/> We are aware that currently this feature is not available and stated in the documentation. <img width="527" alt="Screenshot_12" src="https://user-images.githubusercontent.com/26863119/110426093-6e826100-80e0-11eb-808c-08ec716a4338.png"> <br/> <img width="720" alt="Screenshot_8" src="https://user-images.githubusercontent.com/26863119/110425931-28c59880-80e0-11eb-9c56-79e5216ad868.png">
non_defect
support for mobile browsers or webview context describe the solution you d like since a lot of our mobile apps are using web login we would like to be able to run automation on mobile browser currently app center cannot detect web element we are aware that currently this feature is not available and stated in the documentation img width alt screenshot src img width alt screenshot src
0
579,005
17,169,932,528
IssuesEvent
2021-07-15 01:49:41
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
SocketsHttpHandler should use first available connection (don't wait on new connection creation)
Cost:M Priority:2 area-System.Net.Http enhancement in pr tenet-performance
[AB#1254012](https://devdiv.visualstudio.com/10e66e43-9645-4201-b128-0fdc3769cc17/_workitems/edit/1254012) Currently, when we have no idle HTTP/1.1 connections (and are not at the connection limit) and a new request comes in, we will create a new connection and then use it for the new request. The connect itself may take a while, especially for HTTPS connections. During this time, an existing connection may complete its current request and become available. If so, we should just use this idle connection instead of waiting for the new connection to finish its connect. This should help request latency, and probably reduce the number of connections created during burst usage as well (see https://github.com/dotnet/runtime/issues/43764). More details: When this happens, the new connection can still be put in the pool as idle, even if we don't have a request for it at the moment. If another new request comes in, we will use it like any other idle connection. Note also that we should avoid creating new connections when we already have enough pending connections to (eventually) handle all the pending requests. Consider this scenario, starting with no connections: - Request A arrives, we initiate a connect for Connection 1, wait for it, and start processing A on it - Request B arrives, we initiate a connect for Connection 2. Before the connect completes, A finishes and Connection 1 becomes available. We start processing B on Connection 1. - Request C arrives. There's no established connection for it to use, but there *is* a pending connection (Connection 2). So there's no need for us to initiate another connection right now. - Request D arrives. Now, we have 0 available connections, 1 pending connection (Connection 2), and 2 pending requests (C and D). So we should go ahead and initiate a new connect for Connection 3.
1.0
SocketsHttpHandler should use first available connection (don't wait on new connection creation) - [AB#1254012](https://devdiv.visualstudio.com/10e66e43-9645-4201-b128-0fdc3769cc17/_workitems/edit/1254012) Currently, when we have no idle HTTP/1.1 connections (and are not at the connection limit) and a new request comes in, we will create a new connection and then use it for the new request. The connect itself may take a while, especially for HTTPS connections. During this time, an existing connection may complete its current request and become available. If so, we should just use this idle connection instead of waiting for the new connection to finish its connect. This should help request latency, and probably reduce the number of connections created during burst usage as well (see https://github.com/dotnet/runtime/issues/43764). More details: When this happens, the new connection can still be put in the pool as idle, even if we don't have a request for it at the moment. If another new request comes in, we will use it like any other idle connection. Note also that we should avoid creating new connections when we already have enough pending connections to (eventually) handle all the pending requests. Consider this scenario, starting with no connections: - Request A arrives, we initiate a connect for Connection 1, wait for it, and start processing A on it - Request B arrives, we initiate a connect for Connection 2. Before the connect completes, A finishes and Connection 1 becomes available. We start processing B on Connection 1. - Request C arrives. There's no established connection for it to use, but there *is* a pending connection (Connection 2). So there's no need for us to initiate another connection right now. - Request D arrives. Now, we have 0 available connections, 1 pending connection (Connection 2), and 2 pending requests (C and D). So we should go ahead and initiate a new connect for Connection 3.
non_defect
socketshttphandler should use first available connection don t wait on new connection creation currently when we have no idle http connections and are not at the connection limit and a new request comes in we will create a new connection and then use it for the new request the connect itself may take a while especially for https connections during this time an existing connection may complete its current request and become available if so we should just use this idle connection instead of waiting for the new connection to finish its connect this should help request latency and probably reduce the number of connections created during burst usage as well see more details when this happens the new connection can still be put in the pool as idle even if we don t have a request for it at the moment if another new request comes in we will use it like any other idle connection note also that we should avoid creating new connections when we already have enough pending connections to eventually handle all the pending requests consider this scenario starting with no connections request a arrives we initiate a connect for connection wait for it and start processing a on it request b arrives we initiate a connect for connection before the connect completes a finishes and connection becomes available we start processing b on connection request c arrives there s no established connection for it to use but there is a pending connection connection so there s no need for us to initiate another connection right now request d arrives now we have available connections pending connection connection and pending requests c and d so we should go ahead and initiate a new connect for connection
0
96,897
8,636,706,689
IssuesEvent
2018-11-23 08:43:36
opsdroid/opsdroid
https://api.github.com/repos/opsdroid/opsdroid
closed
Refactor Tests: Replace logmock with self.assertLogs
beginner help wanted low hanging fruit tests
Last year I needed to figure out how to assert if a logging level was called. I tried a few things and came up with the solution to mock the `_LOGGER.<level>` and then assert if logmock was called. I have noticed that you can use a `self.assertLogs(self, logger, level)` to check if a message was logged. Since when we are testing we don't really care about what message was logged but if the message was logging we should replace every `amock.patch('opsdroid.core._LOGGER.exception') as logmock` with the `self.assertLogs`. This should make the tests a bit more readable and easier to maintain in the future. This Issue is also great for new contributors since it's quite easy to implement. For a better explanation check the example: _Old style: test_main.py:132-136_ ```python def test_welcome_message(self): config = {"welcome-message": True} with mock.patch('opsdroid.__main__._LOGGER.info') as logmock: opsdroid.welcome_message(config) self.assertTrue(logmock.called) ``` _Refactored test_ ```python def test_welcome_message(self): config = {"welcome-message": True} opsdroid.welcome_message(config) self.assertLogs('_LOGGER', 'info') ``` If you need any help with this issue make sure to check out [gitter channel](https://gitter.im/opsdroid/general).
1.0
Refactor Tests: Replace logmock with self.assertLogs - Last year I needed to figure out how to assert if a logging level was called. I tried a few things and came up with the solution to mock the `_LOGGER.<level>` and then assert if logmock was called. I have noticed that you can use a `self.assertLogs(self, logger, level)` to check if a message was logged. Since when we are testing we don't really care about what message was logged but if the message was logging we should replace every `amock.patch('opsdroid.core._LOGGER.exception') as logmock` with the `self.assertLogs`. This should make the tests a bit more readable and easier to maintain in the future. This Issue is also great for new contributors since it's quite easy to implement. For a better explanation check the example: _Old style: test_main.py:132-136_ ```python def test_welcome_message(self): config = {"welcome-message": True} with mock.patch('opsdroid.__main__._LOGGER.info') as logmock: opsdroid.welcome_message(config) self.assertTrue(logmock.called) ``` _Refactored test_ ```python def test_welcome_message(self): config = {"welcome-message": True} opsdroid.welcome_message(config) self.assertLogs('_LOGGER', 'info') ``` If you need any help with this issue make sure to check out [gitter channel](https://gitter.im/opsdroid/general).
non_defect
refactor tests replace logmock with self assertlogs last year i needed to figure out how to assert if a logging level was called i tried a few things and came up with the solution to mock the logger and then assert if logmock was called i have noticed that you can use a self assertlogs self logger level to check if a message was logged since when we are testing we don t really care about what message was logged but if the message was logging we should replace every amock patch opsdroid core logger exception as logmock with the self assertlogs this should make the tests a bit more readable and easier to maintain in the future this issue is also great for new contributors since it s quite easy to implement for a better explanation check the example old style test main py python def test welcome message self config welcome message true with mock patch opsdroid main logger info as logmock opsdroid welcome message config self asserttrue logmock called refactored test python def test welcome message self config welcome message true opsdroid welcome message config self assertlogs logger info if you need any help with this issue make sure to check out
0
73,571
24,695,400,561
IssuesEvent
2022-10-19 11:43:33
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
InputNumber: "minValue>0 <1" shows js warning
:lady_beetle: defect workaround
Originally fixed in https://github.com/primefaces/primefaces/issues/8125 inputNumber is still broken if minValue is a decimal between 0 and 1, just try this to see: ```xhtml <p:inputNumber minValue="0.01"/> ``` I think that here: https://github.com/primefaces/primefaces/blob/e8d3e2968dac710b725a9f2e761178c22063f0d0/primefaces/src/main/resources/META-INF/resources/primefaces/inputnumber/1-inputnumber.js#L42-L45 the if condition should be: ```js if (this.cfg.minimumValue > 0.0000001 || this.cfg.maximumValue < 0) ``` cc @NicolaIsotta
1.0
InputNumber: "minValue>0 <1" shows js warning - Originally fixed in https://github.com/primefaces/primefaces/issues/8125 inputNumber is still broken if minValue is a decimal between 0 and 1, just try this to see: ```xhtml <p:inputNumber minValue="0.01"/> ``` I think that here: https://github.com/primefaces/primefaces/blob/e8d3e2968dac710b725a9f2e761178c22063f0d0/primefaces/src/main/resources/META-INF/resources/primefaces/inputnumber/1-inputnumber.js#L42-L45 the if condition should be: ```js if (this.cfg.minimumValue > 0.0000001 || this.cfg.maximumValue < 0) ``` cc @NicolaIsotta
defect
inputnumber minvalue shows js warning originally fixed in inputnumber is still broken if minvalue is a decimal between and just try this to see xhtml i think that here the if condition should be js if this cfg minimumvalue this cfg maximumvalue cc nicolaisotta
1
199,772
15,782,436,552
IssuesEvent
2021-04-01 12:47:55
ethereum/solidity
https://api.github.com/repos/ethereum/solidity
closed
[Doc] Mention that low-level calls do not have an extcodesize check
documentation :book:
High-level function calls first check that the called address has code by using `extcodesize`. This works around a peculiar detail of the EVM: Calls to contracts without code succeed silently instead of failing. For low-level calls (members of ``address``), this check is not present and that fact should be documented there.
1.0
[Doc] Mention that low-level calls do not have an extcodesize check - High-level function calls first check that the called address has code by using `extcodesize`. This works around a peculiar detail of the EVM: Calls to contracts without code succeed silently instead of failing. For low-level calls (members of ``address``), this check is not present and that fact should be documented there.
non_defect
mention that low level calls do not have an extcodesize check high level function calls first check that the called address has code by using extcodesize this works around a peculiar detail of the evm calls to contracts without code succeed silently instead of failing for low level calls members of address this check is not present and that fact should be documented there
0
62,049
17,023,839,826
IssuesEvent
2021-07-03 04:06:59
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Errors in SQL definition
Component: nominatim Priority: minor Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 8.08pm, Sunday, 11th November 2012]** in sql/tables.sql these lines are not needed since placex table is correctly created ``` alter table placex add column geometry_sector INTEGER; alter table placex add column indexed_status INTEGER; alter table placex add column indexed_date TIMESTAMP; ``` 2012-11-02 20:51:10 CET ERROR: column "geometry_sector" of relation "placex" already exists 2012-11-02 20:51:10 CET STATEMENT: alter table placex add column geometry_sector INTEGER; 2012-11-02 20:51:10 CET ERROR: column "indexed_status" of relation "placex" already exists 2012-11-02 20:51:10 CET STATEMENT: alter table placex add column indexed_status INTEGER; 2012-11-02 20:51:10 CET ERROR: column "indexed_date" of relation "placex" already exists 2012-11-02 20:51:10 CET STATEMENT: alter table placex add column indexed_date TIMESTAMP; geometry_sector() is not defined as a function then ``` update placex set geometry_sector = geometry_sector(geometry); ``` create an error 2012-11-02 20:51:10 CET ERROR: function geometry_sector(geometry) does not exist at character 37 2012-11-02 20:51:10 CET HINT: No function matches the given name and argument types. You might need to add explicit type casts. 2012-11-02 20:51:10 CET STATEMENT: update placex set geometry_sector = geometry_sector(geometry); indexed column seem to have been replaced by indexed_status 2 years ago then ``` CREATE INDEX idx_placex_pendingbylatlon ON placex USING BTREE (geometry_index(geometry_sector,indexed,name),rank_search) where geometry_index(geometry_sector,indexed,name) IS NOT NULL; CREATE INDEX idx_placex_interpolation ON placex USING BTREE (geometry_sector) where indexed = false and class='place' and type='houses'; ``` creates error in pgsql 2012-11-11 18:26:05 CET ERROR: column "indexed" does not exist at character 178 2012-11-11 18:26:05 CET STATEMENT: CREATE INDEX idx_placex_pendingbylatlon ON placex USING BTREE (geometry_index(geometry_sector,indexed,name),rank_search) TABLESPACE tbl2 WHERE geometry_index(geometry_sector,indexed,name) IS NOT NULL; 2012-11-11 18:29:21 CET ERROR: column "indexed" does not exist at character 101 2012-11-11 18:29:21 CET STATEMENT: CREATE INDEX idx_placex_interpolation ON placex USING BTREE (geometry_sector) TABLESPACE tbl2 WHERE indexed = false and class='place' and type='houses'; idx_placex_interpolation in created later in sql/indices.src.sql but what about idx_placex_pendingbylatlon?
1.0
Errors in SQL definition - **[Submitted to the original trac issue database at 8.08pm, Sunday, 11th November 2012]** in sql/tables.sql these lines are not needed since placex table is correctly created ``` alter table placex add column geometry_sector INTEGER; alter table placex add column indexed_status INTEGER; alter table placex add column indexed_date TIMESTAMP; ``` 2012-11-02 20:51:10 CET ERROR: column "geometry_sector" of relation "placex" already exists 2012-11-02 20:51:10 CET STATEMENT: alter table placex add column geometry_sector INTEGER; 2012-11-02 20:51:10 CET ERROR: column "indexed_status" of relation "placex" already exists 2012-11-02 20:51:10 CET STATEMENT: alter table placex add column indexed_status INTEGER; 2012-11-02 20:51:10 CET ERROR: column "indexed_date" of relation "placex" already exists 2012-11-02 20:51:10 CET STATEMENT: alter table placex add column indexed_date TIMESTAMP; geometry_sector() is not defined as a function then ``` update placex set geometry_sector = geometry_sector(geometry); ``` create an error 2012-11-02 20:51:10 CET ERROR: function geometry_sector(geometry) does not exist at character 37 2012-11-02 20:51:10 CET HINT: No function matches the given name and argument types. You might need to add explicit type casts. 2012-11-02 20:51:10 CET STATEMENT: update placex set geometry_sector = geometry_sector(geometry); indexed column seem to have been replaced by indexed_status 2 years ago then ``` CREATE INDEX idx_placex_pendingbylatlon ON placex USING BTREE (geometry_index(geometry_sector,indexed,name),rank_search) where geometry_index(geometry_sector,indexed,name) IS NOT NULL; CREATE INDEX idx_placex_interpolation ON placex USING BTREE (geometry_sector) where indexed = false and class='place' and type='houses'; ``` creates error in pgsql 2012-11-11 18:26:05 CET ERROR: column "indexed" does not exist at character 178 2012-11-11 18:26:05 CET STATEMENT: CREATE INDEX idx_placex_pendingbylatlon ON placex USING BTREE (geometry_index(geometry_sector,indexed,name),rank_search) TABLESPACE tbl2 WHERE geometry_index(geometry_sector,indexed,name) IS NOT NULL; 2012-11-11 18:29:21 CET ERROR: column "indexed" does not exist at character 101 2012-11-11 18:29:21 CET STATEMENT: CREATE INDEX idx_placex_interpolation ON placex USING BTREE (geometry_sector) TABLESPACE tbl2 WHERE indexed = false and class='place' and type='houses'; idx_placex_interpolation in created later in sql/indices.src.sql but what about idx_placex_pendingbylatlon?
defect
errors in sql definition in sql tables sql these lines are not needed since placex table is correctly created alter table placex add column geometry sector integer alter table placex add column indexed status integer alter table placex add column indexed date timestamp cet error column geometry sector of relation placex already exists cet statement alter table placex add column geometry sector integer cet error column indexed status of relation placex already exists cet statement alter table placex add column indexed status integer cet error column indexed date of relation placex already exists cet statement alter table placex add column indexed date timestamp geometry sector is not defined as a function then update placex set geometry sector geometry sector geometry create an error cet error function geometry sector geometry does not exist at character cet hint no function matches the given name and argument types you might need to add explicit type casts cet statement update placex set geometry sector geometry sector geometry indexed column seem to have been replaced by indexed status years ago then create index idx placex pendingbylatlon on placex using btree geometry index geometry sector indexed name rank search where geometry index geometry sector indexed name is not null create index idx placex interpolation on placex using btree geometry sector where indexed false and class place and type houses creates error in pgsql cet error column indexed does not exist at character cet statement create index idx placex pendingbylatlon on placex using btree geometry index geometry sector indexed name rank search tablespace where geometry index geometry sector indexed name is not null cet error column indexed does not exist at character cet statement create index idx placex interpolation on placex using btree geometry sector tablespace where indexed false and class place and type houses idx placex interpolation in created later in sql indices src sql but what about idx placex pendingbylatlon
1
124,213
17,772,497,122
IssuesEvent
2021-08-30 15:08:06
kapseliboi/core
https://api.github.com/repos/kapseliboi/core
opened
CVE-2019-17592 (High) detected in csv-parse-1.3.3.tgz
security vulnerability
## CVE-2019-17592 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>csv-parse-1.3.3.tgz</b></p></summary> <p>CSV parsing implementing the Node.js `stream.Transform` API</p> <p>Library home page: <a href="https://registry.npmjs.org/csv-parse/-/csv-parse-1.3.3.tgz">https://registry.npmjs.org/csv-parse/-/csv-parse-1.3.3.tgz</a></p> <p> Dependency Hierarchy: - restify-4.3.2.tgz (Root Library) - csv-0.4.6.tgz - :x: **csv-parse-1.3.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kapseliboi/core/commit/e874360a594fee39a58bef623926d2a19086771d">e874360a594fee39a58bef623926d2a19086771d</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The csv-parse module before 4.4.6 for Node.js is vulnerable to Regular Expression Denial of Service. The __isInt() function contains a malformed regular expression that processes large crafted input very slowly. This is triggered when using the cast option. <p>Publish Date: 2019-10-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17592>CVE-2019-17592</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1171">https://www.npmjs.com/advisories/1171</a></p> <p>Release Date: 2019-10-14</p> <p>Fix Resolution: 4.4.6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-17592 (High) detected in csv-parse-1.3.3.tgz - ## CVE-2019-17592 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>csv-parse-1.3.3.tgz</b></p></summary> <p>CSV parsing implementing the Node.js `stream.Transform` API</p> <p>Library home page: <a href="https://registry.npmjs.org/csv-parse/-/csv-parse-1.3.3.tgz">https://registry.npmjs.org/csv-parse/-/csv-parse-1.3.3.tgz</a></p> <p> Dependency Hierarchy: - restify-4.3.2.tgz (Root Library) - csv-0.4.6.tgz - :x: **csv-parse-1.3.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kapseliboi/core/commit/e874360a594fee39a58bef623926d2a19086771d">e874360a594fee39a58bef623926d2a19086771d</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The csv-parse module before 4.4.6 for Node.js is vulnerable to Regular Expression Denial of Service. The __isInt() function contains a malformed regular expression that processes large crafted input very slowly. This is triggered when using the cast option. <p>Publish Date: 2019-10-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17592>CVE-2019-17592</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1171">https://www.npmjs.com/advisories/1171</a></p> <p>Release Date: 2019-10-14</p> <p>Fix Resolution: 4.4.6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in csv parse tgz cve high severity vulnerability vulnerable library csv parse tgz csv parsing implementing the node js stream transform api library home page a href dependency hierarchy restify tgz root library csv tgz x csv parse tgz vulnerable library found in head commit a href found in base branch master vulnerability details the csv parse module before for node js is vulnerable to regular expression denial of service the isint function contains a malformed regular expression that processes large crafted input very slowly this is triggered when using the cast option publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
273,036
29,800,384,805
IssuesEvent
2023-06-16 07:38:46
billmcchesney1/foxtrot
https://api.github.com/repos/billmcchesney1/foxtrot
closed
CVE-2017-18214 (High) detected in multiple libraries - autoclosed
Mend: dependency security vulnerability
## CVE-2017-18214 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>moment-2.6.0.min.js</b>, <b>moment-2.15.1.min.js</b>, <b>moment-2.18.1.min.js</b></p></summary> <p> <details><summary><b>moment-2.6.0.min.js</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.6.0/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.6.0/moment.min.js</a></p> <p>Path to dependency file: /foxtrot-server/target/classes/console/index.html</p> <p>Path to vulnerable library: /foxtrot-server/src/main/resources/console/js/moment.min.js,/foxtrot-server/target/classes/console/js/moment.min.js,/foxtrot-server/src/main/resources/console/fql/../js/moment.min.js,/foxtrot-server/src/main/resources/console/js/moment.min.js,/foxtrot-server/target/classes/console/js/moment.min.js,/foxtrot-server/target/classes/console/fql/../js/moment.min.js</p> <p> Dependency Hierarchy: - :x: **moment-2.6.0.min.js** (Vulnerable Library) </details> <details><summary><b>moment-2.15.1.min.js</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.15.1/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.15.1/moment.min.js</a></p> <p>Path to dependency file: /foxtrot-server/src/main/resources/console/echo/index.htm</p> <p>Path to vulnerable library: /foxtrot-server/target/classes/console/echo/js/moment.min.js,/foxtrot-server/src/main/resources/console/echo/js/moment.min.js,/foxtrot-server/src/main/resources/console/echo/js/moment.min.js,/foxtrot-server/target/classes/console/echo/js/moment.min.js</p> <p> Dependency Hierarchy: - :x: **moment-2.15.1.min.js** (Vulnerable Library) </details> <details><summary><b>moment-2.18.1.min.js</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.min.js</a></p> <p>Path to dependency file: /foxtrot-server/target/classes/console/echo/browse-events.htm</p> <p>Path to vulnerable library: /foxtrot-server/target/classes/console/echo/js/datepicker.moment.min.js,/foxtrot-server/target/classes/console/echo/fql/../js/datepicker.moment.min.js,/foxtrot-server/src/main/resources/console/echo/js/datepicker.moment.min.js,/foxtrot-server/src/main/resources/console/echo/fql/../js/datepicker.moment.min.js,/foxtrot-server/src/main/resources/console/echo/js/datepicker.moment.min.js,/foxtrot-server/target/classes/console/echo/js/datepicker.moment.min.js</p> <p> Dependency Hierarchy: - :x: **moment-2.18.1.min.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/foxtrot/commit/ffb8a6014463ce8aac1bf6e7dc9a23fc4a2a8adc">ffb8a6014463ce8aac1bf6e7dc9a23fc4a2a8adc</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The moment module before 2.19.3 for Node.js is prone to a regular expression denial of service via a crafted date string, a different vulnerability than CVE-2016-4055. <p>Publish Date: 2018-03-04 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-18214>CVE-2017-18214</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-446m-mv8f-q348">https://github.com/advisories/GHSA-446m-mv8f-q348</a></p> <p>Release Date: 2018-03-04</p> <p>Fix Resolution: moment - 2.19.3</p> </p> </details> <p></p>
True
CVE-2017-18214 (High) detected in multiple libraries - autoclosed - ## CVE-2017-18214 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>moment-2.6.0.min.js</b>, <b>moment-2.15.1.min.js</b>, <b>moment-2.18.1.min.js</b></p></summary> <p> <details><summary><b>moment-2.6.0.min.js</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.6.0/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.6.0/moment.min.js</a></p> <p>Path to dependency file: /foxtrot-server/target/classes/console/index.html</p> <p>Path to vulnerable library: /foxtrot-server/src/main/resources/console/js/moment.min.js,/foxtrot-server/target/classes/console/js/moment.min.js,/foxtrot-server/src/main/resources/console/fql/../js/moment.min.js,/foxtrot-server/src/main/resources/console/js/moment.min.js,/foxtrot-server/target/classes/console/js/moment.min.js,/foxtrot-server/target/classes/console/fql/../js/moment.min.js</p> <p> Dependency Hierarchy: - :x: **moment-2.6.0.min.js** (Vulnerable Library) </details> <details><summary><b>moment-2.15.1.min.js</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.15.1/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.15.1/moment.min.js</a></p> <p>Path to dependency file: /foxtrot-server/src/main/resources/console/echo/index.htm</p> <p>Path to vulnerable library: /foxtrot-server/target/classes/console/echo/js/moment.min.js,/foxtrot-server/src/main/resources/console/echo/js/moment.min.js,/foxtrot-server/src/main/resources/console/echo/js/moment.min.js,/foxtrot-server/target/classes/console/echo/js/moment.min.js</p> <p> Dependency Hierarchy: - :x: **moment-2.15.1.min.js** (Vulnerable Library) </details> <details><summary><b>moment-2.18.1.min.js</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.min.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.min.js</a></p> <p>Path to dependency file: /foxtrot-server/target/classes/console/echo/browse-events.htm</p> <p>Path to vulnerable library: /foxtrot-server/target/classes/console/echo/js/datepicker.moment.min.js,/foxtrot-server/target/classes/console/echo/fql/../js/datepicker.moment.min.js,/foxtrot-server/src/main/resources/console/echo/js/datepicker.moment.min.js,/foxtrot-server/src/main/resources/console/echo/fql/../js/datepicker.moment.min.js,/foxtrot-server/src/main/resources/console/echo/js/datepicker.moment.min.js,/foxtrot-server/target/classes/console/echo/js/datepicker.moment.min.js</p> <p> Dependency Hierarchy: - :x: **moment-2.18.1.min.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/foxtrot/commit/ffb8a6014463ce8aac1bf6e7dc9a23fc4a2a8adc">ffb8a6014463ce8aac1bf6e7dc9a23fc4a2a8adc</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The moment module before 2.19.3 for Node.js is prone to a regular expression denial of service via a crafted date string, a different vulnerability than CVE-2016-4055. <p>Publish Date: 2018-03-04 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-18214>CVE-2017-18214</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-446m-mv8f-q348">https://github.com/advisories/GHSA-446m-mv8f-q348</a></p> <p>Release Date: 2018-03-04</p> <p>Fix Resolution: moment - 2.19.3</p> </p> </details> <p></p>
non_defect
cve high detected in multiple libraries autoclosed cve high severity vulnerability vulnerable libraries moment min js moment min js moment min js moment min js parse validate manipulate and display dates library home page a href path to dependency file foxtrot server target classes console index html path to vulnerable library foxtrot server src main resources console js moment min js foxtrot server target classes console js moment min js foxtrot server src main resources console fql js moment min js foxtrot server src main resources console js moment min js foxtrot server target classes console js moment min js foxtrot server target classes console fql js moment min js dependency hierarchy x moment min js vulnerable library moment min js parse validate manipulate and display dates library home page a href path to dependency file foxtrot server src main resources console echo index htm path to vulnerable library foxtrot server target classes console echo js moment min js foxtrot server src main resources console echo js moment min js foxtrot server src main resources console echo js moment min js foxtrot server target classes console echo js moment min js dependency hierarchy x moment min js vulnerable library moment min js parse validate manipulate and display dates library home page a href path to dependency file foxtrot server target classes console echo browse events htm path to vulnerable library foxtrot server target classes console echo js datepicker moment min js foxtrot server target classes console echo fql js datepicker moment min js foxtrot server src main resources console echo js datepicker moment min js foxtrot server src main resources console echo fql js datepicker moment min js foxtrot server src main resources console echo js datepicker moment min js foxtrot server target classes console echo js datepicker moment min js dependency hierarchy x moment min js vulnerable library found in head commit a href found in base branch master vulnerability details the moment module before for node js is prone to a regular expression denial of service via a crafted date string a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution moment
0
286
2,523,046,960
IssuesEvent
2015-01-20 06:06:27
AtlasOfLivingAustralia/biocache-hubs
https://api.github.com/repos/AtlasOfLivingAustralia/biocache-hubs
closed
AVH hub: advanced search form changes
bug priority-medium status-fixed status-new type-defect
*migrated from:* https://code.google.com/p/ala/issues/detail?id=642 *date:* Sat Apr 19 20:34:32 2014 *author:* nickdos --- From Niels: Would it be possible, in the Advanced search form, to remove the full text search from the Specimen section and place it in a section of its own right at the top? The section header can be 'Full text search' and the label for the text field can just be 'Text'. I have started using SOLR for Flora of Victoria, so I know much better now how to query and have discovered that you can put pretty much any query string into the Full text search field and get the results you want (as long as you enclose the entire string inside with parentheses). Quite excited about that, so we are writing an AVH News blog entry about the extra things that you can do with the Full text search that you can't already do in the Advanced query form. So we think the Full text search will become more important for AVH users and deserves a more prominent place. Will clean up the form nicely too, as we only put it under Specimen for want of a better place. Thanks, Niels
1.0
AVH hub: advanced search form changes - *migrated from:* https://code.google.com/p/ala/issues/detail?id=642 *date:* Sat Apr 19 20:34:32 2014 *author:* nickdos --- From Niels: Would it be possible, in the Advanced search form, to remove the full text search from the Specimen section and place it in a section of its own right at the top? The section header can be 'Full text search' and the label for the text field can just be 'Text'. I have started using SOLR for Flora of Victoria, so I know much better now how to query and have discovered that you can put pretty much any query string into the Full text search field and get the results you want (as long as you enclose the entire string inside with parentheses). Quite excited about that, so we are writing an AVH News blog entry about the extra things that you can do with the Full text search that you can't already do in the Advanced query form. So we think the Full text search will become more important for AVH users and deserves a more prominent place. Will clean up the form nicely too, as we only put it under Specimen for want of a better place. Thanks, Niels
defect
avh hub advanced search form changes migrated from date sat apr author nickdos from niels would it be possible in the advanced search form to remove the full text search from the specimen section and place it in a section of its own right at the top the section header can be full text search and the label for the text field can just be text i have started using solr for flora of victoria so i know much better now how to query and have discovered that you can put pretty much any query string into the full text search field and get the results you want as long as you enclose the entire string inside with parentheses quite excited about that so we are writing an avh news blog entry about the extra things that you can do with the full text search that you can t already do in the advanced query form so we think the full text search will become more important for avh users and deserves a more prominent place will clean up the form nicely too as we only put it under specimen for want of a better place thanks niels
1
287,685
21,670,994,015
IssuesEvent
2022-05-08 00:15:04
aws/amazon-vpc-cni-k8s
https://api.github.com/repos/aws/amazon-vpc-cni-k8s
closed
CNI log collector script not working with bottlerocket
bug enhancement help wanted documentation stale
<!-- For urgent operational issues, please contact AWS Support directly at https://aws.amazon.com/premiumsupport/ If you think you have found a potential security issue, please do not post it as an issue. Instead, follow the instructions at https://aws.amazon.com/security/vulnerability-reporting/ or email AWS Security directly at aws-security@amazon.com --> **What happened**: <!-- Include log lines if possible --> There are recent requests mentioning CNI log collector script is not working with bottlerocket. **Attach logs** <!-- Please upload the logs by running [CNI Log Collection tool] since it will help faster resolution `sudo bash /opt/cni/bin/aws-cni-support.sh` --> **What you expected to happen**: Script should work as expected. **How to reproduce it (as minimally and precisely as possible)**: Run the log collector script. **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`) - CNI Version - N/A - OS (e.g: `cat /etc/os-release`): - Kernel (e.g. `uname -a`):
1.0
CNI log collector script not working with bottlerocket - <!-- For urgent operational issues, please contact AWS Support directly at https://aws.amazon.com/premiumsupport/ If you think you have found a potential security issue, please do not post it as an issue. Instead, follow the instructions at https://aws.amazon.com/security/vulnerability-reporting/ or email AWS Security directly at aws-security@amazon.com --> **What happened**: <!-- Include log lines if possible --> There are recent requests mentioning CNI log collector script is not working with bottlerocket. **Attach logs** <!-- Please upload the logs by running [CNI Log Collection tool] since it will help faster resolution `sudo bash /opt/cni/bin/aws-cni-support.sh` --> **What you expected to happen**: Script should work as expected. **How to reproduce it (as minimally and precisely as possible)**: Run the log collector script. **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`) - CNI Version - N/A - OS (e.g: `cat /etc/os-release`): - Kernel (e.g. `uname -a`):
non_defect
cni log collector script not working with bottlerocket for urgent operational issues please contact aws support directly at if you think you have found a potential security issue please do not post it as an issue instead follow the instructions at or email aws security directly at aws security amazon com what happened include log lines if possible there are recent requests mentioning cni log collector script is not working with bottlerocket attach logs please upload the logs by running since it will help faster resolution sudo bash opt cni bin aws cni support sh what you expected to happen script should work as expected how to reproduce it as minimally and precisely as possible run the log collector script anything else we need to know environment kubernetes version use kubectl version cni version n a os e g cat etc os release kernel e g uname a
0
123,813
12,218,931,831
IssuesEvent
2020-05-01 20:28:21
forgiv/backend-template
https://api.github.com/repos/forgiv/backend-template
closed
Add Example Env
documentation
Making users rely on the README to know which environment variables are required is kinda lame. Let's add a `.example.env` file with sane defaults that users can rename and modify.
1.0
Add Example Env - Making users rely on the README to know which environment variables are required is kinda lame. Let's add a `.example.env` file with sane defaults that users can rename and modify.
non_defect
add example env making users rely on the readme to know which environment variables are required is kinda lame let s add a example env file with sane defaults that users can rename and modify
0
133,681
18,299,035,641
IssuesEvent
2021-10-05 23:55:06
bsbtd/Teste
https://api.github.com/repos/bsbtd/Teste
opened
CVE-2017-7957 (High) detected in xstream-1.3.1.jar
security vulnerability
## CVE-2017-7957 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.3.1.jar</b></p></summary> <p></p> <p>Path to vulnerable library: 1.jar</p> <p> Dependency Hierarchy: - :x: **xstream-1.3.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/64dde89c50c07496423c4d4a865f2e16b92399ad">64dde89c50c07496423c4d4a865f2e16b92399ad</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XStream through 1.4.9, when a certain denyTypes workaround is not used, mishandles attempts to create an instance of the primitive type 'void' during unmarshalling, leading to a remote application crash, as demonstrated by an xstream.fromXML("<void/>") call. <p>Publish Date: 2017-04-29 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-7957>CVE-2017-7957</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://x-stream.github.io/CVE-2017-7957.html">http://x-stream.github.io/CVE-2017-7957.html</a></p> <p>Release Date: 2017-04-29</p> <p>Fix Resolution: 1.4.10</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-7957 (High) detected in xstream-1.3.1.jar - ## CVE-2017-7957 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.3.1.jar</b></p></summary> <p></p> <p>Path to vulnerable library: 1.jar</p> <p> Dependency Hierarchy: - :x: **xstream-1.3.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/64dde89c50c07496423c4d4a865f2e16b92399ad">64dde89c50c07496423c4d4a865f2e16b92399ad</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XStream through 1.4.9, when a certain denyTypes workaround is not used, mishandles attempts to create an instance of the primitive type 'void' during unmarshalling, leading to a remote application crash, as demonstrated by an xstream.fromXML("<void/>") call. <p>Publish Date: 2017-04-29 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-7957>CVE-2017-7957</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://x-stream.github.io/CVE-2017-7957.html">http://x-stream.github.io/CVE-2017-7957.html</a></p> <p>Release Date: 2017-04-29</p> <p>Fix Resolution: 1.4.10</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in xstream jar cve high severity vulnerability vulnerable library xstream jar path to vulnerable library jar dependency hierarchy x xstream jar vulnerable library found in head commit a href vulnerability details xstream through when a certain denytypes workaround is not used mishandles attempts to create an instance of the primitive type void during unmarshalling leading to a remote application crash as demonstrated by an xstream fromxml call publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
208,741
7,157,919,193
IssuesEvent
2018-01-26 21:50:50
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
Bow damage is not increased when investing in the Bow Damage Skill
High Priority
The skill node seems to do nothing at the moment and I want to one shot wolves gawd dammit.
1.0
Bow damage is not increased when investing in the Bow Damage Skill - The skill node seems to do nothing at the moment and I want to one shot wolves gawd dammit.
non_defect
bow damage is not increased when investing in the bow damage skill the skill node seems to do nothing at the moment and i want to one shot wolves gawd dammit
0
191,396
15,287,334,288
IssuesEvent
2021-02-23 15:39:33
Informasjonsforvaltning/fdk-issue-tracker
https://api.github.com/repos/Informasjonsforvaltning/fdk-issue-tracker
opened
Gjennomgang av logging i applikasjonene/GCP
backend cloud devops documentation monitoring
Kvaliteten på loggingen varierer for mye fra applikasjon til applikasjon. - [ ] Gjennomgå eksisterende applikasjoner - [ ] Fikse loggnivå slik at dette blir tolket riktig uavhengig av applikasjon - [ ] Ordne dokumentasjon som beskriver hva som forventes
1.0
Gjennomgang av logging i applikasjonene/GCP - Kvaliteten på loggingen varierer for mye fra applikasjon til applikasjon. - [ ] Gjennomgå eksisterende applikasjoner - [ ] Fikse loggnivå slik at dette blir tolket riktig uavhengig av applikasjon - [ ] Ordne dokumentasjon som beskriver hva som forventes
non_defect
gjennomgang av logging i applikasjonene gcp kvaliteten på loggingen varierer for mye fra applikasjon til applikasjon gjennomgå eksisterende applikasjoner fikse loggnivå slik at dette blir tolket riktig uavhengig av applikasjon ordne dokumentasjon som beskriver hva som forventes
0
48,398
13,068,506,033
IssuesEvent
2020-07-31 03:47:40
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
[filterscripts] MESE Filter Throws Key Error (Trac #2331)
Migrated from Trac defect jeb + pnf
This is with the latest trunk of combo r174135. ERROR (I3Module): MeseFilter_precut: Exception thrown (I3Module.cxx:123 in void I3Module::Do(void (I3Module::*)())) Traceback (most recent call last): File "/home/olivas/icecube/combo/trunk/build/filterscripts/resources/scripts/SimulationFiltering.py", line 387, in <module> main(opts) File "/home/olivas/icecube/combo/trunk/build/filterscripts/resources/scripts/SimulationFiltering.py", line 344, in main tray.Execute() File "/home/olivas/icecube/combo/trunk/build/lib/I3Tray.py", line 256, in Execute super(I3Tray, self).Execute() File "/home/olivas/icecube/combo/trunk/build/lib/icecube/filterscripts/mesefilter.py", line 127, in precut layer_veto_charge = frame['L4VetoLayer0'].value + frame['L4VetoLayer1'].value KeyError: 'L4VetoLayer0' Migrated from https://code.icecube.wisc.edu/ticket/2331 ```json { "status": "closed", "changetime": "2019-06-28T22:27:43", "description": "This is with the latest trunk of combo r174135.\n\nERROR (I3Module): MeseFilter_precut: Exception thrown (I3Module.cxx:123 in void I3Module::Do(void (I3Module::*)()))\nTraceback (most recent call last):\n File \"/home/olivas/icecube/combo/trunk/build/filterscripts/resources/scripts/SimulationFiltering.py\", line 387, in <module>\n main(opts)\n File \"/home/olivas/icecube/combo/trunk/build/filterscripts/resources/scripts/SimulationFiltering.py\", line 344, in main\n tray.Execute()\n File \"/home/olivas/icecube/combo/trunk/build/lib/I3Tray.py\", line 256, in Execute\n super(I3Tray, self).Execute()\n File \"/home/olivas/icecube/combo/trunk/build/lib/icecube/filterscripts/mesefilter.py\", line 127, in precut\n layer_veto_charge = frame['L4VetoLayer0'].value + frame['L4VetoLayer1'].value\nKeyError: 'L4VetoLayer0'\n", "reporter": "olivas", "cc": "", "resolution": "fixed", "_ts": "1561760863704892", "component": "jeb + pnf", "summary": "[filterscripts] MESE Filter Throws Key Error", "priority": "blocker", "keywords": "", "time": "2019-06-22T02:23:41", "milestone": "Autumnal Equinox 2019", "owner": "blaufuss", "type": "defect" } ```
1.0
[filterscripts] MESE Filter Throws Key Error (Trac #2331) - This is with the latest trunk of combo r174135. ERROR (I3Module): MeseFilter_precut: Exception thrown (I3Module.cxx:123 in void I3Module::Do(void (I3Module::*)())) Traceback (most recent call last): File "/home/olivas/icecube/combo/trunk/build/filterscripts/resources/scripts/SimulationFiltering.py", line 387, in <module> main(opts) File "/home/olivas/icecube/combo/trunk/build/filterscripts/resources/scripts/SimulationFiltering.py", line 344, in main tray.Execute() File "/home/olivas/icecube/combo/trunk/build/lib/I3Tray.py", line 256, in Execute super(I3Tray, self).Execute() File "/home/olivas/icecube/combo/trunk/build/lib/icecube/filterscripts/mesefilter.py", line 127, in precut layer_veto_charge = frame['L4VetoLayer0'].value + frame['L4VetoLayer1'].value KeyError: 'L4VetoLayer0' Migrated from https://code.icecube.wisc.edu/ticket/2331 ```json { "status": "closed", "changetime": "2019-06-28T22:27:43", "description": "This is with the latest trunk of combo r174135.\n\nERROR (I3Module): MeseFilter_precut: Exception thrown (I3Module.cxx:123 in void I3Module::Do(void (I3Module::*)()))\nTraceback (most recent call last):\n File \"/home/olivas/icecube/combo/trunk/build/filterscripts/resources/scripts/SimulationFiltering.py\", line 387, in <module>\n main(opts)\n File \"/home/olivas/icecube/combo/trunk/build/filterscripts/resources/scripts/SimulationFiltering.py\", line 344, in main\n tray.Execute()\n File \"/home/olivas/icecube/combo/trunk/build/lib/I3Tray.py\", line 256, in Execute\n super(I3Tray, self).Execute()\n File \"/home/olivas/icecube/combo/trunk/build/lib/icecube/filterscripts/mesefilter.py\", line 127, in precut\n layer_veto_charge = frame['L4VetoLayer0'].value + frame['L4VetoLayer1'].value\nKeyError: 'L4VetoLayer0'\n", "reporter": "olivas", "cc": "", "resolution": "fixed", "_ts": "1561760863704892", "component": "jeb + pnf", "summary": "[filterscripts] MESE Filter Throws Key Error", "priority": "blocker", "keywords": "", "time": "2019-06-22T02:23:41", "milestone": "Autumnal Equinox 2019", "owner": "blaufuss", "type": "defect" } ```
defect
mese filter throws key error trac this is with the latest trunk of combo error mesefilter precut exception thrown cxx in void do void traceback most recent call last file home olivas icecube combo trunk build filterscripts resources scripts simulationfiltering py line in main opts file home olivas icecube combo trunk build filterscripts resources scripts simulationfiltering py line in main tray execute file home olivas icecube combo trunk build lib py line in execute super self execute file home olivas icecube combo trunk build lib icecube filterscripts mesefilter py line in precut layer veto charge frame value frame value keyerror migrated from json status closed changetime description this is with the latest trunk of combo n nerror mesefilter precut exception thrown cxx in void do void ntraceback most recent call last n file home olivas icecube combo trunk build filterscripts resources scripts simulationfiltering py line in n main opts n file home olivas icecube combo trunk build filterscripts resources scripts simulationfiltering py line in main n tray execute n file home olivas icecube combo trunk build lib py line in execute n super self execute n file home olivas icecube combo trunk build lib icecube filterscripts mesefilter py line in precut n layer veto charge frame value frame value nkeyerror n reporter olivas cc resolution fixed ts component jeb pnf summary mese filter throws key error priority blocker keywords time milestone autumnal equinox owner blaufuss type defect
1
62,878
15,376,155,326
IssuesEvent
2021-03-02 15:40:58
AdoptOpenJDK/openjdk-build
https://api.github.com/repos/AdoptOpenJDK/openjdk-build
closed
jdk11u-linux-x64-dragonwell: compilation errors
buildbreak dragonwell/alibaba x-linux
see: https://ci.adoptopenjdk.net/view/Failing%20Builds/job/build-scripts/job/jobs/job/jdk11u/job/jdk11u-linux-x64-dragonwell/164/console ``` 17:54:38 ./src/java.base/share/native/libverify/check_code.c:224:17: error: expected identifier before ';' token 17:54:38 224 | jclass class; 17:54:38 | ^ 17:54:38 ./src/java.base/share/native/libverify/check_code.c:224:17: error: multiple types in one declaration 17:54:38 ./src/java.base/share/native/libverify/check_code.c:224:17: error: declaration does not declare anything [-fpermissive] 17:54:38 ./src/java.base/share/native/libverify/check_code.c:284:17: error: expected identifier before ';' token 17:54:38 284 | jclass class; /* current class */ 17:54:38 | ^ 17:54:38 ./src/java.base/share/native/libverify/check_code.c:284:17: error: multiple types in one declaration 17:54:38 ./src/java.base/share/native/libverify/check_code.c:284:17: error: declaration does not declare anything [-fpermissive] 17:54:38 ./src/java.base/share/native/libverify/check_code.c:352:14: error: expected unqualified-id before 'protected' 17:54:38 352 | unsigned protected:1; /* must accessor be a subclass of "this" */ 17:54:38 | ^~~~~~~~~ 17:54:38 ./src/java.base/share/native/libverify/check_code.c: In function 'void finalize_class_hash(context_type*)': 17:54:38 ./src/java.base/share/native/libverify/check_code.c:515:21: error: expected unqualified-id before 'class' 17:54:38 515 | if (bucket->class) { 17:54:38 | ^~~~~ 17:54:38 ./src/java.base/share/native/libverify/check_code.c:515:21: error: expected ')' before 'class' 17:54:38 515 | if (bucket->class) { 17:54:38 | ~ ^~~~~ 17:54:38 | ) 17:54:38 ./src/java.base/share/native/libverify/check_code.c:516:19: error: base operand of '->' has non-pointer type 'JNIEnv' {aka 'JNIEnv_'} 17:54:38 516 | (*env)->DeleteGlobalRef(env, bucket->class); 17:54:38 | ^~ 17:54:38 ./src/java.base/share/native/libverify/check_code.c:516:50: error: expected unqualified-id before 'class' 17:54:38 516 | (*env)->DeleteGlobalRef(env, bucket->class); 17:54:38 | ^~~~~ ```
1.0
jdk11u-linux-x64-dragonwell: compilation errors - see: https://ci.adoptopenjdk.net/view/Failing%20Builds/job/build-scripts/job/jobs/job/jdk11u/job/jdk11u-linux-x64-dragonwell/164/console ``` 17:54:38 ./src/java.base/share/native/libverify/check_code.c:224:17: error: expected identifier before ';' token 17:54:38 224 | jclass class; 17:54:38 | ^ 17:54:38 ./src/java.base/share/native/libverify/check_code.c:224:17: error: multiple types in one declaration 17:54:38 ./src/java.base/share/native/libverify/check_code.c:224:17: error: declaration does not declare anything [-fpermissive] 17:54:38 ./src/java.base/share/native/libverify/check_code.c:284:17: error: expected identifier before ';' token 17:54:38 284 | jclass class; /* current class */ 17:54:38 | ^ 17:54:38 ./src/java.base/share/native/libverify/check_code.c:284:17: error: multiple types in one declaration 17:54:38 ./src/java.base/share/native/libverify/check_code.c:284:17: error: declaration does not declare anything [-fpermissive] 17:54:38 ./src/java.base/share/native/libverify/check_code.c:352:14: error: expected unqualified-id before 'protected' 17:54:38 352 | unsigned protected:1; /* must accessor be a subclass of "this" */ 17:54:38 | ^~~~~~~~~ 17:54:38 ./src/java.base/share/native/libverify/check_code.c: In function 'void finalize_class_hash(context_type*)': 17:54:38 ./src/java.base/share/native/libverify/check_code.c:515:21: error: expected unqualified-id before 'class' 17:54:38 515 | if (bucket->class) { 17:54:38 | ^~~~~ 17:54:38 ./src/java.base/share/native/libverify/check_code.c:515:21: error: expected ')' before 'class' 17:54:38 515 | if (bucket->class) { 17:54:38 | ~ ^~~~~ 17:54:38 | ) 17:54:38 ./src/java.base/share/native/libverify/check_code.c:516:19: error: base operand of '->' has non-pointer type 'JNIEnv' {aka 'JNIEnv_'} 17:54:38 516 | (*env)->DeleteGlobalRef(env, bucket->class); 17:54:38 | ^~ 17:54:38 ./src/java.base/share/native/libverify/check_code.c:516:50: error: expected unqualified-id before 'class' 17:54:38 516 | (*env)->DeleteGlobalRef(env, bucket->class); 17:54:38 | ^~~~~ ```
non_defect
linux dragonwell compilation errors see src java base share native libverify check code c error expected identifier before token jclass class src java base share native libverify check code c error multiple types in one declaration src java base share native libverify check code c error declaration does not declare anything src java base share native libverify check code c error expected identifier before token jclass class current class src java base share native libverify check code c error multiple types in one declaration src java base share native libverify check code c error declaration does not declare anything src java base share native libverify check code c error expected unqualified id before protected unsigned protected must accessor be a subclass of this src java base share native libverify check code c in function void finalize class hash context type src java base share native libverify check code c error expected unqualified id before class if bucket class src java base share native libverify check code c error expected before class if bucket class src java base share native libverify check code c error base operand of has non pointer type jnienv aka jnienv env deleteglobalref env bucket class src java base share native libverify check code c error expected unqualified id before class env deleteglobalref env bucket class
0
37,590
6,622,587,394
IssuesEvent
2017-09-22 00:55:46
borgbackup/borg
https://api.github.com/repos/borgbackup/borg
closed
borg create doc: need example update
documentation
1. The example section of borg create in 1.1+ ( https://borgbackup.readthedocs.io/en/1.1.0rc3/usage/create.html ) shows: ``` # No compression (default) $ borg create /path/to/repo::arch ~ # Super fast, low compression $ borg create --compression lz4 /path/to/repo::arch ~ ``` However elsewhere it says lz4 is now the default. 2. That same page should also reports about the auto compression specifier, which is documented here: https://borgbackup.readthedocs.io/en/1.1.0rc3/usage/help.html 3. It would be nice to explain when using a compression algorithm if the compressed size is the same or higher than the original, what is saved in the repo? The original or the compressed version? It would make sense to save the original, so to have a faster decompression when accessing the archive (also having a similar result to auto but with a better chance to compress, since lz4 currently used for the compression estimate may not be able to detect some partially compressed chunks, that may be compressed with a better algo, at the expense of some higher CPU load during create).
1.0
borg create doc: need example update - 1. The example section of borg create in 1.1+ ( https://borgbackup.readthedocs.io/en/1.1.0rc3/usage/create.html ) shows: ``` # No compression (default) $ borg create /path/to/repo::arch ~ # Super fast, low compression $ borg create --compression lz4 /path/to/repo::arch ~ ``` However elsewhere it says lz4 is now the default. 2. That same page should also reports about the auto compression specifier, which is documented here: https://borgbackup.readthedocs.io/en/1.1.0rc3/usage/help.html 3. It would be nice to explain when using a compression algorithm if the compressed size is the same or higher than the original, what is saved in the repo? The original or the compressed version? It would make sense to save the original, so to have a faster decompression when accessing the archive (also having a similar result to auto but with a better chance to compress, since lz4 currently used for the compression estimate may not be able to detect some partially compressed chunks, that may be compressed with a better algo, at the expense of some higher CPU load during create).
non_defect
borg create doc need example update the example section of borg create in shows no compression default borg create path to repo arch super fast low compression borg create compression path to repo arch however elsewhere it says is now the default that same page should also reports about the auto compression specifier which is documented here it would be nice to explain when using a compression algorithm if the compressed size is the same or higher than the original what is saved in the repo the original or the compressed version it would make sense to save the original so to have a faster decompression when accessing the archive also having a similar result to auto but with a better chance to compress since currently used for the compression estimate may not be able to detect some partially compressed chunks that may be compressed with a better algo at the expense of some higher cpu load during create
0
29,111
5,537,752,008
IssuesEvent
2017-03-21 23:02:09
extnet/Ext.NET
https://api.github.com/repos/extnet/Ext.NET
closed
HyperlinkButton disables after click
4.x defect
The Ext.Net.HyperlinkButton (Ext.net.HyperlinkButton) is toggling its enabled state every time it is clicked. Originally the component had the `toggle()` method to toggle its state, which is being called by current ExtJS button parent class probably as a feature with buttons that support being toggled on/off (as clicked/depressed and not clicked/not depressed). A simple way out of this would be just to call `item.toggle()` on the hyperlink button's click handler. The HyperlinkButton is an Ext.NET-specific feature. The problem was introduced by a call of `.toggle()` from the default ExtJS button click handler.
1.0
HyperlinkButton disables after click - The Ext.Net.HyperlinkButton (Ext.net.HyperlinkButton) is toggling its enabled state every time it is clicked. Originally the component had the `toggle()` method to toggle its state, which is being called by current ExtJS button parent class probably as a feature with buttons that support being toggled on/off (as clicked/depressed and not clicked/not depressed). A simple way out of this would be just to call `item.toggle()` on the hyperlink button's click handler. The HyperlinkButton is an Ext.NET-specific feature. The problem was introduced by a call of `.toggle()` from the default ExtJS button click handler.
defect
hyperlinkbutton disables after click the ext net hyperlinkbutton ext net hyperlinkbutton is toggling its enabled state every time it is clicked originally the component had the toggle method to toggle its state which is being called by current extjs button parent class probably as a feature with buttons that support being toggled on off as clicked depressed and not clicked not depressed a simple way out of this would be just to call item toggle on the hyperlink button s click handler the hyperlinkbutton is an ext net specific feature the problem was introduced by a call of toggle from the default extjs button click handler
1
80,095
30,010,088,874
IssuesEvent
2023-06-26 14:45:14
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
closed
Got asked twice about verification
T-Defect A-E2EE-SAS-Verification S-Minor O-Occasional
### Steps to reproduce - Have the Android App open - Sign in with Element Web to the same account - Tap the verification banner on Android - Verification starts, scan QR Code, finish verification ### Outcome #### What did you expect? Verification done #### What happened instead? Verification banner is displayed again ### Your phone model _No response_ ### Operating system version _No response_ ### Application version and app store 1.5.30 ### Homeserver _No response_ ### Will you send logs? Yes ### Are you willing to provide a PR? No
1.0
Got asked twice about verification - ### Steps to reproduce - Have the Android App open - Sign in with Element Web to the same account - Tap the verification banner on Android - Verification starts, scan QR Code, finish verification ### Outcome #### What did you expect? Verification done #### What happened instead? Verification banner is displayed again ### Your phone model _No response_ ### Operating system version _No response_ ### Application version and app store 1.5.30 ### Homeserver _No response_ ### Will you send logs? Yes ### Are you willing to provide a PR? No
defect
got asked twice about verification steps to reproduce have the android app open sign in with element web to the same account tap the verification banner on android verification starts scan qr code finish verification outcome what did you expect verification done what happened instead verification banner is displayed again your phone model no response operating system version no response application version and app store homeserver no response will you send logs yes are you willing to provide a pr no
1
254,287
21,777,086,973
IssuesEvent
2022-05-13 14:46:39
ossf/scorecard-action
https://api.github.com/repos/ossf/scorecard-action
closed
Failed to run e2e test-organization-ls/scorecard-action-private-repo-tests
e2e automated-tests
Repo: https://github.com/test-organization-ls/scorecard-action-private-repo-tests/tree/main \n Run: https://github.com/test-organization-ls/scorecard-action-private-repo-tests/actions/runs/2310978708 \n Workflow name: Scorecards-golang \n Workflow file: https://github.com/test-organization-ls/scorecard-action-private-repo-tests/tree/main/.github/workflows/Scorecards-golang.yml \n Trigger: schedule \n Branch: main \n Date: Thu May 12 03:24:34 UTC 2022
1.0
Failed to run e2e test-organization-ls/scorecard-action-private-repo-tests - Repo: https://github.com/test-organization-ls/scorecard-action-private-repo-tests/tree/main \n Run: https://github.com/test-organization-ls/scorecard-action-private-repo-tests/actions/runs/2310978708 \n Workflow name: Scorecards-golang \n Workflow file: https://github.com/test-organization-ls/scorecard-action-private-repo-tests/tree/main/.github/workflows/Scorecards-golang.yml \n Trigger: schedule \n Branch: main \n Date: Thu May 12 03:24:34 UTC 2022
non_defect
failed to run test organization ls scorecard action private repo tests repo n run n workflow name scorecards golang n workflow file n trigger schedule n branch main n date thu may utc
0
53,148
13,788,159,618
IssuesEvent
2020-10-09 06:39:49
BillGR17/SEN
https://api.github.com/repos/BillGR17/SEN
closed
CVE-2018-1000620 (High) detected in cryptiles-2.0.5.tgz
bug security vulnerability
## CVE-2018-1000620 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cryptiles-2.0.5.tgz</b></p></summary> <p>General purpose crypto utilities</p> <p>Library home page: <a href="https://registry.npmjs.org/cryptiles/-/cryptiles-2.0.5.tgz">https://registry.npmjs.org/cryptiles/-/cryptiles-2.0.5.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/SEN/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/SEN/node_modules/sqlite3/node_modules/cryptiles/package.json</p> <p> Dependency Hierarchy: - sqlite3-3.1.13.tgz (Root Library) - node-pre-gyp-0.6.38.tgz - hawk-3.1.3.tgz - :x: **cryptiles-2.0.5.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BillGR17/SEN/commit/ae3a979a8ced8cd7f8704c9cb4dc24d9adf7f151">ae3a979a8ced8cd7f8704c9cb4dc24d9adf7f151</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Eran Hammer cryptiles version 4.1.1 earlier contains a CWE-331: Insufficient Entropy vulnerability in randomDigits() method that can result in An attacker is more likely to be able to brute force something that was supposed to be random.. This attack appear to be exploitable via Depends upon the calling application.. This vulnerability appears to have been fixed in 4.1.2. <p>Publish Date: 2018-07-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000620>CVE-2018-1000620</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-1000620">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-1000620</a></p> <p>Release Date: 2018-07-09</p> <p>Fix Resolution: v4.1.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-1000620 (High) detected in cryptiles-2.0.5.tgz - ## CVE-2018-1000620 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cryptiles-2.0.5.tgz</b></p></summary> <p>General purpose crypto utilities</p> <p>Library home page: <a href="https://registry.npmjs.org/cryptiles/-/cryptiles-2.0.5.tgz">https://registry.npmjs.org/cryptiles/-/cryptiles-2.0.5.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/SEN/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/SEN/node_modules/sqlite3/node_modules/cryptiles/package.json</p> <p> Dependency Hierarchy: - sqlite3-3.1.13.tgz (Root Library) - node-pre-gyp-0.6.38.tgz - hawk-3.1.3.tgz - :x: **cryptiles-2.0.5.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/BillGR17/SEN/commit/ae3a979a8ced8cd7f8704c9cb4dc24d9adf7f151">ae3a979a8ced8cd7f8704c9cb4dc24d9adf7f151</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Eran Hammer cryptiles version 4.1.1 earlier contains a CWE-331: Insufficient Entropy vulnerability in randomDigits() method that can result in An attacker is more likely to be able to brute force something that was supposed to be random.. This attack appear to be exploitable via Depends upon the calling application.. This vulnerability appears to have been fixed in 4.1.2. <p>Publish Date: 2018-07-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000620>CVE-2018-1000620</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-1000620">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-1000620</a></p> <p>Release Date: 2018-07-09</p> <p>Fix Resolution: v4.1.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in cryptiles tgz cve high severity vulnerability vulnerable library cryptiles tgz general purpose crypto utilities library home page a href path to dependency file tmp ws scm sen package json path to vulnerable library tmp ws scm sen node modules node modules cryptiles package json dependency hierarchy tgz root library node pre gyp tgz hawk tgz x cryptiles tgz vulnerable library found in head commit a href vulnerability details eran hammer cryptiles version earlier contains a cwe insufficient entropy vulnerability in randomdigits method that can result in an attacker is more likely to be able to brute force something that was supposed to be random this attack appear to be exploitable via depends upon the calling application this vulnerability appears to have been fixed in publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
31,865
6,010,525,711
IssuesEvent
2017-06-06 13:24:44
Theano/Theano
https://api.github.com/repos/Theano/Theano
closed
Document options for tradeoff between compilation speed and run time
CCW Documentation
We should document that those are a few tradeoff between compilation and speed; first faster compile, slower run time. last slower compile, faster run time. in betwen intermediate trade off Theano flags to use: - optimizer=fast_compile - optimizer=fast_compile,optimizer_including=fusion - optimizer_excluding=inplace - optimizer_excluding=elemwise_inplace - (default)
1.0
Document options for tradeoff between compilation speed and run time - We should document that those are a few tradeoff between compilation and speed; first faster compile, slower run time. last slower compile, faster run time. in betwen intermediate trade off Theano flags to use: - optimizer=fast_compile - optimizer=fast_compile,optimizer_including=fusion - optimizer_excluding=inplace - optimizer_excluding=elemwise_inplace - (default)
non_defect
document options for tradeoff between compilation speed and run time we should document that those are a few tradeoff between compilation and speed first faster compile slower run time last slower compile faster run time in betwen intermediate trade off theano flags to use optimizer fast compile optimizer fast compile optimizer including fusion optimizer excluding inplace optimizer excluding elemwise inplace default
0
79,229
28,052,819,280
IssuesEvent
2023-03-29 07:21:12
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
"Jump to date" is forcibly disabled in "Labs".
T-Defect
### Steps to reproduce n/a ### Outcome #### What did you expect? n/a #### What happened instead? I updated Synapse to 1.80.0 a while ago. After updating to 1.11.26 (both web and desktop), I found that "jump to date" is not available. This feature is disabled in "labs" and cannot be manually enabled. ![{A7406C30-7C80-AF9F-C711-DFA243581EE7}](https://user-images.githubusercontent.com/48104960/228371221-26e91bd5-476f-4473-bfba-c0681bd4d1d7.png) **Strangely, in a specific version (1.11.23), and Synapse version 1.80.0, this feature is still available.** P.S.: 1.11.23 doesn't have "Jump to Date" option in "Labs", but it can be enabled manually by adding a key `"feature_jump_to_date":true` in DevTools. In the case where I found the above problem, Synapse's configuration file `homeserver.yaml` already has the following content: ``` experimental_features: msc3030_enabled: true ``` `GET /_matrix/client/versions` ``` {"versions":["r0.0.1","r0.1.0","r0.2.0","r0.3.0","r0.4.0","r0.5.0","r0.6.0","r0 .6.1","v1.1","v1.2","v1.3","v1.4","v1.5"],"unstable_features":{"org.matrix.label_based_filtering":true, "org.matrix.e2e_cross_signing":true,"org.matrix.msc2432":true,"uk.half-shot.msc2666.mutual_rooms":true,"io.element.e2ee_forced.public":false,"io.element .e2ee_forced.private": false,"io.element.e2ee_forced.trusted_private":false,"org.matrix.msc3026.busy_presence":false,"org.matrix.msc2285.stable":true,"org.matrix.msc3827 .stable":true,"org.matrix.msc2716":false,"org.matrix.msc3440.stable":true,"org.matrix.msc3771":true,"org.matrix.msc3773":false,"fi .mau.msc2815":false,"fi.mau.msc2659":false,"org.matrix.msc3882":true,"org.matrix.msc3881":true,"org.matrix.msc3874":false,"org .matrix.msc3886":true,"org.matrix.msc3912":false,"org.matrix.msc3952_intentional_mentions":false}} ``` **Since I'm just a user, please let me know if I missed something.** ### Operating system x64 Windows 10 ### Browser information Chromium 102.0.5005.167 ### URL for webapp 9ec10a274ec5-react-f5115e047e14-js-c8503b312036 ### Application version Element Desktop 1.11.26 ### Homeserver Synapse 1.80.0 ### Will you send logs? No
1.0
"Jump to date" is forcibly disabled in "Labs". - ### Steps to reproduce n/a ### Outcome #### What did you expect? n/a #### What happened instead? I updated Synapse to 1.80.0 a while ago. After updating to 1.11.26 (both web and desktop), I found that "jump to date" is not available. This feature is disabled in "labs" and cannot be manually enabled. ![{A7406C30-7C80-AF9F-C711-DFA243581EE7}](https://user-images.githubusercontent.com/48104960/228371221-26e91bd5-476f-4473-bfba-c0681bd4d1d7.png) **Strangely, in a specific version (1.11.23), and Synapse version 1.80.0, this feature is still available.** P.S.: 1.11.23 doesn't have "Jump to Date" option in "Labs", but it can be enabled manually by adding a key `"feature_jump_to_date":true` in DevTools. In the case where I found the above problem, Synapse's configuration file `homeserver.yaml` already has the following content: ``` experimental_features: msc3030_enabled: true ``` `GET /_matrix/client/versions` ``` {"versions":["r0.0.1","r0.1.0","r0.2.0","r0.3.0","r0.4.0","r0.5.0","r0.6.0","r0 .6.1","v1.1","v1.2","v1.3","v1.4","v1.5"],"unstable_features":{"org.matrix.label_based_filtering":true, "org.matrix.e2e_cross_signing":true,"org.matrix.msc2432":true,"uk.half-shot.msc2666.mutual_rooms":true,"io.element.e2ee_forced.public":false,"io.element .e2ee_forced.private": false,"io.element.e2ee_forced.trusted_private":false,"org.matrix.msc3026.busy_presence":false,"org.matrix.msc2285.stable":true,"org.matrix.msc3827 .stable":true,"org.matrix.msc2716":false,"org.matrix.msc3440.stable":true,"org.matrix.msc3771":true,"org.matrix.msc3773":false,"fi .mau.msc2815":false,"fi.mau.msc2659":false,"org.matrix.msc3882":true,"org.matrix.msc3881":true,"org.matrix.msc3874":false,"org .matrix.msc3886":true,"org.matrix.msc3912":false,"org.matrix.msc3952_intentional_mentions":false}} ``` **Since I'm just a user, please let me know if I missed something.** ### Operating system x64 Windows 10 ### Browser information Chromium 102.0.5005.167 ### URL for webapp 9ec10a274ec5-react-f5115e047e14-js-c8503b312036 ### Application version Element Desktop 1.11.26 ### Homeserver Synapse 1.80.0 ### Will you send logs? No
defect
jump to date is forcibly disabled in labs steps to reproduce n a outcome what did you expect n a what happened instead i updated synapse to a while ago after updating to both web and desktop i found that jump to date is not available this feature is disabled in labs and cannot be manually enabled strangely in a specific version and synapse version this feature is still available p s doesn t have jump to date option in labs but it can be enabled manually by adding a key feature jump to date true in devtools in the case where i found the above problem synapse s configuration file homeserver yaml already has the following content experimental features enabled true get matrix client versions versions unstable features org matrix label based filtering true org matrix cross signing true org matrix true uk half shot mutual rooms true io element forced public false io element forced private false io element forced trusted private false org matrix busy presence false org matrix stable true org matrix stable true org matrix false org matrix stable true org matrix true org matrix false fi mau false fi mau false org matrix true org matrix true org matrix false org matrix true org matrix false org matrix intentional mentions false since i m just a user please let me know if i missed something operating system windows browser information chromium url for webapp react js application version element desktop homeserver synapse will you send logs no
1
52,460
13,224,736,866
IssuesEvent
2020-08-17 19:44:37
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
[clsim] assert in client module fails with no photons (Trac #2241)
Incomplete Migration Migrated from Trac combo simulation defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2241">https://code.icecube.wisc.edu/projects/icecube/ticket/2241</a>, reported by kjmeagherand owned by jvansanten</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-25T19:04:17", "_ts": "1551121457180252", "description": "If you run I3CLSimClientModule in conditions where no photons are ever passed to the server you get a failed assertion:\n\n\n{{{\nINFO (I3CLSimClientModule): Flushing I3Tray.. (I3CLSimClientModule.cxx:934 in virtual void I3CLSimClientModule::Finish())\nAssertion failed: (bounds.first != bounds.second), function Thread, file /Users/kmeagher/icecube/simulation/trunk_kjm/src/clsim/private/clsim/I3CLSimClientModule.cxx, line 605.\nAbort trap: 6\n}}}\n", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "time": "2019-02-22T19:36:49", "component": "combo simulation", "summary": "[clsim] assert in client module fails with no photons", "priority": "normal", "keywords": "", "milestone": "Vernal Equinox 2019", "owner": "jvansanten", "type": "defect" } ``` </p> </details>
1.0
[clsim] assert in client module fails with no photons (Trac #2241) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2241">https://code.icecube.wisc.edu/projects/icecube/ticket/2241</a>, reported by kjmeagherand owned by jvansanten</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-25T19:04:17", "_ts": "1551121457180252", "description": "If you run I3CLSimClientModule in conditions where no photons are ever passed to the server you get a failed assertion:\n\n\n{{{\nINFO (I3CLSimClientModule): Flushing I3Tray.. (I3CLSimClientModule.cxx:934 in virtual void I3CLSimClientModule::Finish())\nAssertion failed: (bounds.first != bounds.second), function Thread, file /Users/kmeagher/icecube/simulation/trunk_kjm/src/clsim/private/clsim/I3CLSimClientModule.cxx, line 605.\nAbort trap: 6\n}}}\n", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "time": "2019-02-22T19:36:49", "component": "combo simulation", "summary": "[clsim] assert in client module fails with no photons", "priority": "normal", "keywords": "", "milestone": "Vernal Equinox 2019", "owner": "jvansanten", "type": "defect" } ``` </p> </details>
defect
assert in client module fails with no photons trac migrated from json status closed changetime ts description if you run in conditions where no photons are ever passed to the server you get a failed assertion n n n ninfo flushing cxx in virtual void finish nassertion failed bounds first bounds second function thread file users kmeagher icecube simulation trunk kjm src clsim private clsim cxx line nabort trap n n reporter kjmeagher cc resolution fixed time component combo simulation summary assert in client module fails with no photons priority normal keywords milestone vernal equinox owner jvansanten type defect
1
170,456
20,883,722,007
IssuesEvent
2022-03-23 01:05:35
matrix-profile-foundation/matrixprofile-web
https://api.github.com/repos/matrix-profile-foundation/matrixprofile-web
opened
CVE-2021-33502 (High) detected in normalize-url-1.9.1.tgz, normalize-url-3.3.0.tgz
security vulnerability
## CVE-2021-33502 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>normalize-url-1.9.1.tgz</b>, <b>normalize-url-3.3.0.tgz</b></p></summary> <p> <details><summary><b>normalize-url-1.9.1.tgz</b></p></summary> <p>Normalize a URL</p> <p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz</a></p> <p>Path to dependency file: /mpfrontend/package.json</p> <p>Path to vulnerable library: /mpfrontend/node_modules/mini-css-extract-plugin/node_modules/normalize-url/package.json</p> <p> Dependency Hierarchy: - cli-service-4.1.2.tgz (Root Library) - mini-css-extract-plugin-0.8.2.tgz - :x: **normalize-url-1.9.1.tgz** (Vulnerable Library) </details> <details><summary><b>normalize-url-3.3.0.tgz</b></p></summary> <p>Normalize a URL</p> <p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz</a></p> <p>Path to dependency file: /mpfrontend/package.json</p> <p>Path to vulnerable library: /mpfrontend/node_modules/normalize-url/package.json</p> <p> Dependency Hierarchy: - cli-service-4.1.2.tgz (Root Library) - optimize-cssnano-plugin-1.0.6.tgz - cssnano-preset-default-4.0.7.tgz - postcss-normalize-url-4.0.1.tgz - :x: **normalize-url-3.3.0.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs. <p>Publish Date: 2021-05-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p> <p>Release Date: 2021-05-24</p> <p>Fix Resolution: normalize-url - 4.5.1,5.3.1,6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-33502 (High) detected in normalize-url-1.9.1.tgz, normalize-url-3.3.0.tgz - ## CVE-2021-33502 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>normalize-url-1.9.1.tgz</b>, <b>normalize-url-3.3.0.tgz</b></p></summary> <p> <details><summary><b>normalize-url-1.9.1.tgz</b></p></summary> <p>Normalize a URL</p> <p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz</a></p> <p>Path to dependency file: /mpfrontend/package.json</p> <p>Path to vulnerable library: /mpfrontend/node_modules/mini-css-extract-plugin/node_modules/normalize-url/package.json</p> <p> Dependency Hierarchy: - cli-service-4.1.2.tgz (Root Library) - mini-css-extract-plugin-0.8.2.tgz - :x: **normalize-url-1.9.1.tgz** (Vulnerable Library) </details> <details><summary><b>normalize-url-3.3.0.tgz</b></p></summary> <p>Normalize a URL</p> <p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz</a></p> <p>Path to dependency file: /mpfrontend/package.json</p> <p>Path to vulnerable library: /mpfrontend/node_modules/normalize-url/package.json</p> <p> Dependency Hierarchy: - cli-service-4.1.2.tgz (Root Library) - optimize-cssnano-plugin-1.0.6.tgz - cssnano-preset-default-4.0.7.tgz - postcss-normalize-url-4.0.1.tgz - :x: **normalize-url-3.3.0.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs. <p>Publish Date: 2021-05-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p> <p>Release Date: 2021-05-24</p> <p>Fix Resolution: normalize-url - 4.5.1,5.3.1,6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in normalize url tgz normalize url tgz cve high severity vulnerability vulnerable libraries normalize url tgz normalize url tgz normalize url tgz normalize a url library home page a href path to dependency file mpfrontend package json path to vulnerable library mpfrontend node modules mini css extract plugin node modules normalize url package json dependency hierarchy cli service tgz root library mini css extract plugin tgz x normalize url tgz vulnerable library normalize url tgz normalize a url library home page a href path to dependency file mpfrontend package json path to vulnerable library mpfrontend node modules normalize url package json dependency hierarchy cli service tgz root library optimize cssnano plugin tgz cssnano preset default tgz postcss normalize url tgz x normalize url tgz vulnerable library found in base branch master vulnerability details the normalize url package before x before and x before for node js has a redos regular expression denial of service issue because it has exponential performance for data urls publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution normalize url step up your open source security game with whitesource
0
5,408
2,610,187,065
IssuesEvent
2015-02-26 18:59:20
chrsmith/quchuseban
https://api.github.com/repos/chrsmith/quchuseban
opened
求助色斑怎么样才能淡化
auto-migrated Priority-Medium Type-Defect
``` 《摘要》 化妆品已经普遍到每个人,不管是学生或是年轻人,还是老�� �人,都希望自己的皮肤白嫩细腻。但是皮肤粗糙长斑困扰着� ��多女性。大部分女性都经历过这种事情,白皙的皮肤莫名的 长斑,皮肤变的暗黄粗糙。只是一味的用美白祛斑的化妆品�� �不但皮肤没有得到改善,反而黄褐斑更重,皮肤更粗糙发黑� ��不化妆都不敢出门,全靠涂抹厚厚的化妆品遮盖,长此以往 造成脸上黄褐斑越来越严重,那么怎么样去除色斑怎么解决�� �。色斑怎么样才能淡化, 《客户案例》   林小姐 30岁<br>   没有色斑了,那里那个乐啊,看着现在白白嫩嫩的皮肤�� �我还真不敢相信还能拥有今天这样的美丽肌肤。<br>   想想以前,我喜欢化淡装,自认为随着年龄的增大,皮�� �变差了,为了让青春永驻容颜不老,经常更换不同品牌的化� ��品,听信了某个化妆品可以增白和改变肤质,用过一段时间 后,结果不但没改变,反面让我的面部出现了大量的色斑。�� �的我的脸一团糟。色斑聚集在脸颊两边,特别粗糙,还经常� ��皮,还有点发红,不过不是红血丝,很容易过敏。现在美白 不成又要祛斑,真是让人心烦啊。祛斑产品用了不少,激光�� �做过,但脸上的斑点就是不能彻底祛掉,还多了些黑印。这� ��症状持续了两年多的时间,如果不是遇到「黛芙薇尔精华液 」,我还真不知道要怎么祛除这可恶的斑点了。<br>   使用「黛芙薇尔精华液」半个多月后,我发现脸上的色�� �颜色就淡了许多,一个月后,斑块明显消褪了不少,皮肤也� ��慢变得白皙起来,脸色也变得红润有光泽了,精神状况也得 到明显的改善,两个周期服完后,简直像变了一个人似的,�� �肤变白了,而且脸上的色斑少了很多,就剩下鼻梁上面还有� ��些很巩固的斑点,看样子还非得再使用一个周期才能把这“ 巩固分子”彻底的祛除了,于是我又订购了一个周期的产品�� �行巩固治疗。<br>   使用完之后,我脸上的色斑终于被我彻底搞定了,而且�� �肤也得到了很好的改善,变得白皙光滑了,看着现在的自己� ��感觉还年轻了不少呢。现在,我一般就用一些无香精,无化 学成分有洗面奶,保养我的皮肤,现在再也不去用什么化妆�� �了,脸上皮肤现在没有斑点,没有痘痘,再加上细腻嫩滑,� ��起来摸起来都不错。 阅读了色斑怎么样才能淡化,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》   1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐�� �去掉吗?   答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新�� �客都是通过老顾客介绍而来,口碑由此而来!   2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技�� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!   3,去除黄褐斑之后,会反弹吗?   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌!我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗?   4,你们的价格有点贵,能不能便宜一点?   答:如果您使用西药最少需要2000元,煎服的药最少需要3 000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗   5,我适合用黛芙薇尔精华液吗?   答:黛芙薇尔适用人群:   1、生理紊乱引起的黄褐斑人群   2、生育引起的妊娠斑人群   3、年纪增长引起的老年斑人群   4、化妆品色素沉积、辐射斑人群   5、长期日照引起的日晒斑人群   6、肌肤暗淡急需美白的人群 《祛斑小方法》 色斑怎么样才能淡化,同时为您分享祛斑小方法 每天喝一杯西红柿汁或常吃西红柿,对防止祛斑有较好的作�� �。西红柿中含有丰富的谷胱甘肽,谷胱甘肽可抑制黑色素,� ��而使沉着的色素减退或消失。 ``` ----- Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 4:05
1.0
求助色斑怎么样才能淡化 - ``` 《摘要》 化妆品已经普遍到每个人,不管是学生或是年轻人,还是老�� �人,都希望自己的皮肤白嫩细腻。但是皮肤粗糙长斑困扰着� ��多女性。大部分女性都经历过这种事情,白皙的皮肤莫名的 长斑,皮肤变的暗黄粗糙。只是一味的用美白祛斑的化妆品�� �不但皮肤没有得到改善,反而黄褐斑更重,皮肤更粗糙发黑� ��不化妆都不敢出门,全靠涂抹厚厚的化妆品遮盖,长此以往 造成脸上黄褐斑越来越严重,那么怎么样去除色斑怎么解决�� �。色斑怎么样才能淡化, 《客户案例》   林小姐 30岁<br>   没有色斑了,那里那个乐啊,看着现在白白嫩嫩的皮肤�� �我还真不敢相信还能拥有今天这样的美丽肌肤。<br>   想想以前,我喜欢化淡装,自认为随着年龄的增大,皮�� �变差了,为了让青春永驻容颜不老,经常更换不同品牌的化� ��品,听信了某个化妆品可以增白和改变肤质,用过一段时间 后,结果不但没改变,反面让我的面部出现了大量的色斑。�� �的我的脸一团糟。色斑聚集在脸颊两边,特别粗糙,还经常� ��皮,还有点发红,不过不是红血丝,很容易过敏。现在美白 不成又要祛斑,真是让人心烦啊。祛斑产品用了不少,激光�� �做过,但脸上的斑点就是不能彻底祛掉,还多了些黑印。这� ��症状持续了两年多的时间,如果不是遇到「黛芙薇尔精华液 」,我还真不知道要怎么祛除这可恶的斑点了。<br>   使用「黛芙薇尔精华液」半个多月后,我发现脸上的色�� �颜色就淡了许多,一个月后,斑块明显消褪了不少,皮肤也� ��慢变得白皙起来,脸色也变得红润有光泽了,精神状况也得 到明显的改善,两个周期服完后,简直像变了一个人似的,�� �肤变白了,而且脸上的色斑少了很多,就剩下鼻梁上面还有� ��些很巩固的斑点,看样子还非得再使用一个周期才能把这“ 巩固分子”彻底的祛除了,于是我又订购了一个周期的产品�� �行巩固治疗。<br>   使用完之后,我脸上的色斑终于被我彻底搞定了,而且�� �肤也得到了很好的改善,变得白皙光滑了,看着现在的自己� ��感觉还年轻了不少呢。现在,我一般就用一些无香精,无化 学成分有洗面奶,保养我的皮肤,现在再也不去用什么化妆�� �了,脸上皮肤现在没有斑点,没有痘痘,再加上细腻嫩滑,� ��起来摸起来都不错。 阅读了色斑怎么样才能淡化,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》   1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐�� �去掉吗?   答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新�� �客都是通过老顾客介绍而来,口碑由此而来!   2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技�� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!   3,去除黄褐斑之后,会反弹吗?   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌!我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗?   4,你们的价格有点贵,能不能便宜一点?   答:如果您使用西药最少需要2000元,煎服的药最少需要3 000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗   5,我适合用黛芙薇尔精华液吗?   答:黛芙薇尔适用人群:   1、生理紊乱引起的黄褐斑人群   2、生育引起的妊娠斑人群   3、年纪增长引起的老年斑人群   4、化妆品色素沉积、辐射斑人群   5、长期日照引起的日晒斑人群   6、肌肤暗淡急需美白的人群 《祛斑小方法》 色斑怎么样才能淡化,同时为您分享祛斑小方法 每天喝一杯西红柿汁或常吃西红柿,对防止祛斑有较好的作�� �。西红柿中含有丰富的谷胱甘肽,谷胱甘肽可抑制黑色素,� ��而使沉着的色素减退或消失。 ``` ----- Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 4:05
defect
求助色斑怎么样才能淡化 《摘要》 化妆品已经普遍到每个人,不管是学生或是年轻人,还是老�� �人,都希望自己的皮肤白嫩细腻。但是皮肤粗糙长斑困扰着� ��多女性。大部分女性都经历过这种事情,白皙的皮肤莫名的 长斑,皮肤变的暗黄粗糙。只是一味的用美白祛斑的化妆品�� �不但皮肤没有得到改善,反而黄褐斑更重,皮肤更粗糙发黑� ��不化妆都不敢出门,全靠涂抹厚厚的化妆品遮盖,长此以往 造成脸上黄褐斑越来越严重,那么怎么样去除色斑怎么解决�� �。色斑怎么样才能淡化, 《客户案例》   林小姐   没有色斑了,那里那个乐啊,看着现在白白嫩嫩的皮肤�� �我还真不敢相信还能拥有今天这样的美丽肌肤。   想想以前,我喜欢化淡装,自认为随着年龄的增大,皮�� �变差了,为了让青春永驻容颜不老,经常更换不同品牌的化� ��品,听信了某个化妆品可以增白和改变肤质,用过一段时间 后,结果不但没改变,反面让我的面部出现了大量的色斑。�� �的我的脸一团糟。色斑聚集在脸颊两边,特别粗糙,还经常� ��皮,还有点发红,不过不是红血丝,很容易过敏。现在美白 不成又要祛斑,真是让人心烦啊。祛斑产品用了不少,激光�� �做过,但脸上的斑点就是不能彻底祛掉,还多了些黑印。这� ��症状持续了两年多的时间,如果不是遇到「黛芙薇尔精华液 」,我还真不知道要怎么祛除这可恶的斑点了。   使用「黛芙薇尔精华液」半个多月后,我发现脸上的色�� �颜色就淡了许多,一个月后,斑块明显消褪了不少,皮肤也� ��慢变得白皙起来,脸色也变得红润有光泽了,精神状况也得 到明显的改善,两个周期服完后,简直像变了一个人似的,�� �肤变白了,而且脸上的色斑少了很多,就剩下鼻梁上面还有� ��些很巩固的斑点,看样子还非得再使用一个周期才能把这“ 巩固分子”彻底的祛除了,于是我又订购了一个周期的产品�� �行巩固治疗。   使用完之后,我脸上的色斑终于被我彻底搞定了,而且�� �肤也得到了很好的改善,变得白皙光滑了,看着现在的自己� ��感觉还年轻了不少呢。现在,我一般就用一些无香精,无化 学成分有洗面奶,保养我的皮肤,现在再也不去用什么化妆�� �了,脸上皮肤现在没有斑点,没有痘痘,再加上细腻嫩滑,� ��起来摸起来都不错。 阅读了色斑怎么样才能淡化,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》    黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗   答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来    ,服用黛芙薇尔美白,会伤身体吗 有副作用吗   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖    ,去除黄褐斑之后,会反弹吗   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗    ,你们的价格有点贵,能不能便宜一点   答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗    ,我适合用黛芙薇尔精华液吗   答:黛芙薇尔适用人群:    、生理紊乱引起的黄褐斑人群    、生育引起的妊娠斑人群    、年纪增长引起的老年斑人群    、化妆品色素沉积、辐射斑人群    、长期日照引起的日晒斑人群    、肌肤暗淡急需美白的人群 《祛斑小方法》 色斑怎么样才能淡化,同时为您分享祛斑小方法 每天喝一杯西红柿汁或常吃西红柿,对防止祛斑有较好的作�� �。西红柿中含有丰富的谷胱甘肽,谷胱甘肽可抑制黑色素,� ��而使沉着的色素减退或消失。 original issue reported on code google com by additive gmail com on jul at
1
332,961
24,356,857,189
IssuesEvent
2022-10-03 08:15:48
nspcc-dev/neofs-contract
https://api.github.com/repos/nspcc-dev/neofs-contract
opened
`netmap`: Package docs are not relevant
bug documentation netmap triage
Docs placed in `doc.go` are not relevant. We need to fix them, and especially notification structures. See @AnnaShaleva comments in #276. As an extra task we need to check out all other contracts.
1.0
`netmap`: Package docs are not relevant - Docs placed in `doc.go` are not relevant. We need to fix them, and especially notification structures. See @AnnaShaleva comments in #276. As an extra task we need to check out all other contracts.
non_defect
netmap package docs are not relevant docs placed in doc go are not relevant we need to fix them and especially notification structures see annashaleva comments in as an extra task we need to check out all other contracts
0
162,458
6,153,732,061
IssuesEvent
2017-06-28 10:44:07
BinPar/PRM
https://api.github.com/repos/BinPar/PRM
closed
PASAR TIPO "CONGRESO" A TIPO "PROFESIONAL"
Priority: Medium
Confirmado con el resto de filiales. Los contactos con Tipo Congreso se pueden pasar a Profesional y eliminar CONGRESO como Tipo de contacto ![image](https://cloud.githubusercontent.com/assets/22589031/25815991/edc2adbc-3422-11e7-9779-86620666fad3.png) Contactos filiales: salvo 1 contacto de Colombia, que era Docente, el resto son Profesionales. Contactos España: pasan todos a Profesionales @CristianBinpar @minigoBinpar
1.0
PASAR TIPO "CONGRESO" A TIPO "PROFESIONAL" - Confirmado con el resto de filiales. Los contactos con Tipo Congreso se pueden pasar a Profesional y eliminar CONGRESO como Tipo de contacto ![image](https://cloud.githubusercontent.com/assets/22589031/25815991/edc2adbc-3422-11e7-9779-86620666fad3.png) Contactos filiales: salvo 1 contacto de Colombia, que era Docente, el resto son Profesionales. Contactos España: pasan todos a Profesionales @CristianBinpar @minigoBinpar
non_defect
pasar tipo congreso a tipo profesional confirmado con el resto de filiales los contactos con tipo congreso se pueden pasar a profesional y eliminar congreso como tipo de contacto contactos filiales salvo contacto de colombia que era docente el resto son profesionales contactos españa pasan todos a profesionales cristianbinpar minigobinpar
0
66,514
20,254,202,902
IssuesEvent
2022-02-14 21:09:45
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
closed
zfs recv fails silently when specifying -x encroot, regression against 2.1.2
Type: Defect
### System information Type | Version/Name --- | --- Distribution Name | Debian Distribution Version | Bullseye Kernel Version | Linux tarta 5.10.0-11-amd64 #1 SMP Debian 5.10.92-1 (2022-01-18) x86_64 GNU/Linux OpenZFS Version | 9f734e81f42b929920428b824659405e8710819f (current HEAD), against zfs-2.1.2-1.1 (this is also the module version) ### Describe the problem you're observing 2.1.2: ``` # zfs send -RLc dataset@snapshot | zfs recv -x encryption -x encryptionroot -x keylocation -x keyformat -ev tarta-zoot cannot receive: invalid property 'encryptionroot' ``` 9f734e81f42b929920428b824659405e8710819f: ``` # zfs send -RLc dataset@snapshot | cmd/zfs/zfs recv -x encryption -x encryptionroot -x keylocation -x keyformat -ev tarta-zoot ``` (nothing happens, $? nonzero)
1.0
zfs recv fails silently when specifying -x encroot, regression against 2.1.2 - ### System information Type | Version/Name --- | --- Distribution Name | Debian Distribution Version | Bullseye Kernel Version | Linux tarta 5.10.0-11-amd64 #1 SMP Debian 5.10.92-1 (2022-01-18) x86_64 GNU/Linux OpenZFS Version | 9f734e81f42b929920428b824659405e8710819f (current HEAD), against zfs-2.1.2-1.1 (this is also the module version) ### Describe the problem you're observing 2.1.2: ``` # zfs send -RLc dataset@snapshot | zfs recv -x encryption -x encryptionroot -x keylocation -x keyformat -ev tarta-zoot cannot receive: invalid property 'encryptionroot' ``` 9f734e81f42b929920428b824659405e8710819f: ``` # zfs send -RLc dataset@snapshot | cmd/zfs/zfs recv -x encryption -x encryptionroot -x keylocation -x keyformat -ev tarta-zoot ``` (nothing happens, $? nonzero)
defect
zfs recv fails silently when specifying x encroot regression against system information type version name distribution name debian distribution version bullseye kernel version linux tarta smp debian gnu linux openzfs version current head against zfs this is also the module version describe the problem you re observing zfs send rlc dataset snapshot zfs recv x encryption x encryptionroot x keylocation x keyformat ev tarta zoot cannot receive invalid property encryptionroot zfs send rlc dataset snapshot cmd zfs zfs recv x encryption x encryptionroot x keylocation x keyformat ev tarta zoot nothing happens nonzero
1
168,660
14,169,314,389
IssuesEvent
2020-11-12 13:03:41
eclipse/iceoryx
https://api.github.com/repos/eclipse/iceoryx
closed
Update CONTRIBUTING.md for multiple copyright owner
CW47-20-I documentation
## Brief feature description Define how multiple copyright owner shall be added to a license header. ## Detailed information Since iceoryx is an open source project and multiple entities can contribute code, the CONTRIBUTION.md shall show how their copyright is added to the source files
1.0
Update CONTRIBUTING.md for multiple copyright owner - ## Brief feature description Define how multiple copyright owner shall be added to a license header. ## Detailed information Since iceoryx is an open source project and multiple entities can contribute code, the CONTRIBUTION.md shall show how their copyright is added to the source files
non_defect
update contributing md for multiple copyright owner brief feature description define how multiple copyright owner shall be added to a license header detailed information since iceoryx is an open source project and multiple entities can contribute code the contribution md shall show how their copyright is added to the source files
0
28,148
5,200,181,525
IssuesEvent
2017-01-23 23:00:20
jccastillo0007/eFacturaT
https://api.github.com/repos/jccastillo0007/eFacturaT
opened
Optibelt - Las notas de crédito emitidas, aparecen como facturas en el web
bug defect
Todas las notas de crédito que envía el super conector, aparecen como facturas en el web. Recordar que la nota de crédito es egreso y las facturas son ingreso. Para un formato personalizado de notas de crédito, ocupa el mismo PDF, pero renombrar a cfdi_notaCredito_OME111223UN4.jrxml Necesitamos que se guarden y reconozcan como notas de crédito, incluyendo las que ya existen ahora mismo.
1.0
Optibelt - Las notas de crédito emitidas, aparecen como facturas en el web - Todas las notas de crédito que envía el super conector, aparecen como facturas en el web. Recordar que la nota de crédito es egreso y las facturas son ingreso. Para un formato personalizado de notas de crédito, ocupa el mismo PDF, pero renombrar a cfdi_notaCredito_OME111223UN4.jrxml Necesitamos que se guarden y reconozcan como notas de crédito, incluyendo las que ya existen ahora mismo.
defect
optibelt las notas de crédito emitidas aparecen como facturas en el web todas las notas de crédito que envía el super conector aparecen como facturas en el web recordar que la nota de crédito es egreso y las facturas son ingreso para un formato personalizado de notas de crédito ocupa el mismo pdf pero renombrar a cfdi notacredito jrxml necesitamos que se guarden y reconozcan como notas de crédito incluyendo las que ya existen ahora mismo
1
189,393
15,187,283,325
IssuesEvent
2021-02-15 13:35:28
RespiraWorks/Ventilator
https://api.github.com/repos/RespiraWorks/Ventilator
closed
Document 0.3 build enclosure
Documentation Mechanical
**Folder/Page:** There should be a page for the beta build enclosure. Somewhere under beta, I guess. **Task:** Get this info from Randall and Tijani and Donna, possibly? some of this was discussed in one of the #eng- channels. There were discussions about what angle things should be bent at and how that informed selection of manufacturer. Any of this stuff that impacts knowing how to properly get it made should be somehow documented. What sort of machine is needed for cutting the acrylic stuff? Maybe links to some companies that are good candidates for getting it done? But fundamental criteria for selection would be better. **Subtasks:** * [ ] CAD files for sheet metal stuff * [ ] CAD files for acrylic stuff **Relevant files:** Probably exported from Fusion or whatever they are using.
1.0
Document 0.3 build enclosure - **Folder/Page:** There should be a page for the beta build enclosure. Somewhere under beta, I guess. **Task:** Get this info from Randall and Tijani and Donna, possibly? some of this was discussed in one of the #eng- channels. There were discussions about what angle things should be bent at and how that informed selection of manufacturer. Any of this stuff that impacts knowing how to properly get it made should be somehow documented. What sort of machine is needed for cutting the acrylic stuff? Maybe links to some companies that are good candidates for getting it done? But fundamental criteria for selection would be better. **Subtasks:** * [ ] CAD files for sheet metal stuff * [ ] CAD files for acrylic stuff **Relevant files:** Probably exported from Fusion or whatever they are using.
non_defect
document build enclosure folder page there should be a page for the beta build enclosure somewhere under beta i guess task get this info from randall and tijani and donna possibly some of this was discussed in one of the eng channels there were discussions about what angle things should be bent at and how that informed selection of manufacturer any of this stuff that impacts knowing how to properly get it made should be somehow documented what sort of machine is needed for cutting the acrylic stuff maybe links to some companies that are good candidates for getting it done but fundamental criteria for selection would be better subtasks cad files for sheet metal stuff cad files for acrylic stuff relevant files probably exported from fusion or whatever they are using
0
76,503
9,458,971,533
IssuesEvent
2019-04-17 07:16:20
wq/wq.app
https://api.github.com/repos/wq/wq.app
closed
wq/online Android Issue
enhancement needs concept design
When using wq/online on Android devices, the online.js module is sometimes offline, even when online. This results in items going into the outbox. It seems that navigator.onLine does not always work reliably. I recommend we make an AJAX request to determine if the status is online as a fail safe.
1.0
wq/online Android Issue - When using wq/online on Android devices, the online.js module is sometimes offline, even when online. This results in items going into the outbox. It seems that navigator.onLine does not always work reliably. I recommend we make an AJAX request to determine if the status is online as a fail safe.
non_defect
wq online android issue when using wq online on android devices the online js module is sometimes offline even when online this results in items going into the outbox it seems that navigator online does not always work reliably i recommend we make an ajax request to determine if the status is online as a fail safe
0
10,062
8,790,387,219
IssuesEvent
2018-12-21 08:52:48
BlueBrain/nexus
https://api.github.com/repos/BlueBrain/nexus
opened
Add serialization of events into the primary store
admin services
- `Encoder[ProjectEvent]` - `Decoder[ProjectEvent]` - `Encoder[OrganizationEvent]` - `Decoder[OrganizationEvent]`
1.0
Add serialization of events into the primary store - - `Encoder[ProjectEvent]` - `Decoder[ProjectEvent]` - `Encoder[OrganizationEvent]` - `Decoder[OrganizationEvent]`
non_defect
add serialization of events into the primary store encoder decoder encoder decoder
0
9,646
8,684,719,531
IssuesEvent
2018-12-03 03:54:34
ssube/isolex
https://api.github.com/repos/ssube/isolex
opened
auth controller permissions
service/controller status/planned type/feature
### Summary `User`s belong to `Group`s, which have `Role`s, which are granted permissions (or scopes). The `AuthController` should be able to compile the list of applicable permissions for a particular `User` and check to make sure a list of permissions are all present. ### Scope - [ ] permission - [ ] compile (from user, roles, etc) - [ ] check - [ ] user - [ ] update ### Use Case The k8s controllers should require permissions to scale or update resources. The auth controller should require permissions to edit users, issue tokens, etc. ### Issues Need to resolve the full set of roles that apply to a user and compile their permissions in a semi-performant way. Maybe look at TypeORM caching (redis?). ### Details Use [shiro-trie](https://www.npmjs.com/package/shiro-trie) for permission tests.
1.0
auth controller permissions - ### Summary `User`s belong to `Group`s, which have `Role`s, which are granted permissions (or scopes). The `AuthController` should be able to compile the list of applicable permissions for a particular `User` and check to make sure a list of permissions are all present. ### Scope - [ ] permission - [ ] compile (from user, roles, etc) - [ ] check - [ ] user - [ ] update ### Use Case The k8s controllers should require permissions to scale or update resources. The auth controller should require permissions to edit users, issue tokens, etc. ### Issues Need to resolve the full set of roles that apply to a user and compile their permissions in a semi-performant way. Maybe look at TypeORM caching (redis?). ### Details Use [shiro-trie](https://www.npmjs.com/package/shiro-trie) for permission tests.
non_defect
auth controller permissions summary user s belong to group s which have role s which are granted permissions or scopes the authcontroller should be able to compile the list of applicable permissions for a particular user and check to make sure a list of permissions are all present scope permission compile from user roles etc check user update use case the controllers should require permissions to scale or update resources the auth controller should require permissions to edit users issue tokens etc issues need to resolve the full set of roles that apply to a user and compile their permissions in a semi performant way maybe look at typeorm caching redis details use for permission tests
0
40,360
9,967,276,459
IssuesEvent
2019-07-08 13:16:57
idaholab/moose
https://api.github.com/repos/idaholab/moose
closed
InterfaceKernelBase should inherit from PostprocessorInterface
C: MOOSE T: defect
## Bug Description <!--A clear and concise description of the problem (Note: A missing feature is not a bug).--> It is not possible to coupled a Postprocessor to an InterfaceKernel. ## Steps to Reproduce <!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)--> Try to call getPostprocessorValue in an InterfaceKernel, it doesn't work. ## Impact <!--Does this prevent you from getting your work done, or is it more of an annoyance?--> A user reported the problem on the mailing list, until this is fixed they should be able to inherit from PostprocessorInterface directly.
1.0
InterfaceKernelBase should inherit from PostprocessorInterface - ## Bug Description <!--A clear and concise description of the problem (Note: A missing feature is not a bug).--> It is not possible to coupled a Postprocessor to an InterfaceKernel. ## Steps to Reproduce <!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)--> Try to call getPostprocessorValue in an InterfaceKernel, it doesn't work. ## Impact <!--Does this prevent you from getting your work done, or is it more of an annoyance?--> A user reported the problem on the mailing list, until this is fixed they should be able to inherit from PostprocessorInterface directly.
defect
interfacekernelbase should inherit from postprocessorinterface bug description it is not possible to coupled a postprocessor to an interfacekernel steps to reproduce try to call getpostprocessorvalue in an interfacekernel it doesn t work impact a user reported the problem on the mailing list until this is fixed they should be able to inherit from postprocessorinterface directly
1
99,854
4,073,717,268
IssuesEvent
2016-05-28 00:17:13
ampproject/amphtml
https://api.github.com/repos/ampproject/amphtml
closed
Re-attached elements are being built again
Priority: High Related to: AMP Core
Follow up to #3354 We sometimes re-parent children to wrap them in a container (e.g. Carousel, Lightbox). But those children may have already been built. But since we fire another `attachedCallback`, we'll add them to the unbuilts again, and try to build again. It's cool to add them to the list again, but we should make sure they're really unbuilt before trying to #build them again. I believe we're already doing this in `ElementProto.build` method. ```javascript ElementProto.build = function() { // ... if (this.isBuilt()) { return; } // ... } ``` @jridgewell should this be enough? or were you suggesting avoid adding them to the pool all together?
1.0
Re-attached elements are being built again - Follow up to #3354 We sometimes re-parent children to wrap them in a container (e.g. Carousel, Lightbox). But those children may have already been built. But since we fire another `attachedCallback`, we'll add them to the unbuilts again, and try to build again. It's cool to add them to the list again, but we should make sure they're really unbuilt before trying to #build them again. I believe we're already doing this in `ElementProto.build` method. ```javascript ElementProto.build = function() { // ... if (this.isBuilt()) { return; } // ... } ``` @jridgewell should this be enough? or were you suggesting avoid adding them to the pool all together?
non_defect
re attached elements are being built again follow up to we sometimes re parent children to wrap them in a container e g carousel lightbox but those children may have already been built but since we fire another attachedcallback we ll add them to the unbuilts again and try to build again it s cool to add them to the list again but we should make sure they re really unbuilt before trying to build them again i believe we re already doing this in elementproto build method javascript elementproto build function if this isbuilt return jridgewell should this be enough or were you suggesting avoid adding them to the pool all together
0
23,650
3,851,864,947
IssuesEvent
2016-04-06 05:27:36
GPF/imame4all
https://api.github.com/repos/GPF/imame4all
closed
i can't compile(build).
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? - when i compile source that is iMAME4droid Reloaded 1.3.5, i wasn't complete to Compile (because of Occuring Error). - Error Message : $ make ...(compiling)... mkdir -p obj/droid-ios/mame64/tools Compiling src/mame/mamedriv.c... In file included from src/emu/emu.h:53:0, from src/mame/mamedriv.c:18: src/emu/emucore.h:18:18: fatal error: math.h: No such file or directory compilation terminated. makefile:1043: recipe for target `obj/droid-ios/mame64/mame/mamedriv.o' failed make: *** [obj/droid-ios/mame64/mame/mamedriv.o] Error 1 maybe, The make isn't reference 'sysroot'(include library directory).. so Copy, rename, Move, ... but, too. What version of the product are you using? On what operating system? - Cygwin(cross compile), windows7, android-15, ndk-android-r8e(toolchain-4.6, API-14) - help me! I need compile option(of makefile) and tip... thank U for your reading. ``` Original issue reported on code.google.com by `blackfie...@gmail.com` on 3 Aug 2013 at 4:04
1.0
i can't compile(build). - ``` What steps will reproduce the problem? - when i compile source that is iMAME4droid Reloaded 1.3.5, i wasn't complete to Compile (because of Occuring Error). - Error Message : $ make ...(compiling)... mkdir -p obj/droid-ios/mame64/tools Compiling src/mame/mamedriv.c... In file included from src/emu/emu.h:53:0, from src/mame/mamedriv.c:18: src/emu/emucore.h:18:18: fatal error: math.h: No such file or directory compilation terminated. makefile:1043: recipe for target `obj/droid-ios/mame64/mame/mamedriv.o' failed make: *** [obj/droid-ios/mame64/mame/mamedriv.o] Error 1 maybe, The make isn't reference 'sysroot'(include library directory).. so Copy, rename, Move, ... but, too. What version of the product are you using? On what operating system? - Cygwin(cross compile), windows7, android-15, ndk-android-r8e(toolchain-4.6, API-14) - help me! I need compile option(of makefile) and tip... thank U for your reading. ``` Original issue reported on code.google.com by `blackfie...@gmail.com` on 3 Aug 2013 at 4:04
defect
i can t compile build what steps will reproduce the problem when i compile source that is reloaded i wasn t complete to compile because of occuring error error message make compiling mkdir p obj droid ios tools compiling src mame mamedriv c in file included from src emu emu h from src mame mamedriv c src emu emucore h fatal error math h no such file or directory compilation terminated makefile recipe for target obj droid ios mame mamedriv o failed make error maybe the make isn t reference sysroot include library directory so copy rename move but too what version of the product are you using on what operating system cygwin cross compile android ndk android toolchain api help me i need compile option of makefile and tip thank u for your reading original issue reported on code google com by blackfie gmail com on aug at
1
62,605
17,088,553,311
IssuesEvent
2021-07-08 14:39:53
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
The Bug Report issue template is very Linux specific
Status: Triage Needed Type: Defect
The default issue template includes this section of template: ``` ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Distribution Version | Linux Kernel | Architecture | ZFS Version | SPL Version | <!-- Commands to find ZFS/SPL versions: modinfo zfs | grep -iw version modinfo spl | grep -iw version --> ``` This is great for Linux users, but not so applicable for other OSes. For example, in #12337 I put in some information about a Joyent machine.
1.0
The Bug Report issue template is very Linux specific - The default issue template includes this section of template: ``` ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Distribution Version | Linux Kernel | Architecture | ZFS Version | SPL Version | <!-- Commands to find ZFS/SPL versions: modinfo zfs | grep -iw version modinfo spl | grep -iw version --> ``` This is great for Linux users, but not so applicable for other OSes. For example, in #12337 I put in some information about a Joyent machine.
defect
the bug report issue template is very linux specific the default issue template includes this section of template system information type version name distribution name distribution version linux kernel architecture zfs version spl version commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version this is great for linux users but not so applicable for other oses for example in i put in some information about a joyent machine
1
66,677
20,512,946,165
IssuesEvent
2022-03-01 08:51:37
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
sql Service execute, SqlResult set iteration + sql predicate map.values hang after Split brain. in versions 5.0, 5.1
Type: Defect Team: SQL
test http://jenkins.hazelcast.com/view/split/job/split-sql/ /disk1/workspace/split-sql/5.2-SNAPSHOT/2022_02_25-12_07_24/split-sql hangs at hzcmd.map.sql.predicate.PersonIdRange.timeStep(PersonIdRange.java:23) hzcmd.map.sql.service.PersonIdRange.timeStep(PersonIdRange.java:16) hzcmd.map.sql.service.PersonIdRange.timeStep(PersonIdRange.java:19) test http://jenkins.hazelcast.com/view/split/job/split-sql-service/ /disk1/workspace/split-sql-service/5.0/2022_02_25-12_57_37/split-sql-service hangs at hzcmd.map.sql.service.PersonIdRange.timeStep(PersonIdRange.java:16) hzcmd.map.sql.service.PersonIdRange.timeStep(PersonIdRange.java:19) during the test sql queries are run from members and clients, and the cluster is split and then healed multiple stack traces are taken and are printed in the logs of all members / clients, while the calls hang. using version 4.0 the above test pass. also the stable cluster version of these test pass. passing sh hz-bench-run hz/split/split-sql 5 m6g.xlarge ami-05d1de0f40492c232 4.2
1.0
sql Service execute, SqlResult set iteration + sql predicate map.values hang after Split brain. in versions 5.0, 5.1 - test http://jenkins.hazelcast.com/view/split/job/split-sql/ /disk1/workspace/split-sql/5.2-SNAPSHOT/2022_02_25-12_07_24/split-sql hangs at hzcmd.map.sql.predicate.PersonIdRange.timeStep(PersonIdRange.java:23) hzcmd.map.sql.service.PersonIdRange.timeStep(PersonIdRange.java:16) hzcmd.map.sql.service.PersonIdRange.timeStep(PersonIdRange.java:19) test http://jenkins.hazelcast.com/view/split/job/split-sql-service/ /disk1/workspace/split-sql-service/5.0/2022_02_25-12_57_37/split-sql-service hangs at hzcmd.map.sql.service.PersonIdRange.timeStep(PersonIdRange.java:16) hzcmd.map.sql.service.PersonIdRange.timeStep(PersonIdRange.java:19) during the test sql queries are run from members and clients, and the cluster is split and then healed multiple stack traces are taken and are printed in the logs of all members / clients, while the calls hang. using version 4.0 the above test pass. also the stable cluster version of these test pass. passing sh hz-bench-run hz/split/split-sql 5 m6g.xlarge ami-05d1de0f40492c232 4.2
defect
sql service execute sqlresult set iteration sql predicate map values hang after split brain in versions test workspace split sql snapshot split sql hangs at hzcmd map sql predicate personidrange timestep personidrange java hzcmd map sql service personidrange timestep personidrange java hzcmd map sql service personidrange timestep personidrange java test workspace split sql service split sql service hangs at hzcmd map sql service personidrange timestep personidrange java hzcmd map sql service personidrange timestep personidrange java during the test sql queries are run from members and clients and the cluster is split and then healed multiple stack traces are taken and are printed in the logs of all members clients while the calls hang using version the above test pass also the stable cluster version of these test pass passing sh hz bench run hz split split sql xlarge ami
1
52,790
3,029,541,376
IssuesEvent
2015-08-04 13:14:12
thesgc/chembiohub_helpdesk
https://api.github.com/repos/thesgc/chembiohub_helpdesk
opened
just my opinion - edit mode signalling could be more obvious in search results page. I appreciate t
app: ChemReg name: Karen priority: Low status: New
just my opinion - edit mode signalling could be more obvious in search results page. I appreciate the button is greyed, and each cell has a pencil icon but my inclination would be to perhaps make the button a different colour or/and add text to hammer home the message.
1.0
just my opinion - edit mode signalling could be more obvious in search results page. I appreciate t - just my opinion - edit mode signalling could be more obvious in search results page. I appreciate the button is greyed, and each cell has a pencil icon but my inclination would be to perhaps make the button a different colour or/and add text to hammer home the message.
non_defect
just my opinion edit mode signalling could be more obvious in search results page i appreciate t just my opinion edit mode signalling could be more obvious in search results page i appreciate the button is greyed and each cell has a pencil icon but my inclination would be to perhaps make the button a different colour or and add text to hammer home the message
0
72,520
24,163,145,017
IssuesEvent
2022-09-22 13:13:28
trimble-oss/dba-dash
https://api.github.com/repos/trimble-oss/dba-dash
closed
Arithmetic overflow error converting float to data type numeric
defect DBA Dash Agent Completed
Hi David, I got to know about the DBADash some days ago and after some testing and evaluations I got it installed on start collecting data. I have found some smaller issues in the application, among them this one shown in the image. This happens only on one of my SQL instances (SQL 2017 Standard Edition (64-bit) RTM CU29). ![math_overflow](https://user-images.githubusercontent.com/40255900/191668346-0765932d-0fb4-4f92-aab2-14460539d89e.png)
1.0
Arithmetic overflow error converting float to data type numeric - Hi David, I got to know about the DBADash some days ago and after some testing and evaluations I got it installed on start collecting data. I have found some smaller issues in the application, among them this one shown in the image. This happens only on one of my SQL instances (SQL 2017 Standard Edition (64-bit) RTM CU29). ![math_overflow](https://user-images.githubusercontent.com/40255900/191668346-0765932d-0fb4-4f92-aab2-14460539d89e.png)
defect
arithmetic overflow error converting float to data type numeric hi david i got to know about the dbadash some days ago and after some testing and evaluations i got it installed on start collecting data i have found some smaller issues in the application among them this one shown in the image this happens only on one of my sql instances sql standard edition bit rtm
1
30,975
6,383,587,467
IssuesEvent
2017-08-03 00:54:25
prettydiff/prettydiff
https://api.github.com/repos/prettydiff/prettydiff
closed
output bad code
Defect Parsing Pending Release
I tested minify below code. but output is bad code. `if(a)for(;a;);else b` -> `if(a){for(;a;){};else{b}}` `do do do;while(0);while(0);while(0)` -> `do{do{do{};while(0)}}while(0);while(0){`
1.0
output bad code - I tested minify below code. but output is bad code. `if(a)for(;a;);else b` -> `if(a){for(;a;){};else{b}}` `do do do;while(0);while(0);while(0)` -> `do{do{do{};while(0)}}while(0);while(0){`
defect
output bad code i tested minify below code but output is bad code if a for a else b if a for a else b do do do while while while do do do while while while
1
284,066
30,913,588,495
IssuesEvent
2023-08-05 02:19:34
panasalap/linux-4.19.72_Fix
https://api.github.com/repos/panasalap/linux-4.19.72_Fix
reopened
CVE-2020-29371 (Low) detected in linux-yoctov5.4.51
Mend: dependency security vulnerability
## CVE-2020-29371 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary> <p> <p>Yocto Linux Embedded kernel</p> <p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p> <p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.19.72/commit/fc232d9ef12e2320ea3e8cb3de916a34aad68b6a">fc232d9ef12e2320ea3e8cb3de916a34aad68b6a</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/romfs/storage.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/romfs/storage.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/romfs/storage.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in romfs_dev_read in fs/romfs/storage.c in the Linux kernel before 5.8.4. Uninitialized memory leaks to userspace, aka CID-bcf85fcedfdd. <p>Publish Date: 2020-11-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-29371>CVE-2020-29371</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29371">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29371</a></p> <p>Release Date: 2020-11-28</p> <p>Fix Resolution: v5.9-rc2,v5.8.4,v5.7.18,v5.4.61</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-29371 (Low) detected in linux-yoctov5.4.51 - ## CVE-2020-29371 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary> <p> <p>Yocto Linux Embedded kernel</p> <p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p> <p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.19.72/commit/fc232d9ef12e2320ea3e8cb3de916a34aad68b6a">fc232d9ef12e2320ea3e8cb3de916a34aad68b6a</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/romfs/storage.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/romfs/storage.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/romfs/storage.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in romfs_dev_read in fs/romfs/storage.c in the Linux kernel before 5.8.4. Uninitialized memory leaks to userspace, aka CID-bcf85fcedfdd. <p>Publish Date: 2020-11-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-29371>CVE-2020-29371</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29371">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29371</a></p> <p>Release Date: 2020-11-28</p> <p>Fix Resolution: v5.9-rc2,v5.8.4,v5.7.18,v5.4.61</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve low detected in linux cve low severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in head commit a href found in base branch master vulnerable source files fs romfs storage c fs romfs storage c fs romfs storage c vulnerability details an issue was discovered in romfs dev read in fs romfs storage c in the linux kernel before uninitialized memory leaks to userspace aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
4,770
2,610,155,483
IssuesEvent
2015-02-26 18:49:25
chrsmith/republic-at-war
https://api.github.com/repos/chrsmith/republic-at-war
closed
Music
auto-migrated Priority-Medium Type-Defect
``` increase new music volume ``` ----- Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:22
1.0
Music - ``` increase new music volume ``` ----- Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:22
defect
music increase new music volume original issue reported on code google com by gmail com on jan at
1
23,155
3,771,338,787
IssuesEvent
2016-03-16 17:17:37
bridgedotnet/Bridge
https://api.github.com/repos/bridgedotnet/Bridge
closed
Emit issue when using += or -= operator on dictionary values
defect
Related to forum post: http://forums.bridge.net/forum/bridge-net-pro/bugs/1783 Live Bridge sample: http://live.bridge.net/#82f9b2ca69a1505af3f9 ### Expected not sure ### Actual ```javascript dict.set(0, +1); ``` ### Steps To Reproduce ```csharp public class App { [Ready] public static void Main() { var dict = new Dictionary<int, int>(); dict.Add(0, 5); dict[0] += 1; Global.alert(dict[0]); } } ```
1.0
Emit issue when using += or -= operator on dictionary values - Related to forum post: http://forums.bridge.net/forum/bridge-net-pro/bugs/1783 Live Bridge sample: http://live.bridge.net/#82f9b2ca69a1505af3f9 ### Expected not sure ### Actual ```javascript dict.set(0, +1); ``` ### Steps To Reproduce ```csharp public class App { [Ready] public static void Main() { var dict = new Dictionary<int, int>(); dict.Add(0, 5); dict[0] += 1; Global.alert(dict[0]); } } ```
defect
emit issue when using or operator on dictionary values related to forum post live bridge sample expected not sure actual javascript dict set steps to reproduce csharp public class app public static void main var dict new dictionary dict add dict global alert dict
1
14,062
2,789,880,342
IssuesEvent
2015-05-08 22:07:53
google/google-visualization-api-issues
https://api.github.com/repos/google/google-visualization-api-issues
closed
BUG: Problem when there are multiple chart in a page
Priority-Medium Type-Defect
Original [issue 405](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=405) created by orwant on 2010-09-08T11:36:14.000Z: I have one line char and one motion chart in a same page, and it use to work perfectly. But today, only one chart is shown at a time. In Chrome, first chart is display, but second chart is not displayed. In IE, second chart is displayed, but the first chart is not displayed. It is the new problem that started to happen from today (as far as I know). It seems some new code in gogle visualization had introduced this problem. Anyone had experienced similar problem?? <b>What component is this issue related to (PieChart, LineChart, DataTable,</b> <b>Query, etc)?</b> LineChart &amp; MotionChart <b>Are you using the test environment (version 1.1)?</b> <b>(If you are not sure, answer NO)</b> NO <b>What operating system and browser are you using?</b> Windows7 Chrome Windows7 IE <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
1.0
BUG: Problem when there are multiple chart in a page - Original [issue 405](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=405) created by orwant on 2010-09-08T11:36:14.000Z: I have one line char and one motion chart in a same page, and it use to work perfectly. But today, only one chart is shown at a time. In Chrome, first chart is display, but second chart is not displayed. In IE, second chart is displayed, but the first chart is not displayed. It is the new problem that started to happen from today (as far as I know). It seems some new code in gogle visualization had introduced this problem. Anyone had experienced similar problem?? <b>What component is this issue related to (PieChart, LineChart, DataTable,</b> <b>Query, etc)?</b> LineChart &amp; MotionChart <b>Are you using the test environment (version 1.1)?</b> <b>(If you are not sure, answer NO)</b> NO <b>What operating system and browser are you using?</b> Windows7 Chrome Windows7 IE <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
defect
bug problem when there are multiple chart in a page original created by orwant on i have one line char and one motion chart in a same page and it use to work perfectly but today only one chart is shown at a time in chrome first chart is display but second chart is not displayed in ie second chart is displayed but the first chart is not displayed it is the new problem that started to happen from today as far as i know it seems some new code in gogle visualization had introduced this problem anyone had experienced similar problem what component is this issue related to piechart linechart datatable query etc linechart amp motionchart are you using the test environment version if you are not sure answer no no what operating system and browser are you using chrome ie for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved
1
118,209
9,978,387,221
IssuesEvent
2019-07-09 19:45:03
zeroc-ice/ice
https://api.github.com/repos/zeroc-ice/ice
closed
IceSSL/configuration certificate verification failure on bionic64
icessl testsuite
Occurred on both bionic64 and bionic64arm with distribution testing. ``` *** [61/91] Running cpp/IceSSL/configuration tests *** [ running client/server test - 07/03/19 14:32:32 ] - Config: amd64 (/home/vagrant/workspace/ice-dist/3.7/dist-utils/build/ice/builds/ice-g++-default/cpp/test/IceSSL/configuration/build/x86_64-linux-gnu/shared/server --Ice.Warn.Connections=1 --Ice.PrintAdapterReady=1 --Ice.NullHandleAbort=1 --Ice.Default.Protocol=tcp --Ice.ThreadPool.Server.Size=1 --Ice.Default.Host=127.0.0.1 --Ice.IPv6=0 --Ice.ThreadPool.Server.SizeMax=3 --Test.BasePort=14100 --Ice.PrintStackTraces=1 --Ice.ThreadPool.Server.SizeWarn=0 "/home/vagrant/workspace/ice-dist/3.7/dist-utils/build/ice/builds/ice-g++-default/cpp/test/IceSSL/configuration" env={'LD_LIBRARY_PATH': '/usr/lib/x86_64-linux-gnu'}) (/home/vagrant/workspace/ice-dist/3.7/dist-utils/build/ice/builds/ice-g++-default/cpp/test/IceSSL/configuration/build/x86_64-linux-gnu/shared/client --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.IPv6=0 --Ice.PrintStackTraces=1 --Test.BasePort=14100 --Ice.NullHandleAbort=1 --Ice.Default.Protocol=tcp "/home/vagrant/workspace/ice-dist/3.7/dist-utils/build/ice/builds/ice-g++-default/cpp/test/IceSSL/configuration" env={'LD_LIBRARY_PATH': '/usr/lib/x86_64-linux-gnu'}) testing with PKCS12 certificates... testing manual initialization... ok testing certificate verification... ok testing certificate info... ok testing certificate chains... ok testing certificate extensions... ok testing custom certificate verifier... ok testing protocols... ok testing expired certificates... ok testing CA certificate directory... ok testing multiple CA certificates... ok testing password prompt... ok testing ciphers... src/IceSSL/OpenSSLTransceiverI.cpp:314: ::Ice::SecurityException: security exception: IceSSL: certificate verification failed: self signed certificate in certificate chain failed! test/IceSSL/configuration/AllTests.cpp:2678: assertion `false' failed ```
1.0
IceSSL/configuration certificate verification failure on bionic64 - Occurred on both bionic64 and bionic64arm with distribution testing. ``` *** [61/91] Running cpp/IceSSL/configuration tests *** [ running client/server test - 07/03/19 14:32:32 ] - Config: amd64 (/home/vagrant/workspace/ice-dist/3.7/dist-utils/build/ice/builds/ice-g++-default/cpp/test/IceSSL/configuration/build/x86_64-linux-gnu/shared/server --Ice.Warn.Connections=1 --Ice.PrintAdapterReady=1 --Ice.NullHandleAbort=1 --Ice.Default.Protocol=tcp --Ice.ThreadPool.Server.Size=1 --Ice.Default.Host=127.0.0.1 --Ice.IPv6=0 --Ice.ThreadPool.Server.SizeMax=3 --Test.BasePort=14100 --Ice.PrintStackTraces=1 --Ice.ThreadPool.Server.SizeWarn=0 "/home/vagrant/workspace/ice-dist/3.7/dist-utils/build/ice/builds/ice-g++-default/cpp/test/IceSSL/configuration" env={'LD_LIBRARY_PATH': '/usr/lib/x86_64-linux-gnu'}) (/home/vagrant/workspace/ice-dist/3.7/dist-utils/build/ice/builds/ice-g++-default/cpp/test/IceSSL/configuration/build/x86_64-linux-gnu/shared/client --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.IPv6=0 --Ice.PrintStackTraces=1 --Test.BasePort=14100 --Ice.NullHandleAbort=1 --Ice.Default.Protocol=tcp "/home/vagrant/workspace/ice-dist/3.7/dist-utils/build/ice/builds/ice-g++-default/cpp/test/IceSSL/configuration" env={'LD_LIBRARY_PATH': '/usr/lib/x86_64-linux-gnu'}) testing with PKCS12 certificates... testing manual initialization... ok testing certificate verification... ok testing certificate info... ok testing certificate chains... ok testing certificate extensions... ok testing custom certificate verifier... ok testing protocols... ok testing expired certificates... ok testing CA certificate directory... ok testing multiple CA certificates... ok testing password prompt... ok testing ciphers... src/IceSSL/OpenSSLTransceiverI.cpp:314: ::Ice::SecurityException: security exception: IceSSL: certificate verification failed: self signed certificate in certificate chain failed! test/IceSSL/configuration/AllTests.cpp:2678: assertion `false' failed ```
non_defect
icessl configuration certificate verification failure on occurred on both and with distribution testing running cpp icessl configuration tests config home vagrant workspace ice dist dist utils build ice builds ice g default cpp test icessl configuration build linux gnu shared server ice warn connections ice printadapterready ice nullhandleabort ice default protocol tcp ice threadpool server size ice default host ice ice threadpool server sizemax test baseport ice printstacktraces ice threadpool server sizewarn home vagrant workspace ice dist dist utils build ice builds ice g default cpp test icessl configuration env ld library path usr lib linux gnu home vagrant workspace ice dist dist utils build ice builds ice g default cpp test icessl configuration build linux gnu shared client ice default host ice warn connections ice ice printstacktraces test baseport ice nullhandleabort ice default protocol tcp home vagrant workspace ice dist dist utils build ice builds ice g default cpp test icessl configuration env ld library path usr lib linux gnu testing with certificates testing manual initialization ok testing certificate verification ok testing certificate info ok testing certificate chains ok testing certificate extensions ok testing custom certificate verifier ok testing protocols ok testing expired certificates ok testing ca certificate directory ok testing multiple ca certificates ok testing password prompt ok testing ciphers src icessl openssltransceiveri cpp ice securityexception security exception icessl certificate verification failed self signed certificate in certificate chain failed test icessl configuration alltests cpp assertion false failed
0
76,537
26,481,603,220
IssuesEvent
2023-01-17 15:03:57
OpenMS/OpenMS
https://api.github.com/repos/OpenMS/OpenMS
closed
compressed mzXML leads to NaNs on Windows [607]
defect major TOPPView
Submitted by witek96 on 2013-09-05 10:37:34 Plaese see attached image. On the left TOPPView 1.11 under windows on the right TOPPView 1.11 on Linux displaying exactly the same file. I do not attach the mzXML, but can provide it to the ticket taker. regards --- Commented by cbielow on 2013-09-05 11:28:37: So, your picture is trying to say that both the intensities and even the presence of peaks is different?! Or is it the axis labeling or what? The spectrum selection indicates that you might compare apples with oranges (#8 on Win, #9 on Linux), but it might just be a pre-selection. Impossible to tell just from the picture. So, can you please confirm that we see the exact same spectrum. If its just this spectrum, you can easily attach it, since its just a few KBs. --- Commented by witek96 on 2013-09-05 11:43:53: Hi Chris, I see you doubt. I doubted too but. Please find attached the 2D view of the same dataset on windows and linux. --- Commented by witek96 on 2013-09-05 11:54:04: It is not fun creating tickets attaching screenshots (especially having a limit of 250K). --- Commented by witek96 on 2013-09-05 11:58:13: I send you the mzXML file with our Cifex download. regards --- Commented by cbielow on 2013-09-05 14:17:46: I can confirm the screenshots (both 1D and 2D). The reason is that the data in the mzXML file is partially interpreted as 'nan' by our parser -- but only on Windows. Linux seems fine. This leads to this weird display behaviour. Workaround: convert this mzXML file using PWiz to mz(X)ML (both work) WITHOUT COMPRESSION, which OpenMS can then read. When converting the original mzXML to mzML using FileConverter, the nan's remain. Creating dta2D gives data like # SEC MZ INT 3000.06 nan 2.59712 3000.06 nan nan 3000.06 nan nan 3000.06 nan nan 3000.06 69.0713154294649 15.8147 So, the reason is that the spectra in the original mzXML are zlib compressed by PWiz and our Windows implementation cannot correctly uncompress. --- Commented by witek96 on 2013-09-05 15:42:26: The reason is that the data in the mzXML file is partially interpreted as 'nan' by our parser -- but only on Windows. Linux seems fine. This leads to this weird display behaviour. ??? are there nans in the data actually (? I am just checking with mzR and can't see any).
1.0
compressed mzXML leads to NaNs on Windows [607] - Submitted by witek96 on 2013-09-05 10:37:34 Plaese see attached image. On the left TOPPView 1.11 under windows on the right TOPPView 1.11 on Linux displaying exactly the same file. I do not attach the mzXML, but can provide it to the ticket taker. regards --- Commented by cbielow on 2013-09-05 11:28:37: So, your picture is trying to say that both the intensities and even the presence of peaks is different?! Or is it the axis labeling or what? The spectrum selection indicates that you might compare apples with oranges (#8 on Win, #9 on Linux), but it might just be a pre-selection. Impossible to tell just from the picture. So, can you please confirm that we see the exact same spectrum. If its just this spectrum, you can easily attach it, since its just a few KBs. --- Commented by witek96 on 2013-09-05 11:43:53: Hi Chris, I see you doubt. I doubted too but. Please find attached the 2D view of the same dataset on windows and linux. --- Commented by witek96 on 2013-09-05 11:54:04: It is not fun creating tickets attaching screenshots (especially having a limit of 250K). --- Commented by witek96 on 2013-09-05 11:58:13: I send you the mzXML file with our Cifex download. regards --- Commented by cbielow on 2013-09-05 14:17:46: I can confirm the screenshots (both 1D and 2D). The reason is that the data in the mzXML file is partially interpreted as 'nan' by our parser -- but only on Windows. Linux seems fine. This leads to this weird display behaviour. Workaround: convert this mzXML file using PWiz to mz(X)ML (both work) WITHOUT COMPRESSION, which OpenMS can then read. When converting the original mzXML to mzML using FileConverter, the nan's remain. Creating dta2D gives data like # SEC MZ INT 3000.06 nan 2.59712 3000.06 nan nan 3000.06 nan nan 3000.06 nan nan 3000.06 69.0713154294649 15.8147 So, the reason is that the spectra in the original mzXML are zlib compressed by PWiz and our Windows implementation cannot correctly uncompress. --- Commented by witek96 on 2013-09-05 15:42:26: The reason is that the data in the mzXML file is partially interpreted as 'nan' by our parser -- but only on Windows. Linux seems fine. This leads to this weird display behaviour. ??? are there nans in the data actually (? I am just checking with mzR and can't see any).
defect
compressed mzxml leads to nans on windows submitted by on plaese see attached image on the left toppview under windows on the right toppview on linux displaying exactly the same file i do not attach the mzxml but can provide it to the ticket taker regards commented by cbielow on so your picture is trying to say that both the intensities and even the presence of peaks is different or is it the axis labeling or what the spectrum selection indicates that you might compare apples with oranges on win on linux but it might just be a pre selection impossible to tell just from the picture so can you please confirm that we see the exact same spectrum if its just this spectrum you can easily attach it since its just a few kbs commented by on hi chris i see you doubt i doubted too but please find attached the view of the same dataset on windows and linux commented by on it is not fun creating tickets attaching screenshots especially having a limit of commented by on i send you the mzxml file with our cifex download regards commented by cbielow on i can confirm the screenshots both and the reason is that the data in the mzxml file is partially interpreted as nan by our parser but only on windows linux seems fine this leads to this weird display behaviour workaround convert this mzxml file using pwiz to mz x ml both work without compression which openms can then read when converting the original mzxml to mzml using fileconverter the nan s remain creating gives data like sec mz int nan nan nan nan nan nan nan so the reason is that the spectra in the original mzxml are zlib compressed by pwiz and our windows implementation cannot correctly uncompress commented by on the reason is that the data in the mzxml file is partially interpreted as nan by our parser but only on windows linux seems fine this leads to this weird display behaviour are there nans in the data actually i am just checking with mzr and can t see any
1
82,264
7,835,895,744
IssuesEvent
2018-06-17 12:55:46
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
System.Linq.Expressions tests disabled with [Theory(Skip = "870811")]
area-System.Linq.Expressions test bug
System.Linq.Expressions.Tests.LambdaTests.InvokeComputedLambda [SKIP] System.Linq.Expressions.Tests.Compiler_Tests.CallOnCapturedInstance [SKIP] System.Linq.Expressions.Tests.Compiler_Tests.NewExpressionwithMemberAssignInit [SKIP] System.Linq.Expressions.Tests.Compiler_Tests.ArrayInitializedWithCapturedInstance [SKIP] System.Linq.Expressions.Tests.Compiler_Tests.TestAndAlso [SKIP] This number doesn't mean anything.
1.0
System.Linq.Expressions tests disabled with [Theory(Skip = "870811")] - System.Linq.Expressions.Tests.LambdaTests.InvokeComputedLambda [SKIP] System.Linq.Expressions.Tests.Compiler_Tests.CallOnCapturedInstance [SKIP] System.Linq.Expressions.Tests.Compiler_Tests.NewExpressionwithMemberAssignInit [SKIP] System.Linq.Expressions.Tests.Compiler_Tests.ArrayInitializedWithCapturedInstance [SKIP] System.Linq.Expressions.Tests.Compiler_Tests.TestAndAlso [SKIP] This number doesn't mean anything.
non_defect
system linq expressions tests disabled with system linq expressions tests lambdatests invokecomputedlambda system linq expressions tests compiler tests calloncapturedinstance system linq expressions tests compiler tests newexpressionwithmemberassigninit system linq expressions tests compiler tests arrayinitializedwithcapturedinstance system linq expressions tests compiler tests testandalso this number doesn t mean anything
0
348,965
24,928,317,803
IssuesEvent
2022-10-31 09:28:17
devriesewouter89/CoGhentToiletPaper
https://api.github.com/repos/devriesewouter89/CoGhentToiletPaper
opened
documentation: plans for suction table
documentation
create a dxf or pdf with routing form to create suction table
1.0
documentation: plans for suction table - create a dxf or pdf with routing form to create suction table
non_defect
documentation plans for suction table create a dxf or pdf with routing form to create suction table
0
32,253
15,299,385,954
IssuesEvent
2021-02-24 10:53:28
dynawo/dynawo
https://api.github.com/repos/dynawo/dynawo
closed
Extend the CPP restorative load component to add voltage limitations
DynaFlow Enhancement Models library Performance
In its current state, the restorative load component available in the network model doesn't include any voltage limitations: it means that the restoration is done whatever is the voltage value. A more realistic model will also implement and use minimal and maximal voltage limitations. Outside these values, the load will then behave as a pure alpha-beta load. Even if this model's improvement is more realistic, it should remain optional for applications not wanting to use it at first stage (DynaWaltz for example). It means that the values should remain optional in the parameters set and the associated g and z shouldn't be set and evaluated in case the limits are defined.
True
Extend the CPP restorative load component to add voltage limitations - In its current state, the restorative load component available in the network model doesn't include any voltage limitations: it means that the restoration is done whatever is the voltage value. A more realistic model will also implement and use minimal and maximal voltage limitations. Outside these values, the load will then behave as a pure alpha-beta load. Even if this model's improvement is more realistic, it should remain optional for applications not wanting to use it at first stage (DynaWaltz for example). It means that the values should remain optional in the parameters set and the associated g and z shouldn't be set and evaluated in case the limits are defined.
non_defect
extend the cpp restorative load component to add voltage limitations in its current state the restorative load component available in the network model doesn t include any voltage limitations it means that the restoration is done whatever is the voltage value a more realistic model will also implement and use minimal and maximal voltage limitations outside these values the load will then behave as a pure alpha beta load even if this model s improvement is more realistic it should remain optional for applications not wanting to use it at first stage dynawaltz for example it means that the values should remain optional in the parameters set and the associated g and z shouldn t be set and evaluated in case the limits are defined
0
801,697
28,498,912,796
IssuesEvent
2023-04-18 15:53:35
chaotic-aur/packages
https://api.github.com/repos/chaotic-aur/packages
closed
[Request] qlcplus
request:new-pkg priority:low
### Link to the package base(s) in the AUR http://aur.archlinux.org/packages/qlcplus ### Utility this package has for you Q Light Controller Plus - The open DMX lighting desk software for controlling professional lighting fixtures. ### Do you consider the package(s) to be useful for every Chaotic-AUR user? YES! ### Do you consider the package to be useful for feature testing/preview? - [ ] Yes ### Have you tested if the package builds in a clean chroot? - [ ] Yes ### Does the package's license allow redistributing it? YES! ### Have you searched the issues to ensure this request is unique? - [X] YES! ### Have you read the README to ensure this package is not banned? - [X] YES! ### More information _No response_
1.0
[Request] qlcplus - ### Link to the package base(s) in the AUR http://aur.archlinux.org/packages/qlcplus ### Utility this package has for you Q Light Controller Plus - The open DMX lighting desk software for controlling professional lighting fixtures. ### Do you consider the package(s) to be useful for every Chaotic-AUR user? YES! ### Do you consider the package to be useful for feature testing/preview? - [ ] Yes ### Have you tested if the package builds in a clean chroot? - [ ] Yes ### Does the package's license allow redistributing it? YES! ### Have you searched the issues to ensure this request is unique? - [X] YES! ### Have you read the README to ensure this package is not banned? - [X] YES! ### More information _No response_
non_defect
qlcplus link to the package base s in the aur utility this package has for you q light controller plus the open dmx lighting desk software for controlling professional lighting fixtures do you consider the package s to be useful for every chaotic aur user yes do you consider the package to be useful for feature testing preview yes have you tested if the package builds in a clean chroot yes does the package s license allow redistributing it yes have you searched the issues to ensure this request is unique yes have you read the readme to ensure this package is not banned yes more information no response
0
304,837
26,339,194,161
IssuesEvent
2023-01-10 16:24:51
internetarchive/wcdimportbot
https://api.github.com/repos/internetarchive/wcdimportbot
closed
as a developer I want a helper script that runs the test coverage and saves the report in TEST_COVERAGE.txt
backend testing
This should be done on every pull request as part of the preparation for merge
1.0
as a developer I want a helper script that runs the test coverage and saves the report in TEST_COVERAGE.txt - This should be done on every pull request as part of the preparation for merge
non_defect
as a developer i want a helper script that runs the test coverage and saves the report in test coverage txt this should be done on every pull request as part of the preparation for merge
0
239,769
26,232,071,141
IssuesEvent
2023-01-05 01:44:17
KDWSS/dd-trace-java
https://api.github.com/repos/KDWSS/dd-trace-java
opened
CVE-2022-3509 (High) detected in protobuf-java-3.11.4.jar
security vulnerability
## CVE-2022-3509 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>protobuf-java-3.11.4.jar</b></p></summary> <p>Core Protocol Buffers library. Protocol Buffers are a way of encoding structured data in an efficient yet extensible format.</p> <p>Library home page: <a href="https://developers.google.com/protocol-buffers/">https://developers.google.com/protocol-buffers/</a></p> <p>Path to dependency file: /dd-java-agent/instrumentation/jdbc/jdbc.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.protobuf/protobuf-java/3.11.4/7ec0925cc3aef0335bbc7d57edfd42b0f86f8267/protobuf-java-3.11.4.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.protobuf/protobuf-java/3.11.4/7ec0925cc3aef0335bbc7d57edfd42b0f86f8267/protobuf-java-3.11.4.jar</p> <p> Dependency Hierarchy: - mysql-connector-java-8.0.23.jar (Root Library) - :x: **protobuf-java-3.11.4.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/KDWSS/dd-trace-java/commit/2819174635979a19573ec0ce8e3e2b63a3848079">2819174635979a19573ec0ce8e3e2b63a3848079</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A parsing issue similar to CVE-2022-3171, but with textformat in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields causes objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above. <p>Publish Date: 2022-12-12 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3509>CVE-2022-3509</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3509">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3509</a></p> <p>Release Date: 2022-12-12</p> <p>Fix Resolution (com.google.protobuf:protobuf-java): 3.16.3</p> <p>Direct dependency fix Resolution (mysql:mysql-connector-java): 8.0.29</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
True
CVE-2022-3509 (High) detected in protobuf-java-3.11.4.jar - ## CVE-2022-3509 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>protobuf-java-3.11.4.jar</b></p></summary> <p>Core Protocol Buffers library. Protocol Buffers are a way of encoding structured data in an efficient yet extensible format.</p> <p>Library home page: <a href="https://developers.google.com/protocol-buffers/">https://developers.google.com/protocol-buffers/</a></p> <p>Path to dependency file: /dd-java-agent/instrumentation/jdbc/jdbc.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.protobuf/protobuf-java/3.11.4/7ec0925cc3aef0335bbc7d57edfd42b0f86f8267/protobuf-java-3.11.4.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.protobuf/protobuf-java/3.11.4/7ec0925cc3aef0335bbc7d57edfd42b0f86f8267/protobuf-java-3.11.4.jar</p> <p> Dependency Hierarchy: - mysql-connector-java-8.0.23.jar (Root Library) - :x: **protobuf-java-3.11.4.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/KDWSS/dd-trace-java/commit/2819174635979a19573ec0ce8e3e2b63a3848079">2819174635979a19573ec0ce8e3e2b63a3848079</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A parsing issue similar to CVE-2022-3171, but with textformat in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields causes objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above. <p>Publish Date: 2022-12-12 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3509>CVE-2022-3509</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3509">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3509</a></p> <p>Release Date: 2022-12-12</p> <p>Fix Resolution (com.google.protobuf:protobuf-java): 3.16.3</p> <p>Direct dependency fix Resolution (mysql:mysql-connector-java): 8.0.29</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
non_defect
cve high detected in protobuf java jar cve high severity vulnerability vulnerable library protobuf java jar core protocol buffers library protocol buffers are a way of encoding structured data in an efficient yet extensible format library home page a href path to dependency file dd java agent instrumentation jdbc jdbc gradle path to vulnerable library home wss scanner gradle caches modules files com google protobuf protobuf java protobuf java jar home wss scanner gradle caches modules files com google protobuf protobuf java protobuf java jar dependency hierarchy mysql connector java jar root library x protobuf java jar vulnerable library found in head commit a href found in base branch master vulnerability details a parsing issue similar to cve but with textformat in protobuf java core and lite versions prior to and can lead to a denial of service attack inputs containing multiple instances of non repeated embedded messages with repeated or unknown fields causes objects to be converted back n forth between mutable and immutable forms resulting in potentially long garbage collection pauses we recommend updating to the versions mentioned above publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com google protobuf protobuf java direct dependency fix resolution mysql mysql connector java rescue worker helmet automatic remediation is available for this issue
0
713,463
24,528,878,021
IssuesEvent
2022-10-11 14:57:31
AY2223S1-CS2103T-T11-3/tp
https://api.github.com/repos/AY2223S1-CS2103T-T11-3/tp
closed
Marking of tasks not displayed in UI
type.Bug priority.High severity.High
Using the `marktask` command, the marked task is not immediately updated in the UI.
1.0
Marking of tasks not displayed in UI - Using the `marktask` command, the marked task is not immediately updated in the UI.
non_defect
marking of tasks not displayed in ui using the marktask command the marked task is not immediately updated in the ui
0
366,174
25,571,769,572
IssuesEvent
2022-11-30 18:17:20
ntpa/banking
https://api.github.com/repos/ntpa/banking
closed
Update README with specific PostgreSQL Installation options
documentation
Current Credentials in ```creds.hpp``` assume default server configuration(i.e. default port for PostgreSQL instance)
1.0
Update README with specific PostgreSQL Installation options - Current Credentials in ```creds.hpp``` assume default server configuration(i.e. default port for PostgreSQL instance)
non_defect
update readme with specific postgresql installation options current credentials in creds hpp assume default server configuration i e default port for postgresql instance
0
165,524
14,004,835,921
IssuesEvent
2020-10-28 17:36:58
gbyeon/DSP
https://api.github.com/repos/gbyeon/DSP
opened
Current version
documentation
- supports non-coupling quadratic constraints in second stage - can be solved as a deterministic problem or via dual decomposition using cplex
1.0
Current version - - supports non-coupling quadratic constraints in second stage - can be solved as a deterministic problem or via dual decomposition using cplex
non_defect
current version supports non coupling quadratic constraints in second stage can be solved as a deterministic problem or via dual decomposition using cplex
0
25,661
12,703,245,596
IssuesEvent
2020-06-22 21:51:16
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
HTTP2: optimize HPACK decoding.
area-System.Net.Http enhancement tenet-performance
Our HPACK decoder looks like it can be improved. We should write some benchmarks and see if any optimizations can be made.
True
HTTP2: optimize HPACK decoding. - Our HPACK decoder looks like it can be improved. We should write some benchmarks and see if any optimizations can be made.
non_defect
optimize hpack decoding our hpack decoder looks like it can be improved we should write some benchmarks and see if any optimizations can be made
0
203,897
23,192,022,104
IssuesEvent
2022-08-01 13:25:51
jgeraigery/Baragon
https://api.github.com/repos/jgeraigery/Baragon
opened
WS-2019-0379 (Medium) detected in commons-codec-1.10.jar
security vulnerability
## WS-2019-0379 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-codec-1.10.jar</b></p></summary> <p>The Apache Commons Codec package contains simple encoder and decoders for various formats such as Base64 and Hexadecimal. In addition to these widely used encoders and decoders, the codec package also maintains a collection of phonetic encoding utilities.</p> <p>Path to dependency file: /BaragonAgentService/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar,/home/wss-scanner/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar,/home/wss-scanner/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar,/home/wss-scanner/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar,/home/wss-scanner/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar,/home/wss-scanner/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar</p> <p> Dependency Hierarchy: - HorizonCore-0.1.2.jar (Root Library) - :x: **commons-codec-1.10.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/Baragon/commit/6af46aa96e54f0bae338e980e47c42b98ceaa57d">6af46aa96e54f0bae338e980e47c42b98ceaa57d</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Apache commons-codec before version “commons-codec-1.13-RC1” is vulnerable to information disclosure due to Improper Input validation. <p>Publish Date: 2019-05-20 <p>URL: <a href=https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113>WS-2019-0379</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2019-05-20</p> <p>Fix Resolution: commons-codec:commons-codec:1.13</p> </p> </details> <p></p>
True
WS-2019-0379 (Medium) detected in commons-codec-1.10.jar - ## WS-2019-0379 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-codec-1.10.jar</b></p></summary> <p>The Apache Commons Codec package contains simple encoder and decoders for various formats such as Base64 and Hexadecimal. In addition to these widely used encoders and decoders, the codec package also maintains a collection of phonetic encoding utilities.</p> <p>Path to dependency file: /BaragonAgentService/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar,/home/wss-scanner/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar,/home/wss-scanner/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar,/home/wss-scanner/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar,/home/wss-scanner/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar,/home/wss-scanner/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar</p> <p> Dependency Hierarchy: - HorizonCore-0.1.2.jar (Root Library) - :x: **commons-codec-1.10.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/Baragon/commit/6af46aa96e54f0bae338e980e47c42b98ceaa57d">6af46aa96e54f0bae338e980e47c42b98ceaa57d</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Apache commons-codec before version “commons-codec-1.13-RC1” is vulnerable to information disclosure due to Improper Input validation. <p>Publish Date: 2019-05-20 <p>URL: <a href=https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113>WS-2019-0379</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2019-05-20</p> <p>Fix Resolution: commons-codec:commons-codec:1.13</p> </p> </details> <p></p>
non_defect
ws medium detected in commons codec jar ws medium severity vulnerability vulnerable library commons codec jar the apache commons codec package contains simple encoder and decoders for various formats such as and hexadecimal in addition to these widely used encoders and decoders the codec package also maintains a collection of phonetic encoding utilities path to dependency file baragonagentservice pom xml path to vulnerable library home wss scanner repository commons codec commons codec commons codec jar home wss scanner repository commons codec commons codec commons codec jar home wss scanner repository commons codec commons codec commons codec jar home wss scanner repository commons codec commons codec commons codec jar home wss scanner repository commons codec commons codec commons codec jar home wss scanner repository commons codec commons codec commons codec jar dependency hierarchy horizoncore jar root library x commons codec jar vulnerable library found in head commit a href found in base branch master vulnerability details apache commons codec before version “commons codec ” is vulnerable to information disclosure due to improper input validation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution commons codec commons codec
0
39,610
9,562,036,790
IssuesEvent
2019-05-04 04:49:19
DependencyTrack/dependency-track
https://api.github.com/repos/DependencyTrack/dependency-track
closed
Auto-Created Projects With Empty Name
defect p2 pending release
### Issue Type: - [X] defect report - [ ] enhancement request ### Current Behavior: Having set up pipeline jobs that use autocreate with `projectName` and `projectversion`, I see over 100 projects succesfully created overnight in Dependency-Track server as the matching Jenkins pipeline jobs run. However, the implemenation bases `projectName` on Maven project.name - and two projects have been created in DT with blank names (each with a different version). The problem is that, without a name, DT server provides no link to the project and without a link there is: - No way to inspect the project in DT and work out what the source was (where one could fix the problem) - No way to delete the project via the UI- there's no knowledge of the uuid. I am sure that I can track things down by using REST to produce the project list (including uuid). However, I log this for completeness. ### Steps to Reproduce (if defect): Use dependency-track plugin to auto-create a project using: - `projectName `= null (I am assuming that the problem is when maven project.name does not exist). - `projectversion` = something valid Examine project in UI. ### Expected Behavior: **Either**: Dependency Track creates project with a dummy name, thus providing a link to the project. **Or**: Dependency-Track does not create the project and returns REST response 400. ### Environment: - Dependency-Track Version: 3.4.0 - Distribution: Executable WAR - dependency-track-plugin: 2.1.0 - cyclonedx-maven-plugin: 1.3.1
1.0
Auto-Created Projects With Empty Name - ### Issue Type: - [X] defect report - [ ] enhancement request ### Current Behavior: Having set up pipeline jobs that use autocreate with `projectName` and `projectversion`, I see over 100 projects succesfully created overnight in Dependency-Track server as the matching Jenkins pipeline jobs run. However, the implemenation bases `projectName` on Maven project.name - and two projects have been created in DT with blank names (each with a different version). The problem is that, without a name, DT server provides no link to the project and without a link there is: - No way to inspect the project in DT and work out what the source was (where one could fix the problem) - No way to delete the project via the UI- there's no knowledge of the uuid. I am sure that I can track things down by using REST to produce the project list (including uuid). However, I log this for completeness. ### Steps to Reproduce (if defect): Use dependency-track plugin to auto-create a project using: - `projectName `= null (I am assuming that the problem is when maven project.name does not exist). - `projectversion` = something valid Examine project in UI. ### Expected Behavior: **Either**: Dependency Track creates project with a dummy name, thus providing a link to the project. **Or**: Dependency-Track does not create the project and returns REST response 400. ### Environment: - Dependency-Track Version: 3.4.0 - Distribution: Executable WAR - dependency-track-plugin: 2.1.0 - cyclonedx-maven-plugin: 1.3.1
defect
auto created projects with empty name issue type defect report enhancement request current behavior having set up pipeline jobs that use autocreate with projectname and projectversion i see over projects succesfully created overnight in dependency track server as the matching jenkins pipeline jobs run however the implemenation bases projectname on maven project name and two projects have been created in dt with blank names each with a different version the problem is that without a name dt server provides no link to the project and without a link there is no way to inspect the project in dt and work out what the source was where one could fix the problem no way to delete the project via the ui there s no knowledge of the uuid i am sure that i can track things down by using rest to produce the project list including uuid however i log this for completeness steps to reproduce if defect use dependency track plugin to auto create a project using projectname null i am assuming that the problem is when maven project name does not exist projectversion something valid examine project in ui expected behavior either dependency track creates project with a dummy name thus providing a link to the project or dependency track does not create the project and returns rest response environment dependency track version distribution executable war dependency track plugin cyclonedx maven plugin
1
11,257
2,644,941,730
IssuesEvent
2015-03-12 19:43:18
acardona/CATMAID
https://api.github.com/repos/acardona/CATMAID
closed
Wrong count in "N outputs" in Measurement table
difficulty: low priority: important type: defect
The number is not consistent with what is shown in the Info box for the skeleton, neither its Presynaptic sites, nor its Downstream skeletons count.
1.0
Wrong count in "N outputs" in Measurement table - The number is not consistent with what is shown in the Info box for the skeleton, neither its Presynaptic sites, nor its Downstream skeletons count.
defect
wrong count in n outputs in measurement table the number is not consistent with what is shown in the info box for the skeleton neither its presynaptic sites nor its downstream skeletons count
1
8,843
2,612,906,860
IssuesEvent
2015-02-27 17:26:11
chrsmith/windows-package-manager
https://api.github.com/repos/chrsmith/windows-package-manager
closed
Dropbox 1.4.9 setup hash change
auto-migrated Milestone-End_Of_Month Type-Defect
``` With Dropbox 1.4.9: Hash sum (SHA1) e0fe168472525ee3c4b066d8c28e84f5a80bf0c2 found, but ca5e1db372e0c3f5fc52c996091f35d1a4108b8b was expected. The file has changed. You already have the update scanner, perhaps a checksum watchdog would be a good idea, too? Or a button in npackd to create reports for common situations like these. ``` Original issue reported on code.google.com by `dtra...@gmail.com` on 1 Aug 2012 at 1:58
1.0
Dropbox 1.4.9 setup hash change - ``` With Dropbox 1.4.9: Hash sum (SHA1) e0fe168472525ee3c4b066d8c28e84f5a80bf0c2 found, but ca5e1db372e0c3f5fc52c996091f35d1a4108b8b was expected. The file has changed. You already have the update scanner, perhaps a checksum watchdog would be a good idea, too? Or a button in npackd to create reports for common situations like these. ``` Original issue reported on code.google.com by `dtra...@gmail.com` on 1 Aug 2012 at 1:58
defect
dropbox setup hash change with dropbox hash sum found but was expected the file has changed you already have the update scanner perhaps a checksum watchdog would be a good idea too or a button in npackd to create reports for common situations like these original issue reported on code google com by dtra gmail com on aug at
1
20,177
3,309,788,662
IssuesEvent
2015-11-05 03:42:24
macvim-dev/macvim
https://api.github.com/repos/macvim-dev/macvim
closed
With Snapshot 74 on OS X ≤ 10.9, no editing window appears
auto-migrated Priority-Medium Type-Defect
_From @GoogleCodeExporter on March 16, 2015 9:26_ ``` MacVim Snapshot 74, OS X 10.9.5. Launch MacVim — no window appears. Invoke ⌘N New Window — no window appears. Note that ⌘, Preferences still works, though. If I run /Applications/MacVim.app/Contents/MacOS/MacVim at the shell, I see the error message (and it hangs until I press Ctrl-C): dyld: Library not loaded: /System/Library/Perl/5.18/darwin-thread-multi-2level/CORE/libperl.dylib Referenced from: /Applications/MacVim.app/Contents/MacOS/Vim Reason: Incompatible library version: Vim requires version 5.18.0 or later, but libperl.dylib provides version 5.16.0 ^C Taking the hint, if I transplant /System/Library/Perl/5.18 from an OS X 10.10 machine onto 10.9, then it works. I consider this to be a bug since 1) Info.plist does not require a Minimum System Version. 2) Nor did MacVim come bundled with Perl 5.18 3) MacVim did not degrade gracefully with the lack of the exact version of Perl. ``` Original issue reported on code.google.com by `dwp...@gmail.com` on 6 Mar 2015 at 8:29 _Copied from original issue: douglasdrumond/macvim#530_
1.0
With Snapshot 74 on OS X ≤ 10.9, no editing window appears - _From @GoogleCodeExporter on March 16, 2015 9:26_ ``` MacVim Snapshot 74, OS X 10.9.5. Launch MacVim — no window appears. Invoke ⌘N New Window — no window appears. Note that ⌘, Preferences still works, though. If I run /Applications/MacVim.app/Contents/MacOS/MacVim at the shell, I see the error message (and it hangs until I press Ctrl-C): dyld: Library not loaded: /System/Library/Perl/5.18/darwin-thread-multi-2level/CORE/libperl.dylib Referenced from: /Applications/MacVim.app/Contents/MacOS/Vim Reason: Incompatible library version: Vim requires version 5.18.0 or later, but libperl.dylib provides version 5.16.0 ^C Taking the hint, if I transplant /System/Library/Perl/5.18 from an OS X 10.10 machine onto 10.9, then it works. I consider this to be a bug since 1) Info.plist does not require a Minimum System Version. 2) Nor did MacVim come bundled with Perl 5.18 3) MacVim did not degrade gracefully with the lack of the exact version of Perl. ``` Original issue reported on code.google.com by `dwp...@gmail.com` on 6 Mar 2015 at 8:29 _Copied from original issue: douglasdrumond/macvim#530_
defect
with snapshot on os x ≤ no editing window appears from googlecodeexporter on march macvim snapshot os x launch macvim — no window appears invoke ⌘n new window — no window appears note that ⌘ preferences still works though if i run applications macvim app contents macos macvim at the shell i see the error message and it hangs until i press ctrl c dyld library not loaded system library perl darwin thread multi core libperl dylib referenced from applications macvim app contents macos vim reason incompatible library version vim requires version or later but libperl dylib provides version c taking the hint if i transplant system library perl from an os x machine onto then it works i consider this to be a bug since info plist does not require a minimum system version nor did macvim come bundled with perl macvim did not degrade gracefully with the lack of the exact version of perl original issue reported on code google com by dwp gmail com on mar at copied from original issue douglasdrumond macvim
1
66,961
20,779,631,359
IssuesEvent
2022-03-16 13:43:42
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
opened
2 selections in Element Android options not translated
T-Defect
### Steps to reproduce 1. Start Element android app 2. log in to your account 3. skip verify process 4. tap te 3 dots (top right) 5. result is visible in the screenshot ![Screenshot_20220316-131112](https://user-images.githubusercontent.com/92784377/158602246-b60dee5d-c285-41a2-955a-043c5a84dee3.png) 6. Two strings in the options menu are not translated in selected german ui: "Do a legacy init sync " and "Do an optimized init sync" ### Outcome #### What did you expect? Fully translated options (selections 3 dots) #### What happened instead? Mixed german and english selections ### Your phone model Google Pixel 6 pro ### Operating system version Android 12 Graphene Os ### Application version and app store 1.4.4 Fdroid ### Homeserver matrix.org ### Will you send logs? No
1.0
2 selections in Element Android options not translated - ### Steps to reproduce 1. Start Element android app 2. log in to your account 3. skip verify process 4. tap te 3 dots (top right) 5. result is visible in the screenshot ![Screenshot_20220316-131112](https://user-images.githubusercontent.com/92784377/158602246-b60dee5d-c285-41a2-955a-043c5a84dee3.png) 6. Two strings in the options menu are not translated in selected german ui: "Do a legacy init sync " and "Do an optimized init sync" ### Outcome #### What did you expect? Fully translated options (selections 3 dots) #### What happened instead? Mixed german and english selections ### Your phone model Google Pixel 6 pro ### Operating system version Android 12 Graphene Os ### Application version and app store 1.4.4 Fdroid ### Homeserver matrix.org ### Will you send logs? No
defect
selections in element android options not translated steps to reproduce start element android app log in to your account skip verify process tap te dots top right result is visible in the screenshot two strings in the options menu are not translated in selected german ui do a legacy init sync and do an optimized init sync outcome what did you expect fully translated options selections dots what happened instead mixed german and english selections your phone model google pixel pro operating system version android graphene os application version and app store fdroid homeserver matrix org will you send logs no
1
98,698
11,093,026,172
IssuesEvent
2019-12-15 23:01:42
opencv/opencv
https://api.github.com/repos/opencv/opencv
closed
Incomplete Python BRIEF Example
category: documentation feature
##### System information (version) - OpenCV => 4.0.1 - Operating System / Platform => MacOS 10.14 Mojave - Compiler => Xcode 10.1 ##### Detailed description The documentation on BRIEF is painfully incomplete. Unlike its peers in the series, not only does it lack explanation for STAR, but it also doesn't show any original or processed image despite the imread in the code. ##### Steps to reproduce You can read the documentation at https://docs.opencv.org/4.0.1/dc/d7d/tutorial_py_brief.html. The Markdown source is https://github.com/opencv/opencv/blob/master/doc/py_tutorials/py_feature2d/py_brief/py_brief.markdown.
1.0
Incomplete Python BRIEF Example - ##### System information (version) - OpenCV => 4.0.1 - Operating System / Platform => MacOS 10.14 Mojave - Compiler => Xcode 10.1 ##### Detailed description The documentation on BRIEF is painfully incomplete. Unlike its peers in the series, not only does it lack explanation for STAR, but it also doesn't show any original or processed image despite the imread in the code. ##### Steps to reproduce You can read the documentation at https://docs.opencv.org/4.0.1/dc/d7d/tutorial_py_brief.html. The Markdown source is https://github.com/opencv/opencv/blob/master/doc/py_tutorials/py_feature2d/py_brief/py_brief.markdown.
non_defect
incomplete python brief example system information version opencv operating system platform macos mojave compiler xcode detailed description the documentation on brief is painfully incomplete unlike its peers in the series not only does it lack explanation for star but it also doesn t show any original or processed image despite the imread in the code steps to reproduce you can read the documentation at the markdown source is
0