Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 844 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 12 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 248k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
126,469 | 17,892,210,396 | IssuesEvent | 2021-09-08 02:12:42 | matteobaccan/Web3jClient | https://api.github.com/repos/matteobaccan/Web3jClient | opened | CVE-2020-24750 (High) detected in jackson-databind-2.8.1.jar | security vulnerability | ## CVE-2020-24750 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Web3jClient/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.1/jackson-databind-2.8.1.jar</p>
<p>
Dependency Hierarchy:
- core-0.5.2.jar (Root Library)
- :x: **jackson-databind-2.8.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/matteobaccan/Web3jClient/commit/fa4e59f4e4e6a1a132c43e14cbbd748d6a32fe48">fa4e59f4e4e6a1a132c43e14cbbd748d6a32fe48</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.6 mishandles the interaction between serialization gadgets and typing, related to com.pastdev.httpcomponents.configuration.JndiConfiguration.
<p>Publish Date: 2020-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24750>CVE-2020-24750</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-24616">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-24616</a></p>
<p>Release Date: 2020-08-28</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-24750 (High) detected in jackson-databind-2.8.1.jar - ## CVE-2020-24750 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Web3jClient/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.1/jackson-databind-2.8.1.jar</p>
<p>
Dependency Hierarchy:
- core-0.5.2.jar (Root Library)
- :x: **jackson-databind-2.8.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/matteobaccan/Web3jClient/commit/fa4e59f4e4e6a1a132c43e14cbbd748d6a32fe48">fa4e59f4e4e6a1a132c43e14cbbd748d6a32fe48</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.6 mishandles the interaction between serialization gadgets and typing, related to com.pastdev.httpcomponents.configuration.JndiConfiguration.
<p>Publish Date: 2020-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24750>CVE-2020-24750</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-24616">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-24616</a></p>
<p>Release Date: 2020-08-28</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy core jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com pastdev httpcomponents configuration jndiconfiguration publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource | 0 |
217 | 5,415,605,487 | IssuesEvent | 2017-03-01 22:00:48 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Sockets test failures SendToAsyncV4IPEndPointToV6Host_NotReceived & BeginSendToV4IPEndPointToV6Host_NotReceived | area-System.Net.Sockets os-mac-os-x tenet-reliability | ```
Test Result (2 failures / +2)
```
System.Net.Sockets.Tests.DualMode.SendToAsyncV4IPEndPointToV6Host_NotReceived
System.Net.Sockets.Tests.DualMode.BeginSendToV4IPEndPointToV6Host_NotReceived
http://dotnet-ci.cloudapp.net/job/dotnet_corefx/job/release_1.0.0/job/osx_debug_prtest/58/testReport/
Regression
System.Net.Sockets.Tests.DualMode.SendToAsyncV4IPEndPointToV6Host_NotReceived (from (empty))
Failing for the past 1 build (Since Failed#58 )
Took 0.31 sec.
Stacktrace
MESSAGE:
Assert.Throws() Failure\nExpected: typeof(System.TimeoutException)\nActual: typeof(Xunit.Sdk.EqualException): Assert.Equal() Failure\nExpected: 1\nActual: 0
+++++++++++++++++++
STACK TRACE:
at System.Net.Sockets.Tests.DualMode.DualModeSendToAsync_IPEndPointToHost_Helper(IPAddress connectTo, IPAddress listenOn, Boolean dualModeServer, Boolean expectedToTimeout) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/tests/FunctionalTests/DualModeSocketTest.cs:line 1359 at System.Net.Sockets.Tests.DualMode.<SendToAsyncV4IPEndPointToV6Host_NotReceived>b__119_0() in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/tests/FunctionalTests/DualModeSocketTest.cs:line 1311
Regression
System.Net.Sockets.Tests.DualMode.BeginSendToV4IPEndPointToV6Host_NotReceived (from (empty))
Failing for the past 1 build (Since Failed#58 )
Took 7 ms.
Stacktrace
MESSAGE:
Assert.Throws() Failure\nExpected: typeof(System.TimeoutException)\nActual: typeof(System.Net.Sockets.SocketException): Connection refused
+++++++++++++++++++
STACK TRACE:
at System.Net.Sockets.Socket.DoBeginSendTo(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags, EndPoint endPointSnapshot, SocketAddress socketAddress, OverlappedAsyncResult asyncResult) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs:line 3004 at System.Net.Sockets.Socket.BeginSendTo(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags, EndPoint remoteEP, AsyncCallback callback, Object state) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs:line 2942 at System.Net.Sockets.SocketTaskExtensions.<>c.<SendToAsync>b__12_0(ArraySegment`1 targetBuffer, SocketFlags flags, EndPoint endPoint, AsyncCallback callback, Object state) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/src/System/Net/Sockets/SocketTaskExtensions.cs:line 235 at System.Threading.Tasks.TaskFactory`1.FromAsyncImpl[TArg1,TArg2,TArg3](Func`6 beginMethod, Func`2 endFunction, Action`1 endAction, TArg1 arg1, TArg2 arg2, TArg3 arg3, Object state, TaskCreationOptions creationOptions) at System.Threading.Tasks.TaskFactory`1.FromAsync[TArg1,TArg2,TArg3](Func`6 beginMethod, Func`2 endMethod, TArg1 arg1, TArg2 arg2, TArg3 arg3, Object state) at System.Net.Sockets.SocketTaskExtensions.SendToAsync(Socket socket, ArraySegment`1 buffer, SocketFlags socketFlags, EndPoint remoteEndPoint) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/src/System/Net/Sockets/SocketTaskExtensions.cs:line 234 at System.Net.Sockets.SocketAPMExtensions.BeginSendTo(Socket socket, Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags, EndPoint remoteEP, AsyncCallback callback, Object state) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/Common/src/System/Net/Sockets/SocketAPMExtensions.cs:line 565 at 
System.Net.Sockets.Tests.DualMode.DualModeBeginSendTo_EndPointToHost_Helper(IPAddress connectTo, IPAddress listenOn, Boolean dualModeServer, Boolean expectedToTimeout) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/tests/FunctionalTests/DualModeSocketTest.cs:line 1234 at System.Net.Sockets.Tests.DualMode.<BeginSendToV4IPEndPointToV6Host_NotReceived>b__110_0() in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/tests/FunctionalTests/DualModeSocketTest.cs:line 1200
| True | Sockets test failures SendToAsyncV4IPEndPointToV6Host_NotReceived & BeginSendToV4IPEndPointToV6Host_NotReceived - ```
Test Result (2 failures / +2)
```
System.Net.Sockets.Tests.DualMode.SendToAsyncV4IPEndPointToV6Host_NotReceived
System.Net.Sockets.Tests.DualMode.BeginSendToV4IPEndPointToV6Host_NotReceived
http://dotnet-ci.cloudapp.net/job/dotnet_corefx/job/release_1.0.0/job/osx_debug_prtest/58/testReport/
Regression
System.Net.Sockets.Tests.DualMode.SendToAsyncV4IPEndPointToV6Host_NotReceived (from (empty))
Failing for the past 1 build (Since Failed#58 )
Took 0.31 sec.
Stacktrace
MESSAGE:
Assert.Throws() Failure\nExpected: typeof(System.TimeoutException)\nActual: typeof(Xunit.Sdk.EqualException): Assert.Equal() Failure\nExpected: 1\nActual: 0
+++++++++++++++++++
STACK TRACE:
at System.Net.Sockets.Tests.DualMode.DualModeSendToAsync_IPEndPointToHost_Helper(IPAddress connectTo, IPAddress listenOn, Boolean dualModeServer, Boolean expectedToTimeout) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/tests/FunctionalTests/DualModeSocketTest.cs:line 1359 at System.Net.Sockets.Tests.DualMode.<SendToAsyncV4IPEndPointToV6Host_NotReceived>b__119_0() in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/tests/FunctionalTests/DualModeSocketTest.cs:line 1311
Regression
System.Net.Sockets.Tests.DualMode.BeginSendToV4IPEndPointToV6Host_NotReceived (from (empty))
Failing for the past 1 build (Since Failed#58 )
Took 7 ms.
Stacktrace
MESSAGE:
Assert.Throws() Failure\nExpected: typeof(System.TimeoutException)\nActual: typeof(System.Net.Sockets.SocketException): Connection refused
+++++++++++++++++++
STACK TRACE:
at System.Net.Sockets.Socket.DoBeginSendTo(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags, EndPoint endPointSnapshot, SocketAddress socketAddress, OverlappedAsyncResult asyncResult) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs:line 3004 at System.Net.Sockets.Socket.BeginSendTo(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags, EndPoint remoteEP, AsyncCallback callback, Object state) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs:line 2942 at System.Net.Sockets.SocketTaskExtensions.<>c.<SendToAsync>b__12_0(ArraySegment`1 targetBuffer, SocketFlags flags, EndPoint endPoint, AsyncCallback callback, Object state) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/src/System/Net/Sockets/SocketTaskExtensions.cs:line 235 at System.Threading.Tasks.TaskFactory`1.FromAsyncImpl[TArg1,TArg2,TArg3](Func`6 beginMethod, Func`2 endFunction, Action`1 endAction, TArg1 arg1, TArg2 arg2, TArg3 arg3, Object state, TaskCreationOptions creationOptions) at System.Threading.Tasks.TaskFactory`1.FromAsync[TArg1,TArg2,TArg3](Func`6 beginMethod, Func`2 endMethod, TArg1 arg1, TArg2 arg2, TArg3 arg3, Object state) at System.Net.Sockets.SocketTaskExtensions.SendToAsync(Socket socket, ArraySegment`1 buffer, SocketFlags socketFlags, EndPoint remoteEndPoint) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/src/System/Net/Sockets/SocketTaskExtensions.cs:line 234 at System.Net.Sockets.SocketAPMExtensions.BeginSendTo(Socket socket, Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags, EndPoint remoteEP, AsyncCallback callback, Object state) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/Common/src/System/Net/Sockets/SocketAPMExtensions.cs:line 565 at 
System.Net.Sockets.Tests.DualMode.DualModeBeginSendTo_EndPointToHost_Helper(IPAddress connectTo, IPAddress listenOn, Boolean dualModeServer, Boolean expectedToTimeout) in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/tests/FunctionalTests/DualModeSocketTest.cs:line 1234 at System.Net.Sockets.Tests.DualMode.<BeginSendToV4IPEndPointToV6Host_NotReceived>b__110_0() in /Users/dotnet-bot/j/workspace/dotnet_corefx/release_1.0.0/osx_debug_prtest/src/System.Net.Sockets/tests/FunctionalTests/DualModeSocketTest.cs:line 1200
| non_priority | sockets test failures notreceived notreceived test result failures system net sockets tests dualmode notreceived system net sockets tests dualmode notreceived regression system net sockets tests dualmode notreceived from empty failing for the past build since failed took sec stacktrace message assert throws failure nexpected typeof system timeoutexception nactual typeof xunit sdk equalexception assert equal failure nexpected nactual stack trace at system net sockets tests dualmode dualmodesendtoasync ipendpointtohost helper ipaddress connectto ipaddress listenon boolean dualmodeserver boolean expectedtotimeout in users dotnet bot j workspace dotnet corefx release osx debug prtest src system net sockets tests functionaltests dualmodesockettest cs line at system net sockets tests dualmode b in users dotnet bot j workspace dotnet corefx release osx debug prtest src system net sockets tests functionaltests dualmodesockettest cs line regression system net sockets tests dualmode notreceived from empty failing for the past build since failed took ms stacktrace message assert throws failure nexpected typeof system timeoutexception nactual typeof system net sockets socketexception connection refused stack trace at system net sockets socket dobeginsendto byte buffer offset size socketflags socketflags endpoint endpointsnapshot socketaddress socketaddress overlappedasyncresult asyncresult in users dotnet bot j workspace dotnet corefx release osx debug prtest src system net sockets src system net sockets socket cs line at system net sockets socket beginsendto byte buffer offset size socketflags socketflags endpoint remoteep asynccallback callback object state in users dotnet bot j workspace dotnet corefx release osx debug prtest src system net sockets src system net sockets socket cs line at system net sockets sockettaskextensions c b arraysegment targetbuffer socketflags flags endpoint endpoint asynccallback callback object state in users dotnet bot j 
workspace dotnet corefx release osx debug prtest src system net sockets src system net sockets sockettaskextensions cs line at system threading tasks taskfactory fromasyncimpl func beginmethod func endfunction action endaction object state taskcreationoptions creationoptions at system threading tasks taskfactory fromasync func beginmethod func endmethod object state at system net sockets sockettaskextensions sendtoasync socket socket arraysegment buffer socketflags socketflags endpoint remoteendpoint in users dotnet bot j workspace dotnet corefx release osx debug prtest src system net sockets src system net sockets sockettaskextensions cs line at system net sockets socketapmextensions beginsendto socket socket byte buffer offset size socketflags socketflags endpoint remoteep asynccallback callback object state in users dotnet bot j workspace dotnet corefx release osx debug prtest src common src system net sockets socketapmextensions cs line at system net sockets tests dualmode dualmodebeginsendto endpointtohost helper ipaddress connectto ipaddress listenon boolean dualmodeserver boolean expectedtotimeout in users dotnet bot j workspace dotnet corefx release osx debug prtest src system net sockets tests functionaltests dualmodesockettest cs line at system net sockets tests dualmode b in users dotnet bot j workspace dotnet corefx release osx debug prtest src system net sockets tests functionaltests dualmodesockettest cs line | 0 |
242,583 | 26,277,738,316 | IssuesEvent | 2023-01-07 01:04:20 | mgh3326/nuber-eats-frontend | https://api.github.com/repos/mgh3326/nuber-eats-frontend | opened | CVE-2021-23382 (High) detected in postcss-8.2.5.tgz | security vulnerability | ## CVE-2021-23382 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-8.2.5.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-8.2.5.tgz">https://registry.npmjs.org/postcss/-/postcss-8.2.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- :x: **postcss-8.2.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mgh3326/nuber-eats-frontend/commit/1046064bc2bf5eb7a13909733eb987e81fbf1bbb">1046064bc2bf5eb7a13909733eb987e81fbf1bbb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: 8.2.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23382 (High) detected in postcss-8.2.5.tgz - ## CVE-2021-23382 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-8.2.5.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-8.2.5.tgz">https://registry.npmjs.org/postcss/-/postcss-8.2.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- :x: **postcss-8.2.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mgh3326/nuber-eats-frontend/commit/1046064bc2bf5eb7a13909733eb987e81fbf1bbb">1046064bc2bf5eb7a13909733eb987e81fbf1bbb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: 8.2.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in postcss tgz cve high severity vulnerability vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file package json path to vulnerable library node modules postcss package json dependency hierarchy x postcss tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
109,260 | 13,756,693,426 | IssuesEvent | 2020-10-06 20:21:12 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | [Design] Conduct My Documents usability testing sessions | design eFolders research vsa vsa-benefits-2 | ## Issue Description
_In order to validate that the My Documents design is intuitive to our user base, we need to a conduct task-based usability study to verify the UI and uncover opportunities for improvement._
---
## Tasks
- [ ] _Conduct usability test sessions_
- [ ] _Upload transcripts (scrubbed of PII) to GH_
- [ ] _Upload a topline summary to GH_
## Acceptance Criteria
- [ ] _Usability test sessions are completed_
- [ ] _Transcripts have been scrubbed of PII and uploaded to GH_
- [ ] _A link to the topline summary has been provided as a comment on this ticket_ | 1.0 | [Design] Conduct My Documents usability testing sessions - ## Issue Description
_In order to validate that the My Documents design is intuitive to our user base, we need to a conduct task-based usability study to verify the UI and uncover opportunities for improvement._
---
## Tasks
- [ ] _Conduct usability test sessions_
- [ ] _Upload transcripts (scrubbed of PII) to GH_
- [ ] _Upload a topline summary to GH_
## Acceptance Criteria
- [ ] _Usability test sessions are completed_
- [ ] _Transcripts have been scrubbed of PII and uploaded to GH_
- [ ] _A link to the topline summary has been provided as a comment on this ticket_ | non_priority | conduct my documents usability testing sessions issue description in order to validate that the my documents design is intuitive to our user base we need to a conduct task based usability study to verify the ui and uncover opportunities for improvement tasks conduct usability test sessions upload transcripts scrubbed of pii to gh upload a topline summary to gh acceptance criteria usability test sessions are completed transcripts have been scrubbed of pii and uploaded to gh a link to the topline summary has been provided as a comment on this ticket | 0 |
140,755 | 18,920,688,661 | IssuesEvent | 2021-11-17 01:03:12 | billmcchesney1/concord | https://api.github.com/repos/billmcchesney1/concord | opened | CVE-2021-23436 (High) detected in immer-1.10.0.tgz | security vulnerability | ## CVE-2021-23436 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>immer-1.10.0.tgz</b></p></summary>
<p>Create your next immutable state by mutating the current one</p>
<p>Library home page: <a href="https://registry.npmjs.org/immer/-/immer-1.10.0.tgz">https://registry.npmjs.org/immer/-/immer-1.10.0.tgz</a></p>
<p>Path to dependency file: concord/console2/package.json</p>
<p>Path to vulnerable library: concord/console2/node_modules/immer/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.3.tgz (Root Library)
- react-dev-utils-10.2.1.tgz
- :x: **immer-1.10.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package immer before 9.0.6. A type confusion vulnerability can lead to a bypass of CVE-2020-28477 when the user-provided keys used in the path parameter are arrays. In particular, this bypass is possible because the condition (p === "__proto__" || p === "constructor") in applyPatches_ returns false if p is ['__proto__'] (or ['constructor']). The === operator (strict equality operator) returns false if the operands have different type.
<p>Publish Date: 2021-09-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23436>CVE-2021-23436</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23436">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23436</a></p>
<p>Release Date: 2021-09-01</p>
<p>Fix Resolution: immer - 9.0.6</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"immer","packageVersion":"1.10.0","packageFilePaths":["/console2/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.4.3;react-dev-utils:10.2.1;immer:1.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"immer - 9.0.6"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23436","vulnerabilityDetails":"This affects the package immer before 9.0.6. A type confusion vulnerability can lead to a bypass of CVE-2020-28477 when the user-provided keys used in the path parameter are arrays. In particular, this bypass is possible because the condition (p \u003d\u003d\u003d \"__proto__\" || p \u003d\u003d\u003d \"constructor\") in applyPatches_ returns false if p is [\u0027__proto__\u0027] (or [\u0027constructor\u0027]). The \u003d\u003d\u003d operator (strict equality operator) returns false if the operands have different type.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23436","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | non_priority | 0
215,120 | 24,126,435,045 | IssuesEvent | 2022-09-21 01:10:08 | Killy85/game_ai_trainer | https://api.github.com/repos/Killy85/game_ai_trainer | opened | CVE-2022-35998 (Medium) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl | security vulnerability | ## CVE-2022-35998 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. If `EmptyTensorList` receives an input `element_shape` with more than one dimension, it gives a `CHECK` fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit c8ba76d48567aed347508e0552a257641931024d. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-35998>CVE-2022-35998</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-qhw4-wwr7-gjc5">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-qhw4-wwr7-gjc5</a></p>
<p>Release Date: 2022-09-16</p>
<p>Fix Resolution: tensorflow - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-cpu - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-gpu - 2.7.2,2.8.1,2.9.1,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | 0
147,190 | 13,202,668,755 | IssuesEvent | 2020-08-14 12:47:27 | proyecto7000/gimnasio | https://api.github.com/repos/proyecto7000/gimnasio | opened | Add formal colors to the page | documentation | The page looks very dull. More colors need to be added.
Markdown format example
**Bold**
_Italic_
* item
* item2
* item3
`public void main()` | 1.0 | non_priority | 0
82,550 | 10,257,904,143 | IssuesEvent | 2019-08-21 21:16:49 | standardhealth/shr_design | https://api.github.com/repos/standardhealth/shr_design | closed | SHR as Language: Signs, Meaning, Code, and a Way Forward | design rant | ### Background
Language is traditionally seen as consisting of three parts: signs, meanings, and a code connecting signs with their meanings. **Signs** are composed of symbols which are encoded and transmitted by the sender through a channel to the receiver, where signs are decoded. [[1](https://en.wikipedia.org/wiki/Language#Structure)] **Meaning** is what the source or sender expresses, communicates, or conveys in their message to the observer or receiver, and what the receiver infers from the current context. [[2](https://en.wikipedia.org/wiki/Meaning_%28linguistics%29)] A **Code** is a system of rules to convert information into another form or representation [[3](https://en.wikipedia.org/wiki/Code)]
In language, **grammar** is the code, or a system of rules, which governs the form of the statements in a given language. It encompasses
- **morphology** the formation and composition of words
- **syntax** the formation and composition of phrases and sentences from words
- and **phonology** in spoken language, how sounds or gestures function together
### Proposed Scope of the SHR Grammar
Use of the SHR Grammar, in the linguistic sense of the term, to constrain use of a controlled vocabulary to unambiguously represent the concepts that compose the Standard Health Record.
In scope are issues of:
- **Content** e.g. What concepts is the SHR composed of?
- **Syntax** e.g. How are statements within the SHR structured, unambiguously, without violating boundaries between the information model and the terminologies?
- **Diction** e.g. What symbols (vocabulary) are used to express the concepts within the SHR?
- **Morphology** e.g. How are symbols best combined to express concepts in the SHR (i.e. pre- versus post-coordination)?
Out of scope:
- Phonology, (irrelevant)
- Semantics, the study of the meaning of words (lexical semantics) and fixed word combinations (phraseology), and how these combine to form the meanings of sentences [[4](https://en.wikipedia.org/wiki/Linguistics)]. The province of the terminology developers.
- Semiotics, the study of meaning-making, the study of sign processes and meaningful communication, includes study of indication, designation, likeness, etc. [[5](https://en.wikipedia.org/wiki/Semiotics)]. The province of developers in the fields of NLP, AI, CDS, etc.
- Pragmatics, the study of how utterances are used in communicative acts, and the role played by context and non-linguistic knowledge in the transmission of meaning [[4](https://en.wikipedia.org/wiki/Linguistics)]. The province of Implementation Scientists.
### Technical considerations
- SNOMED-CT, RxNORM, ICD-10 (soon ICD-11), LOINC are examples of important terminologies that any workable near-term solution to health data interoperability must leverage. All are represented in the UMLS Metathesaurus [[6](https://www.nlm.nih.gov/pubs/factsheets/umlsmeta.html)]
- The OBO Foundry aims to develop a family of interoperable ontologies that are both logically well-formed and scientifically accurate for various domains within the biological sciences. These ontologies, or the product of a similar effort, will be crucial to leverage insights from upcoming -omics revolutions (e.g. genomics, proteomic, microbiomics, metabolomics, etc.) in routine patient care. _These ontologies lack the comprehensiveness and interoperability to represent all concepts in the SHR at their current levels of maturity._
- The limitations of the SNOMED-CT semantic model are well described, e.g.[[7](http://www.ncbi.nlm.nih.gov/pubmed/18789754)],[[8](http://www.ncbi.nlm.nih.gov/pubmed/22024315)],[[9](http://www.ncbi.nlm.nih.gov/pubmed/21515545)], will likely be addressed at some point in the future, and pose no short term barriers to semantic interoperability if use of the terminology can be uniformly constrained. In particular, future iterations of SNOMED-CT may link it semantic structure to a top-level ontology, such as BFO [[10](http://ifomis.uni-saarland.de/bfo/)].
- "The reliability of SNOMED-CT coding is imperfect, and is in part a result of the terminology itself, the lack of _a model for articulating rules of use for the terminology_, as well as the absence of _a model that formalizes SNOMED-CT’s semantic structure_ in a manner more reflective of clinical use cases" [[11](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3243131/pdf/0435_amia_2011_proc.pdf)]. The SHR Grammar is suited to this task.
- Multiple external ontologies/terminologies can be linked [[12](http://sigpubs.biostr.washington.edu/archive/00000236/01/gennari_icbo_2009.pdf)], leveraging an ontology of connection types specified in an existing ontology of relations within the bio sciences [[13](http://www.obofoundry.org/ontology/ro.html)], with connections maintained in a centralized database of annotations, allowing for maximum flexibility in post-coordination and processing in an environment external to the SHR Grammar.
- If the scope of the SHR Grammar is limited to the linguistic domains of diction, syntax, and morphology, questions of data processing can be pushed into the post-coordination domain, where widespread SHR adoption will result in a large and expanding corpus of uniformly represented data, enabling collaboration and acceleration of learning.
### Way Forward
- Represent all current SHR spreadsheet entries in ANTLR to provide a structured, computable basis for further development.
- Continue parallel development of a visualization layer to inspect SHR content.
- Identify candidate vocabularies for use within the SHR specification, specifying rules governing the use of each, likely relying upon UMLS concepts, SNOMED-CT terms, and possibly the Foundational Model of Anatomy [[14](http://si.washington.edu/projects/fma)].
- Refine SHR specification, identifying pattern-based rules that can be replicated throughout the grammar, based on consistent use of the most appropriate vocabulary for a given situation, at the most appropriate level of granularity.
- Leverage UMLS as a linkage between concepts in the SHR specification and evolving vocabularies and ontologies.
- Engineer a solution to represent SHR Specification in FHIR profiles.
- Engage clinicians, likely through clinical specialty societies, in authoring of additional SHR specification content.
| 1.0 | non_priority | 0
49,297 | 7,493,977,773 | IssuesEvent | 2018-04-07 03:07:37 | coala/coala | https://api.github.com/repos/coala/coala | closed | Modify installing from git instructions in development_setup documentation | area/documentation difficulty/newcomer | https://github.com/coala/coala/blob/master/docs/Developers/Development_Setup.rst#installing-from-git
Change ```cd -``` to ```cd ..```
This makes it compatible with windows cmd as well
(and also will run in case the OLDPWD variable is not set by chance in any linux environment)
| 1.0 | Modify installing from git instructions in development_setup documentation - https://github.com/coala/coala/blob/master/docs/Developers/Development_Setup.rst#installing-from-git
Change ```cd -``` to ```cd ..```
This makes it compatible with windows cmd as well
(and also will run in case the OLDPWD variable is not set by chance in any linux environment)
| non_priority | modify installing from git instructions in development setup documentation change cd to cd this makes it compatible with windows cmd as well and also will run in case the oldpwd variable is not set by chance in any linux environment | 0 |
76,179 | 26,276,535,522 | IssuesEvent | 2023-01-06 22:48:59 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | closed | Linux Kernel 6.2 removed bio_set_op_attrs | Type: Defect | ### System information
Type | Version/Name
--- | ---
Distribution Name | Gentoo
Distribution Version | -
Kernel Version | `next-20221220`
Architecture | LoongArch
OpenZFS Version | 2.1.99-1641_gc935fe2e9
### Describe the problem you're observing
```
In file included from /var/tmp/portage/sys-fs/zfs-loong-kmod-9999/work/zfs-loong-kmod-9999/include/os/linux/spl/sys/uio.h:31,
from /var/tmp/portage/sys-fs/zfs-loong-kmod-9999/work/zfs-loong-kmod-9999/include/os/linux/spl/sys/sunddi.h:28,
from /var/tmp/portage/sys-fs/zfs-loong-kmod-9999/work/zfs-loong-kmod-9999/module/os/linux/spl/spl-generic.c:42:
/var/tmp/portage/sys-fs/zfs-loong-kmod-9999/work/zfs-loong-kmod-9999/include/os/linux/kernel/linux/blkdev_compat.h: In function ‘bio_set_op_attrs’:
/var/tmp/portage/sys-fs/zfs-loong-kmod-9999/work/zfs-loong-kmod-9999/include/os/linux/kernel/linux/blkdev_compat.h:397:12: error: ‘struct bio’ has no member named ‘bi_rw’
397 | bio->bi_rw |= rw | flags;
| ^~
make[4]: *** [scripts/Makefile.build:250: /var/tmp/portage/sys-fs/zfs-loong-kmod-9999/work/zfs-loong-kmod-9999/module/os/linux/spl/spl-generic.o] Error 1
```
⚠️ ⚠️ **Linux Kernel renamed bio `bi_rw` to `bi_opf`: https://github.com/torvalds/linux/commit/1eff9d322a444245c67515edb52bc0eb68374aa8** ⚠️ ⚠️
⚠️ ⚠️ **and it has recently dropped `bio_set_op_attrs`: https://github.com/torvalds/linux/commit/c34b7ac65087554627f4840f4ecd6f2107a68fd1** ⚠️ ⚠️
### Describe how to reproduce the problem
Build oepnzfs master with a post-6.1 kernel.
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
N/A | 1.0 | Linux Kernel 6.2 removed bio_set_op_attrs - ### System information
Type | Version/Name
--- | ---
Distribution Name | Gentoo
Distribution Version | -
Kernel Version | `next-20221220`
Architecture | LoongArch
OpenZFS Version | 2.1.99-1641_gc935fe2e9
### Describe the problem you're observing
```
In file included from /var/tmp/portage/sys-fs/zfs-loong-kmod-9999/work/zfs-loong-kmod-9999/include/os/linux/spl/sys/uio.h:31,
from /var/tmp/portage/sys-fs/zfs-loong-kmod-9999/work/zfs-loong-kmod-9999/include/os/linux/spl/sys/sunddi.h:28,
from /var/tmp/portage/sys-fs/zfs-loong-kmod-9999/work/zfs-loong-kmod-9999/module/os/linux/spl/spl-generic.c:42:
/var/tmp/portage/sys-fs/zfs-loong-kmod-9999/work/zfs-loong-kmod-9999/include/os/linux/kernel/linux/blkdev_compat.h: In function ‘bio_set_op_attrs’:
/var/tmp/portage/sys-fs/zfs-loong-kmod-9999/work/zfs-loong-kmod-9999/include/os/linux/kernel/linux/blkdev_compat.h:397:12: error: ‘struct bio’ has no member named ‘bi_rw’
397 | bio->bi_rw |= rw | flags;
| ^~
make[4]: *** [scripts/Makefile.build:250: /var/tmp/portage/sys-fs/zfs-loong-kmod-9999/work/zfs-loong-kmod-9999/module/os/linux/spl/spl-generic.o] Error 1
```
⚠️ ⚠️ **Linux Kernel renamed bio `bi_rw` to `bi_opf`: https://github.com/torvalds/linux/commit/1eff9d322a444245c67515edb52bc0eb68374aa8** ⚠️ ⚠️
⚠️ ⚠️ **and it has recently dropped `bio_set_op_attrs`: https://github.com/torvalds/linux/commit/c34b7ac65087554627f4840f4ecd6f2107a68fd1** ⚠️ ⚠️
### Describe how to reproduce the problem
Build oepnzfs master with a post-6.1 kernel.
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
N/A | non_priority | linux kernel removed bio set op attrs system information type version name distribution name gentoo distribution version kernel version next architecture loongarch openzfs version describe the problem you re observing in file included from var tmp portage sys fs zfs loong kmod work zfs loong kmod include os linux spl sys uio h from var tmp portage sys fs zfs loong kmod work zfs loong kmod include os linux spl sys sunddi h from var tmp portage sys fs zfs loong kmod work zfs loong kmod module os linux spl spl generic c var tmp portage sys fs zfs loong kmod work zfs loong kmod include os linux kernel linux blkdev compat h in function ‘bio set op attrs’ var tmp portage sys fs zfs loong kmod work zfs loong kmod include os linux kernel linux blkdev compat h error ‘struct bio’ has no member named ‘bi rw’ bio bi rw rw flags make error ⚠️ ⚠️ linux kernel renamed bio bi rw to bi opf ⚠️ ⚠️ ⚠️ ⚠️ and it has recently dropped bio set op attrs ⚠️ ⚠️ describe how to reproduce the problem build oepnzfs master with a post kernel include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with n a | 0 |
38,640 | 19,471,285,442 | IssuesEvent | 2021-12-24 01:52:58 | google/iree | https://api.github.com/repos/google/iree | opened | Large performance regression in tosa conv ops after upstream merge. | performance ⚡ | This merge commit caused a significant regression in tosa models with conv ops:
https://github.com/google/iree/commit/63a4724e168243a4697d654e90eb521a5e293bcf

From looking at the IR before/after the merge it seems like conv2d handling changed to now not turn them into fully connected ops:

Which then ends up as `linalg.conv_2d_nhwc_hwcf` instead of `linalg.matmul` like before.
I don't know enough to know what layer should be handling this - it feels like if there's such an extreme performance difference we'd want something at the linalg level to take the linalg conv op into matmuls instead of relying on that to happen at the frontend.
| True | Large performance regression in tosa conv ops after upstream merge. - This merge commit caused a significant regression in tosa models with conv ops:
https://github.com/google/iree/commit/63a4724e168243a4697d654e90eb521a5e293bcf

From looking at the IR before/after the merge it seems like conv2d handling changed to now not turn them into fully connected ops:

Which then ends up as `linalg.conv_2d_nhwc_hwcf` instead of `linalg.matmul` like before.
I don't know enough to know what layer should be handling this - it feels like if there's such an extreme performance difference we'd want something at the linalg level to take the linalg conv op into matmuls instead of relying on that to happen at the frontend.
| non_priority | large performance regression in tosa conv ops after upstream merge this merge commit caused a significant regression in tosa models with conv ops from looking at the ir before after the merge it seems like handling changed to now not turn them into fully connected ops which then ends up as linalg conv nhwc hwcf instead of linalg matmul like before i don t know enough to know what layer should be handling this it feels like if there s such an extreme performance difference we d want something at the linalg level to take the linalg conv op into matmuls instead of relying on that to happen at the frontend | 0 |
236,136 | 19,516,213,211 | IssuesEvent | 2021-12-29 10:39:56 | dusk-network/dusk-blockchain | https://api.github.com/repos/dusk-network/dusk-blockchain | closed | Implement graceful shutdown for Kadcast peer | mark:testnet | **Describe the bug**
`stream.Recv()` should be cancelable. With current impl, cancelling Reader context will not work if stream.Recv blocks infinitely. In addition, both Reader and Writer should use same context instance.
| 1.0 | Implement graceful shutdown for Kadcast peer - **Describe the bug**
`stream.Recv()` should be cancelable. With current impl, cancelling Reader context will not work if stream.Recv blocks infinitely. In addition, both Reader and Writer should use same context instance.
| non_priority | implement graceful shutdown for kadcast peer describe the bug stream recv should be cancelable with current impl cancelling reader context will not work if stream recv blocks infinitely in addition both reader and writer should use same context instance | 0 |
39,298 | 5,072,091,870 | IssuesEvent | 2016-12-26 19:10:46 | USGS-CIDA/metab_tests | https://api.github.com/repos/USGS-CIDA/metab_tests | closed | Experiment with model approaches | design factor | Leading models:
- the non-state-space, "shortcut" model by Charles
- hierarchical state-space model run with Stan
Others:
- MLE-PRK (observation error only)
- nighttime regression plus MLE-PR, probably linked by a K~Q regression
- MLE-PRK with process error only
- Bayesian with observation error only
| 1.0 | Experiment with model approaches - Leading models:
- the non-state-space, "shortcut" model by Charles
- hierarchical state-space model run with Stan
Others:
- MLE-PRK (observation error only)
- nighttime regression plus MLE-PR, probably linked by a K~Q regression
- MLE-PRK with process error only
- Bayesian with observation error only
| non_priority | experiment with model approaches leading models the non state space shortcut model by charles hierarchical state space model run with stan others mle prk observation error only nighttime regression plus mle pr probably linked by a k q regression mle prk with process error only bayesian with observation error only | 0 |
114,495 | 24,609,685,591 | IssuesEvent | 2022-10-14 19:56:58 | lucasferreiram3/PyGoat | https://api.github.com/repos/lucasferreiram3/PyGoat | opened | Improper Neutralization of Script-Related HTML Tags in a Web Page (Basic XSS) [VID:80:pygoat/introduction/apis.py:83] | VeracodeFlaw: Medium Veracode Pipeline Scan | **Filename:** pygoat/introduction/apis.py
**Line:** 83
**CWE:** 80 (Improper Neutralization of Script-Related HTML Tags in a Web Page (Basic XSS))
<span>This call to django.http.JsonResponse() contains a cross-site scripting (XSS) flaw. The application populates the HTTP response with user-supplied input, allowing an attacker to embed malicious content, such as Javascript code, which will be executed in the context of the victim's browser. XSS vulnerabilities are commonly exploited to steal or manipulate cookies, modify presentation of content, and compromise confidential information, with new attack vectors being discovered on a regular basis. </span> <span>Use contextual escaping on all untrusted data before using it to construct any portion of an HTTP response. The escaping method should be chosen based on the specific use case of the untrusted data, otherwise it may not protect fully against the attack. For example, if the data is being written to the body of an HTML page, use HTML entity escaping; if the data is being written to an attribute, use attribute escaping; etc. Both the OWASP Java Encoder library and the Microsoft AntiXSS library provide contextual escaping methods. For more details on contextual escaping, see https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html. In addition, as a best practice, always validate user-supplied input to ensure that it conforms to the expected format, using centralized data validation routines when possible.</span> <span>References: <a href="https://cwe.mitre.org/data/definitions/79.html">CWE</a> <a href="https://owasp.org/www-community/attacks/xss/">OWASP</a> <a href="https://help.veracode.com/go/review_cleansers">Supported Cleansers</a></span>
| 2.0 | Improper Neutralization of Script-Related HTML Tags in a Web Page (Basic XSS) [VID:80:pygoat/introduction/apis.py:83] - **Filename:** pygoat/introduction/apis.py
**Line:** 83
**CWE:** 80 (Improper Neutralization of Script-Related HTML Tags in a Web Page (Basic XSS))
<span>This call to django.http.JsonResponse() contains a cross-site scripting (XSS) flaw. The application populates the HTTP response with user-supplied input, allowing an attacker to embed malicious content, such as Javascript code, which will be executed in the context of the victim's browser. XSS vulnerabilities are commonly exploited to steal or manipulate cookies, modify presentation of content, and compromise confidential information, with new attack vectors being discovered on a regular basis. </span> <span>Use contextual escaping on all untrusted data before using it to construct any portion of an HTTP response. The escaping method should be chosen based on the specific use case of the untrusted data, otherwise it may not protect fully against the attack. For example, if the data is being written to the body of an HTML page, use HTML entity escaping; if the data is being written to an attribute, use attribute escaping; etc. Both the OWASP Java Encoder library and the Microsoft AntiXSS library provide contextual escaping methods. For more details on contextual escaping, see https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html. In addition, as a best practice, always validate user-supplied input to ensure that it conforms to the expected format, using centralized data validation routines when possible.</span> <span>References: <a href="https://cwe.mitre.org/data/definitions/79.html">CWE</a> <a href="https://owasp.org/www-community/attacks/xss/">OWASP</a> <a href="https://help.veracode.com/go/review_cleansers">Supported Cleansers</a></span>
| non_priority | improper neutralization of script related html tags in a web page basic xss filename pygoat introduction apis py line cwe improper neutralization of script related html tags in a web page basic xss this call to django http jsonresponse contains a cross site scripting xss flaw the application populates the http response with user supplied input allowing an attacker to embed malicious content such as javascript code which will be executed in the context of the victim s browser xss vulnerabilities are commonly exploited to steal or manipulate cookies modify presentation of content and compromise confidential information with new attack vectors being discovered on a regular basis use contextual escaping on all untrusted data before using it to construct any portion of an http response the escaping method should be chosen based on the specific use case of the untrusted data otherwise it may not protect fully against the attack for example if the data is being written to the body of an html page use html entity escaping if the data is being written to an attribute use attribute escaping etc both the owasp java encoder library and the microsoft antixss library provide contextual escaping methods for more details on contextual escaping see in addition as a best practice always validate user supplied input to ensure that it conforms to the expected format using centralized data validation routines when possible references | 0 |
111,188 | 11,726,360,025 | IssuesEvent | 2020-03-10 14:24:29 | dillon435/Activities-Project | https://api.github.com/repos/dillon435/Activities-Project | closed | boards | documentation | Your boards look like you have completed the project. This may be true. If you have other tasks to do, be sure to add them to the boards. | 1.0 | boards - Your boards look like you have completed the project. This may be true. If you have other tasks to do, be sure to add them to the boards. | non_priority | boards your boards look like you have completed the project this may be true if you have other tasks to do be sure to add them to the boards | 0 |
300,291 | 25,956,621,321 | IssuesEvent | 2022-12-18 10:14:37 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | kv/kvserver: TestReplicaClosedTimestamp failed | C-test-failure O-robot branch-master | kv/kvserver.TestReplicaClosedTimestamp [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8008755?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8008755?buildTab=artifacts#/) on master @ [93ed65565357538c9048ff45c878a493f2ed9b45](https://github.com/cockroachdb/cockroach/commits/93ed65565357538c9048ff45c878a493f2ed9b45):
```
=== RUN TestReplicaClosedTimestamp
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/33e1d369c27b9c01b2b6009c561815a3/logTestReplicaClosedTimestamp736341010
test_log_scope.go:79: use -show-logs to present logs inline
=== CONT TestReplicaClosedTimestamp
replica_closedts_internal_test.go:575: -- test log scope end --
test logs left over in: /artifacts/tmp/_tmp/33e1d369c27b9c01b2b6009c561815a3/logTestReplicaClosedTimestamp736341010
--- FAIL: TestReplicaClosedTimestamp (0.12s)
=== RUN TestReplicaClosedTimestamp/sidetrans_closed_ahead
replica_closedts_internal_test.go:572:
Error Trace: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/2167/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/kv/kvserver/kvserver_test_/kvserver_test.runfiles/com_github_cockroachdb_cockroach/pkg/kv/kvserver/replica_closedts_internal_test.go:572
Error: Not equal:
expected: hlc.Timestamp{WallTime:2, Logical:0, Synthetic:false}
actual : hlc.Timestamp{WallTime:1, Logical:0, Synthetic:false}
Diff:
--- Expected
+++ Actual
@@ -1,3 +1,3 @@
(hlc.Timestamp) {
- WallTime: (int64) 2,
+ WallTime: (int64) 1,
Logical: (int32) 0,
Test: TestReplicaClosedTimestamp/sidetrans_closed_ahead
--- FAIL: TestReplicaClosedTimestamp/sidetrans_closed_ahead (0.04s)
```
<p>Parameters: <code>TAGS=bazel,gss,deadlock</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/replication
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestReplicaClosedTimestamp.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 1.0 | kv/kvserver: TestReplicaClosedTimestamp failed - kv/kvserver.TestReplicaClosedTimestamp [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8008755?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8008755?buildTab=artifacts#/) on master @ [93ed65565357538c9048ff45c878a493f2ed9b45](https://github.com/cockroachdb/cockroach/commits/93ed65565357538c9048ff45c878a493f2ed9b45):
```
=== RUN TestReplicaClosedTimestamp
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/33e1d369c27b9c01b2b6009c561815a3/logTestReplicaClosedTimestamp736341010
test_log_scope.go:79: use -show-logs to present logs inline
=== CONT TestReplicaClosedTimestamp
replica_closedts_internal_test.go:575: -- test log scope end --
test logs left over in: /artifacts/tmp/_tmp/33e1d369c27b9c01b2b6009c561815a3/logTestReplicaClosedTimestamp736341010
--- FAIL: TestReplicaClosedTimestamp (0.12s)
=== RUN TestReplicaClosedTimestamp/sidetrans_closed_ahead
replica_closedts_internal_test.go:572:
Error Trace: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/2167/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/kv/kvserver/kvserver_test_/kvserver_test.runfiles/com_github_cockroachdb_cockroach/pkg/kv/kvserver/replica_closedts_internal_test.go:572
Error: Not equal:
expected: hlc.Timestamp{WallTime:2, Logical:0, Synthetic:false}
actual : hlc.Timestamp{WallTime:1, Logical:0, Synthetic:false}
Diff:
--- Expected
+++ Actual
@@ -1,3 +1,3 @@
(hlc.Timestamp) {
- WallTime: (int64) 2,
+ WallTime: (int64) 1,
Logical: (int32) 0,
Test: TestReplicaClosedTimestamp/sidetrans_closed_ahead
--- FAIL: TestReplicaClosedTimestamp/sidetrans_closed_ahead (0.04s)
```
<p>Parameters: <code>TAGS=bazel,gss,deadlock</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/replication
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestReplicaClosedTimestamp.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_priority | kv kvserver testreplicaclosedtimestamp failed kv kvserver testreplicaclosedtimestamp with on master run testreplicaclosedtimestamp test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline cont testreplicaclosedtimestamp replica closedts internal test go test log scope end test logs left over in artifacts tmp tmp fail testreplicaclosedtimestamp run testreplicaclosedtimestamp sidetrans closed ahead replica closedts internal test go error trace home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out fastbuild bin pkg kv kvserver kvserver test kvserver test runfiles com github cockroachdb cockroach pkg kv kvserver replica closedts internal test go error not equal expected hlc timestamp walltime logical synthetic false actual hlc timestamp walltime logical synthetic false diff expected actual hlc timestamp walltime walltime logical test testreplicaclosedtimestamp sidetrans closed ahead fail testreplicaclosedtimestamp sidetrans closed ahead parameters tags bazel gss deadlock help see also cc cockroachdb replication | 0 |
17,180 | 10,617,723,764 | IssuesEvent | 2019-10-12 21:19:19 | cityofaustin/atd-vz-data | https://api.github.com/repos/cityofaustin/atd-vz-data | opened | VZE: Person/Primary Person death_cnt | Need: 2-Should Have Project: Vision Zero Crash Data System Service: Dev Workgroup: VZ | It appears the death count in the location page aggregates data from two tables: primary person and person.
These table have their own death_cnt column, separate from the crash's table.
Two things can be done to fix this:
1. Create two columns that provide revisions for APD for (ie. apd_confirmed_death_count), just like in the crash table in issue #236.
2. Change VZE's Location page to retrieve aggregate data from the crash's table apd columns. | 1.0 | VZE: Person/Primary Person death_cnt - It appears the death count in the location page aggregates data from two tables: primary person and person.
These table have their own death_cnt column, separate from the crash's table.
Two things can be done to fix this:
1. Create two columns that provide revisions for APD for (ie. apd_confirmed_death_count), just like in the crash table in issue #236.
2. Change VZE's Location page to retrieve aggregate data from the crash's table apd columns. | non_priority | vze person primary person death cnt it appears the death count in the location page aggregates data from two tables primary person and person these table have their own death cnt column separate from the crash s table two things can be done to fix this create two columns that provide revisions for apd for ie apd confirmed death count just like in the crash table in issue change vze s location page to retrieve aggregate data from the crash s table apd columns | 0 |
17,963 | 23,973,941,737 | IssuesEvent | 2022-09-13 09:58:56 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Update the Note in Runbook types section | automation/svc triaged cxp doc-enhancement process-automation/subsvc Pri2 | Note in [Runbook types](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks#runbook-types) section needs update. It also needs to provide a reference to [Start a child runbook by using a cmdlet](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks#start-a-child-runbook-by-using-a-cmdlet) section and explain issues associated with starting a child runbook using the cmdlet. Reference: https://docs.microsoft.com/en-us/answers/questions/918563/index.html
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 23c183d0-5012-e2e1-5562-69135b3f6509
* Version Independent ID: 7f36ff87-e24a-7442-8d42-f621f5391814
* Content: [Create modular runbooks in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks#runbook-types)
* Content Source: [articles/automation/automation-child-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-child-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha** | 1.0 | Update the Note in Runbook types section - Note in [Runbook types](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks#runbook-types) section needs update. It also needs to provide a reference to [Start a child runbook by using a cmdlet](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks#start-a-child-runbook-by-using-a-cmdlet) section and explain issues associated with starting a child runbook using the cmdlet. Reference: https://docs.microsoft.com/en-us/answers/questions/918563/index.html
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 23c183d0-5012-e2e1-5562-69135b3f6509
* Version Independent ID: 7f36ff87-e24a-7442-8d42-f621f5391814
* Content: [Create modular runbooks in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-child-runbooks#runbook-types)
* Content Source: [articles/automation/automation-child-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-child-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha** | non_priority | update the note in runbook types section note in section needs update it also needs to provide a reference to section and explain issues associated with starting a child runbook using the cmdlet reference document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login snehasudhirg microsoft alias sudhirsneha | 0 |
17,031 | 10,593,568,601 | IssuesEvent | 2019-10-09 15:05:45 | prometheus/prometheus | https://api.github.com/repos/prometheus/prometheus | closed | Service discovery errors during startup in v2.13.0 | component/service discovery | ## Bug Report
**What did you do?**
We've upgraded Prometheus from 2.6.1 to 2.13.0.
When Prometheus is starting up there are service discovery errors logged, partly because of context cancellation. Service discovery otherwise works.
Also it seems the config is read twice.
**What did you expect to see?**
No errors during startup.
**What did you see instead? Under which circumstances?**
```
level=info ts=2019-10-08T15:01:57.136Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config/prometheus.yml
level=error ts=2019-10-08T15:01:57.138Z caller=pod.go:85 component="discovery manager scrape" discovery=k8s role=pod msg="pod informer unable to sync cache"
level=error ts=2019-10-08T15:01:57.138Z caller=refresh.go:78 component="discovery manager scrape" discovery=ec2 msg="Unable to refresh target groups" err="could not describe instances: RequestCanceled: request context canceled\ncaused by: context canceled"
level=error ts=2019-10-08T15:01:57.138Z caller=node.go:82 component="discovery manager scrape" discovery=k8s role=node msg="node informer unable to sync cache"
level=error ts=2019-10-08T15:01:57.138Z caller=refresh.go:78 component="discovery manager scrape" discovery=ec2 msg="Unable to refresh target groups" err="could not describe instances: RequestCanceled: request context canceled\ncaused by: context canceled"
level=info ts=2019-10-08T15:01:57.138Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-10-08T15:01:57.139Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-10-08T15:01:57.178Z caller=main.go:771 msg="Completed loading of configuration file" filename=/etc/prometheus/config/prometheus.yml
level=info ts=2019-10-08T15:01:57.182Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config/prometheus.yml
level=error ts=2019-10-08T15:01:57.183Z caller=pod.go:85 component="discovery manager scrape" discovery=k8s role=pod msg="pod informer unable to sync cache"
level=error ts=2019-10-08T15:01:57.183Z caller=node.go:82 component="discovery manager scrape" discovery=k8s role=node msg="node informer unable to sync cache"
level=error ts=2019-10-08T15:01:57.183Z caller=refresh.go:78 component="discovery manager scrape" discovery=ec2 msg="Unable to refresh target groups" err="could not describe instances: RequestCanceled: request context canceled\ncaused by: context canceled"
level=info ts=2019-10-08T15:01:57.183Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=error ts=2019-10-08T15:01:57.183Z caller=refresh.go:78 component="discovery manager scrape" discovery=ec2 msg="Unable to refresh target groups" err="could not describe instances: RequestCanceled: request context canceled\ncaused by: context canceled"
level=info ts=2019-10-08T15:01:57.184Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-10-08T15:01:57.224Z caller=main.go:771 msg="Completed loading of configuration file" filename=/etc/prometheus/config/prometheus.yml
```
I checked `prometheus_sd_refresh_duration_seconds` and it doesn't show any abnormal values at startup.
`prometheus_sd_refresh_failures_total` stays at 4 after startup.
The nodes were doing a fairly long WAL replay before this.
**Environment**
* System information:
We are running the equivalent of the upstream Docker image, but changing the user.
* Prometheus version:
```
prometheus, version 2.13.0 (branch: HEAD, revision: 6ea4252299f542669aca11860abc2192bdc7bede)
build user: root@188fafb5b41b
build date: 20191008-10:23:04
go version: go1.13.1
```
| 1.0 | Service discovery errors during startup in v2.13.0 - ## Bug Report
**What did you do?**
We've upgraded Prometheus from 2.6.1 to 2.13.0.
When Prometheus is starting up there are service discovery errors logged, partly because of context cancellation. Service discovery otherwise works.
Also it seems the config is read twice.
**What did you expect to see?**
No errors during startup.
**What did you see instead? Under which circumstances?**
```
level=info ts=2019-10-08T15:01:57.136Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config/prometheus.yml
level=error ts=2019-10-08T15:01:57.138Z caller=pod.go:85 component="discovery manager scrape" discovery=k8s role=pod msg="pod informer unable to sync cache"
level=error ts=2019-10-08T15:01:57.138Z caller=refresh.go:78 component="discovery manager scrape" discovery=ec2 msg="Unable to refresh target groups" err="could not describe instances: RequestCanceled: request context canceled\ncaused by: context canceled"
level=error ts=2019-10-08T15:01:57.138Z caller=node.go:82 component="discovery manager scrape" discovery=k8s role=node msg="node informer unable to sync cache"
level=error ts=2019-10-08T15:01:57.138Z caller=refresh.go:78 component="discovery manager scrape" discovery=ec2 msg="Unable to refresh target groups" err="could not describe instances: RequestCanceled: request context canceled\ncaused by: context canceled"
level=info ts=2019-10-08T15:01:57.138Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-10-08T15:01:57.139Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-10-08T15:01:57.178Z caller=main.go:771 msg="Completed loading of configuration file" filename=/etc/prometheus/config/prometheus.yml
level=info ts=2019-10-08T15:01:57.182Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config/prometheus.yml
level=error ts=2019-10-08T15:01:57.183Z caller=pod.go:85 component="discovery manager scrape" discovery=k8s role=pod msg="pod informer unable to sync cache"
level=error ts=2019-10-08T15:01:57.183Z caller=node.go:82 component="discovery manager scrape" discovery=k8s role=node msg="node informer unable to sync cache"
level=error ts=2019-10-08T15:01:57.183Z caller=refresh.go:78 component="discovery manager scrape" discovery=ec2 msg="Unable to refresh target groups" err="could not describe instances: RequestCanceled: request context canceled\ncaused by: context canceled"
level=info ts=2019-10-08T15:01:57.183Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=error ts=2019-10-08T15:01:57.183Z caller=refresh.go:78 component="discovery manager scrape" discovery=ec2 msg="Unable to refresh target groups" err="could not describe instances: RequestCanceled: request context canceled\ncaused by: context canceled"
level=info ts=2019-10-08T15:01:57.184Z caller=kubernetes.go:192 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-10-08T15:01:57.224Z caller=main.go:771 msg="Completed loading of configuration file" filename=/etc/prometheus/config/prometheus.yml
```
I checked `prometheus_sd_refresh_duration_seconds` and it doesn't show any abnormal values at startup.
`prometheus_sd_refresh_failures_total` stays at 4 after startup.
The nodes were doing a fairly long WAL replay before this.
**Environment**
* System information:
We are running the equivalent of the upstream Docker image, but changing the user.
* Prometheus version:
```
prometheus, version 2.13.0 (branch: HEAD, revision: 6ea4252299f542669aca11860abc2192bdc7bede)
build user: root@188fafb5b41b
build date: 20191008-10:23:04
go version: go1.13.1
```
| non_priority | service discovery errors during startup in bug report what did you do we ve upgraded prometheus from to when prometheus is starting up there are service discovery errors logged partly because of context cancellation service discovery otherwise works also it seems the config is read twice what did you expect to see no errors during startup what did you see instead under which circumstances level info ts caller main go msg loading configuration file filename etc prometheus config prometheus yml level error ts caller pod go component discovery manager scrape discovery role pod msg pod informer unable to sync cache level error ts caller refresh go component discovery manager scrape discovery msg unable to refresh target groups err could not describe instances requestcanceled request context canceled ncaused by context canceled level error ts caller node go component discovery manager scrape discovery role node msg node informer unable to sync cache level error ts caller refresh go component discovery manager scrape discovery msg unable to refresh target groups err could not describe instances requestcanceled request context canceled ncaused by context canceled level info ts caller kubernetes go component discovery manager scrape discovery msg using pod service account via in cluster config level info ts caller kubernetes go component discovery manager scrape discovery msg using pod service account via in cluster config level info ts caller main go msg completed loading of configuration file filename etc prometheus config prometheus yml level info ts caller main go msg loading configuration file filename etc prometheus config prometheus yml level error ts caller pod go component discovery manager scrape discovery role pod msg pod informer unable to sync cache level error ts caller node go component discovery manager scrape discovery role node msg node informer unable to sync cache level error ts caller refresh go component discovery manager scrape 
discovery msg unable to refresh target groups err could not describe instances requestcanceled request context canceled ncaused by context canceled level info ts caller kubernetes go component discovery manager scrape discovery msg using pod service account via in cluster config level error ts caller refresh go component discovery manager scrape discovery msg unable to refresh target groups err could not describe instances requestcanceled request context canceled ncaused by context canceled level info ts caller kubernetes go component discovery manager scrape discovery msg using pod service account via in cluster config level info ts caller main go msg completed loading of configuration file filename etc prometheus config prometheus yml i checked prometheus sd refresh duration seconds and it doesn t show any abnormal values at startup prometheus sd refresh failures total stays at after startup the nodes were doing a fairly long wal replay before this environment system information we are running the equivalent of the upstream docker image but changing the user prometheus version prometheus version branch head revision build user root build date go version | 0 |
170,107 | 14,240,793,148 | IssuesEvent | 2020-11-18 22:15:40 | matplotlib/matplotlib | https://api.github.com/repos/matplotlib/matplotlib | closed | Mention rasterized option in more methods | Documentation Good first issue | We should mention rasterized in a few places in the docstrings of relavent methods. Ie pcolormesh, contour etc. This option is probably the number one reason I started to use matplotlib, but it seems a lot of people don’t realize it exists. | 1.0 | Mention rasterized option in more methods - We should mention rasterized in a few places in the docstrings of relavent methods. Ie pcolormesh, contour etc. This option is probably the number one reason I started to use matplotlib, but it seems a lot of people don’t realize it exists. | non_priority | mention rasterized option in more methods we should mention rasterized in a few places in the docstrings of relavent methods ie pcolormesh contour etc this option is probably the number one reason i started to use matplotlib but it seems a lot of people don’t realize it exists | 0 |
387,052 | 26,711,326,939 | IssuesEvent | 2023-01-28 00:35:58 | gbowne1/reactsocialnetwork | https://api.github.com/repos/gbowne1/reactsocialnetwork | opened | Fix these minor CSS issues | bug documentation enhancement help wanted good first issue question | Describe the bug
[App]
Fix these minor CSS issues.
CSS Issue(s):
- Footer.css: `<Footer class=Footer-body>`
Error is in browser console. `Error in parsing value for ‘float’. Declaration dropped.`
- App.css <.Register-button>
Error is in browser console. `Error in parsing value for ‘align-items’. Declaration dropped.`
This may be moved from App.css to Register.css
- App.css <.Login-button>
Error is in browser console. `Error in parsing value for ‘align-items’. Declaration dropped.`
This may be moved from App.css to Login.css
To Reproduce
Steps to reproduce the behavior:
May need to `npm i` or `npm install` as I did not include my local node_modules dir in the initial commit & push upload
run `npm start` in the project root to start the dev server. Browser will open once the dev server is started.
Expected behavior
Working CSS selectors
Desktop (please complete the following information):
OS: [e.g. iOS]
Linux, 64bit
Browser [e.g. chrome, safari]
Firefox
Version [e.g. 22]
107
Additional context
| 1.0 | Fix these minor CSS issues - Describe the bug
[App]
Fix these minor CSS issues.
CSS Issue(s):
- Footer.css: `<Footer class=Footer-body>`
Error is in browser console. `Error in parsing value for ‘float’. Declaration dropped.`
- App.css <.Register-button>
Error is in browser console. `Error in parsing value for ‘align-items’. Declaration dropped.`
This may be moved from App.css to Register.css
- App.css <.Login-button>
Error is in browser console. `Error in parsing value for ‘align-items’. Declaration dropped.`
This may be moved from App.css to Login.css
To Reproduce
Steps to reproduce the behavior:
May need to `npm i` or `npm install` as I did not include my local node_modules dir in the initial commit & push upload
run `npm start` in the project root to start the dev server. Browser will open once the dev server is started.
Expected behavior
Working CSS selectors
Desktop (please complete the following information):
OS: [e.g. iOS]
Linux, 64bit
Browser [e.g. chrome, safari]
Firefox
Version [e.g. 22]
107
Additional context
| non_priority | fix these minor css issues describe the bug fix these minor css issues css issue s footer css error is in browser console error in parsing value for ‘float’ declaration dropped app css error is in browser console error in parsing value for ‘align items’ declaration dropped this may be moved from app css to register css app css error is in browser console error in parsing value for ‘align items’ declaration dropped this may be moved from app css to login css to reproduce steps to reproduce the behavior may need to npm i or npm install as i did not include my local node modules dir in the initial commit push upload run npm start in the project root to start the dev server browser will open once the dev server is started expected behavior working css selectors desktop please complete the following information os linux browser firefox version additional context | 0 |
283,158 | 30,889,610,083 | IssuesEvent | 2023-08-04 02:59:12 | maddyCode23/linux-4.1.15 | https://api.github.com/repos/maddyCode23/linux-4.1.15 | reopened | CVE-2022-4129 (Medium) detected in linux-stable-rtv4.1.33, linux-stable-rtv4.1.33 | Mend: dependency security vulnerability | ## CVE-2022-4129 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-stable-rtv4.1.33</b>, <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux kernel's Layer 2 Tunneling Protocol (L2TP). A missing lock when clearing sk_user_data can lead to a race condition and NULL pointer dereference. A local user could use this flaw to potentially crash the system causing a denial of service.
<p>Publish Date: 2022-11-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4129>CVE-2022-4129</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-4129">https://www.linuxkernelcves.com/cves/CVE-2022-4129</a></p>
<p>Release Date: 2022-11-28</p>
<p>Fix Resolution: v5.4.231,v5.10.166,v5.15.91</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-4129 (Medium) detected in linux-stable-rtv4.1.33, linux-stable-rtv4.1.33 - ## CVE-2022-4129 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-stable-rtv4.1.33</b>, <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux kernel's Layer 2 Tunneling Protocol (L2TP). A missing lock when clearing sk_user_data can lead to a race condition and NULL pointer dereference. A local user could use this flaw to potentially crash the system causing a denial of service.
<p>Publish Date: 2022-11-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4129>CVE-2022-4129</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-4129">https://www.linuxkernelcves.com/cves/CVE-2022-4129</a></p>
<p>Release Date: 2022-11-28</p>
<p>Fix Resolution: v5.4.231,v5.10.166,v5.15.91</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in linux stable linux stable cve medium severity vulnerability vulnerable libraries linux stable linux stable vulnerability details a flaw was found in the linux kernel s layer tunneling protocol a missing lock when clearing sk user data can lead to a race condition and null pointer dereference a local user could use this flaw to potentially crash the system causing a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
28,271 | 4,087,278,908 | IssuesEvent | 2016-06-01 09:28:39 | fossasia/open-event-webapp | https://api.github.com/repos/fossasia/open-event-webapp | closed | Best way to integrate the webapp with gentelella | design | Hi ,
As [gentelella](https://github.com/puikinsh/gentelella) was accepted as a theme by everyone , What should be the best way for the integration ?
As gentelella is jquery based , it does not follow MVC architecture of Angular JS . Hence a best modular approach is needed .
The snapshot is the MVC architecture we follow in the webapp .

| 1.0 | Best way to integrate the webapp with gentelella - Hi ,
As [gentelella](https://github.com/puikinsh/gentelella) was accepted as a theme by everyone , What should be the best way for the integration ?
As gentelella is jquery based , it does not follow MVC architecture of Angular JS . Hence a best modular approach is needed .
The snapshot is the MVC architecture we follow in the webapp .

| non_priority | best way to integrate the webapp with gentelella hi as was accepted as a theme by everyone what should be the best way for the integration as gentelella is jquery based it does not follow mvc architecture of angular js hence a best modular approach is needed the snapshot is the mvc architecture we follow in the webapp | 0 |
7,625 | 10,742,076,520 | IssuesEvent | 2019-10-29 21:38:37 | ennukee/aniupdater | https://api.github.com/repos/ennukee/aniupdater | reopened | Enhance landing screen | UX big deal enhancement release requirement | Provide more on the initial landing screen (where you input an anilist token) so that it's more user friendly to first-time visitors.
(Also allow Enter to be used to submit tokens)
Criteria
- [ ] Update "submit" button to be less... odd?
- [ ] Add logo from #19 to be bigger on this screen only
- [ ] Information about the app and its purpose (why no MAL too maybe) | 1.0 | Enhance landing screen - Provide more on the initial landing screen (where you input an anilist token) so that it's more user friendly to first-time visitors.
(Also allow Enter to be used to submit tokens)
Criteria
- [ ] Update "submit" button to be less... odd?
- [ ] Add logo from #19 to be bigger on this screen only
- [ ] Information about the app and its purpose (why no MAL too maybe) | non_priority | enhance landing screen provide more on the initial landing screen where you input an anilist token so that it s more user friendly to first time visitors also allow enter to be used to submit tokens criteria update submit button to be less odd add logo from to be bigger on this screen only information about the app and its purpose why no mal too maybe | 0 |
172,034 | 14,349,553,833 | IssuesEvent | 2020-11-29 17:04:38 | SAP/fundamental-ngx | https://api.github.com/repos/SAP/fundamental-ngx | opened | documentation versions are missing | documentation | on https://sap.github.io/fundamental-ngx/ and https://fundamental-ngx.netlify.app/ the last documentation is for 0.21.0. We are on 0.25 rc right now. We need to find a way to automate that.
Meanwhile we need to add the missing versions. | 1.0 | documentation versions are missing - on https://sap.github.io/fundamental-ngx/ and https://fundamental-ngx.netlify.app/ the last documentation is for 0.21.0. We are on 0.25 rc right now. We need to find a way to automate that.
Meanwhile we need to add the missing versions. | non_priority | documentation versions are missing on and the last documentation is for we are on rc right now we need to find a way to automate that meanwhile we need to add the missing versions | 0 |
350,229 | 24,974,119,725 | IssuesEvent | 2022-11-02 05:45:50 | cse110-fa22-group28/cse110-fa22-group28 | https://api.github.com/repos/cse110-fa22-group28/cse110-fa22-group28 | closed | User Stories Document | documentation | # Administrative or Organizational Tasks
What is the purpose of this task?
Create a document with user stories - how a user would interact with the app while using it based on their persona. Here is a link with more [information](https://en.wikipedia.org/wiki/User_story) :)
Steps to complete the task:
- [x] Do some research on User Stories Documents and find a template/sample if you can
- [x] Define 2-3 stories
- [x] Compile these in a document and format it based on your research, the sample, and details in the link given in the assignment (and above)
| 1.0 | User Stories Document - # Administrative or Organizational Tasks
What is the purpose of this task?
Create a document with user stories - how a user would interact with the app while using it based on their persona. Here is a link with more [information](https://en.wikipedia.org/wiki/User_story) :)
Steps to complete the task:
- [x] Do some research on User Stories Documents and find a template/sample if you can
- [x] Define 2-3 stories
- [x] Compile these in a document and format it based on your research, the sample, and details in the link given in the assignment (and above)
| non_priority | user stories document administrative or organizational tasks what is the purpose of this task create a document with user stories how a user would interact with the app while using it based on their persona here is a link with more steps to complete the task do some research on user stories documents and find a template sample if you can define stories compile these in a document and format it based on your research the sample and details in the link given in the assignment and above | 0 |
120,978 | 10,144,841,604 | IssuesEvent | 2019-08-05 00:50:31 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | Enable node e2e tests on Windows | area/test kind/feature lifecycle/rotten sig/windows | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
Currently there is no node e2e tests for Windows. Since the Windows features in kubernetes are going to GA, it is important to set up the job on PRs, to prevent build failure and reggressions for Windows.
**Why is this needed**:
node e2e tests are essential for Windows containers. We should probably add it after GA (e.g. v1.15).
/sig windows
/assign @PatrickLang | 1.0 | Enable node e2e tests on Windows - <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
Currently there is no node e2e tests for Windows. Since the Windows features in kubernetes are going to GA, it is important to set up the job on PRs, to prevent build failure and reggressions for Windows.
**Why is this needed**:
node e2e tests are essential for Windows containers. We should probably add it after GA (e.g. v1.15).
/sig windows
/assign @PatrickLang | non_priority | enable node tests on windows what would you like to be added currently there is no node tests for windows since the windows features in kubernetes are going to ga it is important to set up the job on prs to prevent build failure and reggressions for windows why is this needed node tests are essential for windows containers we should probably add it after ga e g sig windows assign patricklang | 0 |
65,191 | 12,539,245,069 | IssuesEvent | 2020-06-05 08:16:05 | galasa-dev/projectmanagement | https://api.github.com/repos/galasa-dev/projectmanagement | closed | The vscode workspace OBR is not taking the custom maven repository into consideration | bug vscode | The workspace obr build is failing because it is not taking any notice of the Java Maven repository or the Galasa local repository settings, I had mine set to /Users/mikebyls/git/galasa/m2/repository, I would have expected to see a --settings /Users/mikebyls/git/galasa/m2/settings.xml from the Java Maven extension:-
```
> Executing task in folder dev.galasa.zos.manager: mvn install -f /Users/mikebyls/.vscode/extensions/galasa.galasa-plugin-0.8.2/galasa-workspace/obr <
[INFO] Scanning for projects...
[WARNING] The POM for dev.galasa:galasa-maven-plugin:jar:0.9.0-SNAPSHOT is missing, no dependency information available
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[ERROR] Unresolveable build extension: Plugin dev.galasa:galasa-maven-plugin:0.9.0-SNAPSHOT or one of its dependencies could not be resolved: Could not find artifact dev.galasa:galasa-maven-plugin:jar:0.9.0-SNAPSHOT @
[ERROR] Unknown packaging: galasa-obr @ line 7, column 13
@
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR] The project dev.galasa:vscode.workspace.obr:0.9.0-SNAPSHOT (/Users/mikebyls/.vscode/extensions/galasa.galasa-plugin-0.8.2/galasa-workspace/obr/pom.xml) has 2 errors
[ERROR] Unresolveable build extension: Plugin dev.galasa:galasa-maven-plugin:0.9.0-SNAPSHOT or one of its dependencies could not be resolved: Could not find artifact dev.galasa:galasa-maven-plugin:jar:0.9.0-SNAPSHOT -> [Help 2]
[ERROR] Unknown packaging: galasa-obr @ line 7, column 13
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
[ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/PluginManagerException
The terminal process terminated with exit code: 1
Terminal will be reused by tasks, press any key to close it.
```
| 1.0 | The vscode workspace OBR is not taking the custom maven repository into consideration - The workspace obr build is failing because it is not taking any notice of the Java Maven repository or the Galasa local repository settings, I had mine set to /Users/mikebyls/git/galasa/m2/repository, I would have expected to see a --settings /Users/mikebyls/git/galasa/m2/settings.xml from the Java Maven extension:-
```
> Executing task in folder dev.galasa.zos.manager: mvn install -f /Users/mikebyls/.vscode/extensions/galasa.galasa-plugin-0.8.2/galasa-workspace/obr <
[INFO] Scanning for projects...
[WARNING] The POM for dev.galasa:galasa-maven-plugin:jar:0.9.0-SNAPSHOT is missing, no dependency information available
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[ERROR] Unresolveable build extension: Plugin dev.galasa:galasa-maven-plugin:0.9.0-SNAPSHOT or one of its dependencies could not be resolved: Could not find artifact dev.galasa:galasa-maven-plugin:jar:0.9.0-SNAPSHOT @
[ERROR] Unknown packaging: galasa-obr @ line 7, column 13
@
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR] The project dev.galasa:vscode.workspace.obr:0.9.0-SNAPSHOT (/Users/mikebyls/.vscode/extensions/galasa.galasa-plugin-0.8.2/galasa-workspace/obr/pom.xml) has 2 errors
[ERROR] Unresolveable build extension: Plugin dev.galasa:galasa-maven-plugin:0.9.0-SNAPSHOT or one of its dependencies could not be resolved: Could not find artifact dev.galasa:galasa-maven-plugin:jar:0.9.0-SNAPSHOT -> [Help 2]
[ERROR] Unknown packaging: galasa-obr @ line 7, column 13
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
[ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/PluginManagerException
The terminal process terminated with exit code: 1
Terminal will be reused by tasks, press any key to close it.
```
| non_priority | the vscode workspace obr is not taking the custom maven repository into consideration the workspace obr build is failing because it is not taking any notice of the java maven repository or the galasa local repository settings i had mine set to users mikebyls git galasa repository i would have expected to see a settings users mikebyls git galasa settings xml from the java maven extension executing task in folder dev galasa zos manager mvn install f users mikebyls vscode extensions galasa galasa plugin galasa workspace obr scanning for projects the pom for dev galasa galasa maven plugin jar snapshot is missing no dependency information available some problems were encountered while processing the poms unresolveable build extension plugin dev galasa galasa maven plugin snapshot or one of its dependencies could not be resolved could not find artifact dev galasa galasa maven plugin jar snapshot unknown packaging galasa obr line column the build could not read project the project dev galasa vscode workspace obr snapshot users mikebyls vscode extensions galasa galasa plugin galasa workspace obr pom xml has errors unresolveable build extension plugin dev galasa galasa maven plugin snapshot or one of its dependencies could not be resolved could not find artifact dev galasa galasa maven plugin jar snapshot unknown packaging galasa obr line column to see the full stack trace of the errors re run maven with the e switch re run maven using the x switch to enable full debug logging for more information about the errors and possible solutions please read the following articles the terminal process terminated with exit code terminal will be reused by tasks press any key to close it | 0 |
90,161 | 18,068,307,588 | IssuesEvent | 2021-09-20 22:00:11 | PyTorchLightning/pytorch-lightning | https://api.github.com/repos/PyTorchLightning/pytorch-lightning | closed | Deprecate `LightningLoggerBase.close` | enhancement good first issue let's do it! refactors / code health logger deprecation | ## Proposed refactoring or deprecation
<!-- A clear and concise description of the code improvement -->
### Motivation
This is a follow up to https://github.com/PyTorchLightning/pytorch-lightning/discussions/9004#discussioncomment-1212966
and
https://github.com/PyTorchLightning/pytorch-lightning/issues/9037
The base logger API has `close` defined
https://github.com/PyTorchLightning/pytorch-lightning/blob/089ae9b3e82ddc31942e315294e31e48c0a899db/pytorch_lightning/loggers/base.py#L312-L314
This is only implemented on https://github.com/PyTorchLightning/pytorch-lightning/blob/089ae9b3e82ddc31942e315294e31e48c0a899db/pytorch_lightning/loggers/test_tube.py#L197-L204
Given the test tube logger has since been deprecated, we can also deprecate this method off the base API as it's very unclear what the difference is between save/close/finalize currently.
This function is never called by the Trainer either, so deprecating it from the base logger API has minimal changes for users.
<!-- Please outline the motivation for the proposal. If this is related to another GitHub issue, please link here too -->
### Pitch
- Deprecate `close` off the base API in v1.5
- Remove it from the API in v1.7
<!-- A clear and concise description of what you want to happen. -->
### Additional context
<!-- Add any other context or screenshots here. -->
______________________________________________________________________
#### If you enjoy Lightning, check out our other projects! ⚡
<sub>
- [**Metrics**](https://github.com/PyTorchLightning/metrics): Machine learning metrics for distributed, scalable PyTorch applications.
- [**Flash**](https://github.com/PyTorchLightning/lightning-flash): The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning
- [**Bolts**](https://github.com/PyTorchLightning/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch
- [**Lightning Transformers**](https://github.com/PyTorchLightning/lightning-transformers): Flexible interface for high performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
</sub>
| 1.0 | Deprecate `LightningLoggerBase.close` - ## Proposed refactoring or deprecation
<!-- A clear and concise description of the code improvement -->
### Motivation
This is a follow up to https://github.com/PyTorchLightning/pytorch-lightning/discussions/9004#discussioncomment-1212966
and
https://github.com/PyTorchLightning/pytorch-lightning/issues/9037
The base logger API has `close` defined
https://github.com/PyTorchLightning/pytorch-lightning/blob/089ae9b3e82ddc31942e315294e31e48c0a899db/pytorch_lightning/loggers/base.py#L312-L314
This is only implemented on https://github.com/PyTorchLightning/pytorch-lightning/blob/089ae9b3e82ddc31942e315294e31e48c0a899db/pytorch_lightning/loggers/test_tube.py#L197-L204
Given that the test tube logger has since been deprecated, we can also deprecate this method from the base API, as it is currently very unclear what the difference between save/close/finalize is.
This function is never called by the Trainer either, so deprecating it from the base logger API has minimal impact on users.
<!-- Please outline the motivation for the proposal. If this is related to another GitHub issue, please link here too -->
### Pitch
- Deprecate `close` from the base API in v1.5
- Remove it from the API in v1.7
<!-- A clear and concise description of what you want to happen. -->
### Additional context
<!-- Add any other context or screenshots here. -->
______________________________________________________________________
#### If you enjoy Lightning, check out our other projects! ⚡
<sub>
- [**Metrics**](https://github.com/PyTorchLightning/metrics): Machine learning metrics for distributed, scalable PyTorch applications.
- [**Flash**](https://github.com/PyTorchLightning/lightning-flash): The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning
- [**Bolts**](https://github.com/PyTorchLightning/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch
- [**Lightning Transformers**](https://github.com/PyTorchLightning/lightning-transformers): Flexible interface for high performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
</sub>
| non_priority | deprecate lightningloggerbase close proposed refactoring or deprecation motivation this is a follow up to and the base logger api has close defined this is only implemented on given the test tube logger has since been deprecated we can also deprecate this method off the base api as it s very unclear what the difference is between save close finalize currently this function is never called by the trainer either so deprecating it from the base logger api has minimal changes for users pitch deprecate close off the base api in remove it from the api in additional context if you enjoy lightning check out our other projects ⚡ machine learning metrics for distributed scalable pytorch applications the fastest way to get a lightning baseline a collection of tasks for fast prototyping baselining finetuning and solving problems with deep learning pretrained sota deep learning models callbacks and more for research and production with pytorch lightning and pytorch flexible interface for high performance research using sota transformers leveraging pytorch lightning transformers and hydra | 0 |
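The two-release deprecation path proposed in the record above (warn in v1.5, remove in v1.7) is usually implemented with a shim that warns and delegates to the surviving API. A minimal Python sketch — the class below is an illustrative stand-in, not the real `LightningLoggerBase`:

```python
import warnings


class LightningLoggerBase:
    """Illustrative stand-in for the base logger API (not the real class)."""

    def save(self) -> None:
        """Persist any buffered log data."""

    def finalize(self, status: str) -> None:
        """Do any cleanup needed when training ends."""
        self.save()

    def close(self) -> None:
        """Deprecated alias kept for one release cycle before removal."""
        warnings.warn(
            "`LightningLoggerBase.close` is deprecated in v1.5 and will be "
            "removed in v1.7. Use `save` or `finalize` instead.",
            DeprecationWarning,
        )
        self.save()
```

Callers keep working during the deprecation window, but they see a `DeprecationWarning` pointing at the methods that remain.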
165,460 | 20,591,868,084 | IssuesEvent | 2022-03-05 00:25:16 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Question: are APIs in System.Security.Cryptography.Cng and System.Security.Cryptography.Pkcs available in Mono? | question area-System.Security | Hi, we've called APIs in those two packages in net5.0 code path and they worked correctly.
I'm wondering whether these APIs also work when called on Mono from non-Windows platforms?
That is:
1. Does the full framework code path of those APIs in the two packages depend on any Windows platform-specific APIs?
2. If all APIs are supported on Mono, may I know if all tests are added for Mono?
Thanks! | True | Question: are APIs in System.Security.Cryptography.Cng and System.Security.Cryptography.Pkcs available in Mono? - Hi, we've called APIs in those two packages in net5.0 code path and they worked correctly.
I'm wondering whether these APIs also work when called on Mono from non-Windows platforms?
That is:
1. Does the full framework code path of those APIs in the two packages depend on any Windows platform-specific APIs?
2. If all APIs are supported on Mono, may I know if all tests are added for Mono?
Thanks! | non_priority | question are apis in system security cryptography cng and system security cryptography pkcs available in mono hi we ve called apis in those two packages in code path and they worked correctly i m wondering if the apis work if calling them in mono from non windows platforms i think that is does the full framework code path of those apis in the two packages depend on any windows platform specific apis if all apis are supported on mono may i know if all tests are add for mono thanks | 0 |
176,368 | 14,579,717,194 | IssuesEvent | 2020-12-18 07:53:22 | qudgns5129/pill_classification | https://api.github.com/repos/qudgns5129/pill_classification | closed | Problem definition | documentation | Problem background: In a large hospital, when a patient's condition improves and the prescription changes, medication that was dispensed but never administered is collected back. Pharmacists then have to re-sort the hundreds of kinds of recovered pills.
Problem definition: predict the drug name from the color information of the image, with no character recognition -> the aim is a model that works like human intuition (more intuitive)
< Model structure >
detection + classification
Preprocessing step, or a layer inside the model: detect the pills in the image
INPUT: pixel values of the image's 3 RGB channels
OUTPUT: drug name
The case of multiple OUTPUTs is considered -> multi-label classification
Work order: ① search comparison papers ② rethink the preprocessing method ③ apply the antialiased-cnn model from the Adobe researcher
Adobe model link: https://github.com/adobe/antialiased-cnns | 1.0 | Problem definition - Problem background: In a large hospital, when a patient's condition improves and the prescription changes, medication that was dispensed but never administered is collected back. Pharmacists then have to re-sort the hundreds of kinds of recovered pills.
Problem definition: predict the drug name from the color information of the image, with no character recognition -> the aim is a model that works like human intuition (more intuitive)
< Model structure >
detection + classification
Preprocessing step, or a layer inside the model: detect the pills in the image
INPUT: pixel values of the image's 3 RGB channels
OUTPUT: drug name
The case of multiple OUTPUTs is considered -> multi-label classification
Work order: ① search comparison papers ② rethink the preprocessing method ③ apply the antialiased-cnn model from the Adobe researcher
Adobe model link: https://github.com/adobe/antialiased-cnns | non_priority | problem definition problem background in a large hospital when a patient s condition improves and the prescription changes medication that was dispensed but never administered is collected back pharmacists then have to re sort the hundreds of kinds of recovered pills problem definition predict the drug name from the color information of the image with no character recognition the aim is a model that works like human intuition more intuitive model structure detection classification preprocessing step or a layer inside the model detect the pills in the image input pixel values of the image s rgb channels output drug name the case of multiple outputs is considered multi label classification work order search comparison papers rethink the preprocessing method apply the antialiased cnn model from the adobe researcher adobe model link | 0
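Since one image can contain several pills, the classification head described in the record above is multi-label: each class gets an independent sigmoid score instead of competing in a softmax, so any number of drug names can be predicted at once. A minimal sketch — the drug names and logits below are made up for illustration:

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def multi_label_predict(logits, class_names, threshold=0.5):
    """Return every class whose independent sigmoid score clears the threshold.

    Unlike softmax classification, each class is scored on its own, so an
    image containing several pills can yield several drug names.
    """
    return [name for logit, name in zip(logits, class_names)
            if sigmoid(logit) >= threshold]


# Hypothetical drug names and model outputs, for illustration only.
names = ["tylenol", "aspirin", "ibuprofen"]
print(multi_label_predict([2.1, -1.3, 0.4], names))  # → ['tylenol', 'ibuprofen']
```

A real model would produce the logits from the detected pill crops; the thresholding step is the same.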
221,307 | 17,011,976,289 | IssuesEvent | 2021-07-02 06:36:19 | a8119037/isfw2 | https://api.github.com/repos/a8119037/isfw2 | closed | Discussion of the group-assignment app's features | documentation | # Idea brainstorming
We could even build the model based on these ideas
## Groups can be formed with conditions attached
- Split while setting a gender balance
- Set the group size and split randomly
- Assign specific people (e.g., those in charge of Chapter 4) one to each group, and distribute the others randomly
## Aside
Maybe this discussion should be moved to GitHub Discussions | 1.0 | Discussion of the group-assignment app's features - # Idea brainstorming
We could even build the model based on these ideas
## Groups can be formed with conditions attached
- Split while setting a gender balance
- Set the group size and split randomly
- Assign specific people (e.g., those in charge of Chapter 4) one to each group, and distribute the others randomly
## Aside
Maybe this discussion should be moved to GitHub Discussions | non_priority | discussion of the group assignment app s features idea brainstorming we could even build the model based on these ideas groups can be formed with conditions attached split while setting a gender balance set the group size and split randomly assign specific people to each group and distribute the others randomly aside maybe this discussion should be moved to github discussions | 0
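The third splitting rule listed in the record above — pin designated people (e.g., the Chapter 4 leads) one per group, then distribute everyone else randomly — can be sketched as follows; the roster names and the seed are illustrative:

```python
import random


def split_into_groups(people, leads, seed=None):
    """One designated lead per group; deal the remaining people out randomly."""
    rng = random.Random(seed)
    groups = [[lead] for lead in leads]           # one group per lead
    rest = [p for p in people if p not in leads]
    rng.shuffle(rest)                             # random assignment of the rest
    for i, person in enumerate(rest):
        groups[i % len(groups)].append(person)
    return groups


# Illustrative roster: "A" and "B" are the designated chapter leads.
print(split_into_groups(list("ABCDEFGH"), leads=["A", "B"], seed=0))
```

The gender-balance and fixed-group-size rules are variations of the same deal-out loop with an extra constraint on which group each person may join.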
32,298 | 4,761,189,095 | IssuesEvent | 2016-10-25 07:17:09 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | MapQueryEngineImpl_queryLocalPartition_resultSizeLimitTest.checkResultSize_limitExceeded & checkResultSize_limitNotExceeded | Team: Core Type: Test-Failure | ```
java.lang.AssertionError: Expected exception: com.hazelcast.map.QueryResultSizeExceededException
at org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:32)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:88)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.lang.Thread.run(Thread.java:745)
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-nightly/com.hazelcast$hazelcast/1116/testReport/junit/com.hazelcast.map.impl.query/MapQueryEngineImpl_queryLocalPartition_resultSizeLimitTest/checkResultSize_limitExceeded/
```
This exception has been thrown to prevent an OOME on this Hazelcast instance. An OOME might occur when a query collects large data sets from the whole cluster, e.g. by calling IMap.values(), IMap.keySet() or IMap.entrySet(). See GroupProperty.QUERY_RESULT_SIZE_LIMIT for further details. The configured query result size limit is 104223 items. Result size exceeded in local pre-check.
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-nightly/com.hazelcast$hazelcast/1116/testReport/junit/com.hazelcast.map.impl.query/MapQueryEngineImpl_queryLocalPartitions_resultSizeLimitTest/checkResultSize_limitNotExceeded/ | 1.0 | MapQueryEngineImpl_queryLocalPartition_resultSizeLimitTest.checkResultSize_limitExceeded & checkResultSize_limitNotExceeded - ```
java.lang.AssertionError: Expected exception: com.hazelcast.map.QueryResultSizeExceededException
at org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:32)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:88)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.lang.Thread.run(Thread.java:745)
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-nightly/com.hazelcast$hazelcast/1116/testReport/junit/com.hazelcast.map.impl.query/MapQueryEngineImpl_queryLocalPartition_resultSizeLimitTest/checkResultSize_limitExceeded/
```
This exception has been thrown to prevent an OOME on this Hazelcast instance. An OOME might occur when a query collects large data sets from the whole cluster, e.g. by calling IMap.values(), IMap.keySet() or IMap.entrySet(). See GroupProperty.QUERY_RESULT_SIZE_LIMIT for further details. The configured query result size limit is 104223 items. Result size exceeded in local pre-check.
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-nightly/com.hazelcast$hazelcast/1116/testReport/junit/com.hazelcast.map.impl.query/MapQueryEngineImpl_queryLocalPartitions_resultSizeLimitTest/checkResultSize_limitNotExceeded/ | non_priority | mapqueryengineimpl querylocalpartition resultsizelimittest checkresultsize limitexceeded checkresultsize limitnotexceeded java lang assertionerror expected exception com hazelcast map queryresultsizeexceededexception at org junit internal runners statements expectexception evaluate expectexception java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at java util concurrent futuretask run futuretask java at java lang thread run thread java this exception has been thrown to prevent an oome on this hazelcast instance an oome might occur when a query collects large data sets from the whole cluster e g by calling imap values imap keyset or imap entryset see groupproperty query result size limit for further details the configured query result size limit is items result size exceeded in local pre check | 0 |
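The `QueryResultSizeExceededException` in the logs above is a guard against an OOME while materializing a huge query result. The idea behind `GroupProperty.QUERY_RESULT_SIZE_LIMIT` can be sketched in Python — a toy model of the fail-fast check, not Hazelcast's actual implementation:

```python
class QueryResultSizeExceededError(Exception):
    """Raised instead of letting an oversized result set exhaust memory."""


def collect_with_limit(items, limit):
    """Materialize query results, failing fast once the configured limit is hit."""
    result = []
    for item in items:
        if len(result) >= limit:
            raise QueryResultSizeExceededError(
                f"result size exceeded the configured limit of {limit} items"
            )
        result.append(item)
    return result
```

The test in the record asserts exactly this behavior from both sides: the exception must fire when the limit is exceeded and must not fire when it is not.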
117,938 | 9,965,874,866 | IssuesEvent | 2019-07-08 09:46:31 | MoArtis/ResidentEvilSeamlessHdProject | https://api.github.com/repos/MoArtis/ResidentEvilSeamlessHdProject | closed | R21801 mask problem | 🎮 to be tested 💥 broken texture 🙎♀️🏙 Resident Evil 3 | It's one of the rooms where we can't wildcard the mask.

The mask can have 2 different tlut : 338ef6c05709e506 or 91fbb229c7fa0f59. That is also the case for R21807. Of course the masks are unique for each room.
I guess you have updated Dolphin on the Gdrive, so I should have the latest version. | 1.0 | R21801 mask problem - It's one of the rooms where we can't wildcard the mask.

The mask can have 2 different tlut : 338ef6c05709e506 or 91fbb229c7fa0f59. That is also the case for R21807. Of course the masks are unique for each room.
I guess you have updated Dolphin on the Gdrive so I should have the latest version. | non_priority | mask problem it s one of the rooms where we cant wildcard the mask the mask can have different tlut or that is also the case for of course the masks are unique for each room i guess you have updated dolphin on the gdrive so i should have the latest version | 0 |
238,327 | 18,238,908,263 | IssuesEvent | 2021-10-01 10:24:41 | appsmithorg/appsmith-docs | https://api.github.com/repos/appsmithorg/appsmith-docs | opened | How to group multiple widgets on Appsmith | documentation good first issue hacktoberfest widget | Please make sure that your documentation has the following -
1. An Example app that shows the required widget.
2. Add screenshots for a better understanding of the topic.
3. Add code snippets wherever necessary.
For documentation guidelines, click [here](https://github.com/appsmithorg/appsmith/blob/release/contributions/docs/CONTRIBUTING.md). | 1.0 | How to group multiple widgets on Appsmith - Please make sure that your documentation has the following -
1. An Example app that shows the required widget.
2. Add screenshots for a better understanding of the topic.
3. Add code snippets wherever necessary.
For documentation guidelines, click [here.](https://github.com/appsmithorg/appsmith/blob/release/contributions/docs/CONTRIBUTING.md) | non_priority | how to group multiple widgets on appsmith please make sure that your documentation has the following an example app that shows the required widget add screenshots for a better understanding of the topic add code snippets wherever necessary for documentation guidelines click | 0 |
467 | 2,534,107,957 | IssuesEvent | 2015-01-24 16:14:34 | numixproject/numix-icon-theme-circle | https://api.github.com/repos/numixproject/numix-icon-theme-circle | closed | Icon for UberWriter | hardcoded | Name=UberWriter
Icon=/opt/extras.ubuntu.com/uberwriter/share/uberwriter/media/uberwriter.svg

| 1.0 | Icon for UberWriter - Name=UberWriter
Icon=/opt/extras.ubuntu.com/uberwriter/share/uberwriter/media/uberwriter.svg

| non_priority | icon for uberwriter name uberwriter icon opt extras ubuntu com uberwriter share uberwriter media uberwriter svg | 0 |
110,767 | 16,988,568,352 | IssuesEvent | 2021-06-30 17:13:14 | MicrosoftDocs/windows-itpro-docs | https://api.github.com/repos/MicrosoftDocs/windows-itpro-docs | closed | [Feedback & Questions] BitLocker: How to enable Network Unlock (https://docs.microsoft.com/en-us/windows/security/information-protection/bitlocker/bitlocker-how-to-enable-network-unlock) | bitlocker security | Hello Bitlocker Experts,
I'm new to BitLocker and I want to gain a deeper understanding of how "Network Unlock" works.
I referred to:
1) BitLocker: How to enable Network Unlock
2) [MS-NKPU]: Network Key Protector Unlock Protocol
and drew a very simple diagram to try to get a clearer understanding:

It would be nice if Microsoft could provide an official diagram depicting the whole unlock process in detail.
Questions:
1. Are the "Network Key" in the "How-to" and the "Client Key" in the MS-NKPU both refer to the same key?
2. It seems the Network Unlock Provider simply decrypts the Client / Network Keys and the Session Keys sent from clients using its private key, encrypts the Client / Network Keys using the Session Keys, and then sends the encrypted Client / Network Keys back to clients via DHCP replies. The Network Unlock Provider returns the very same Client / Network Keys; the only difference is that they are now encrypted with the Session Keys sent by the clients rather than with the private keys. Am I correct?
3. If my understanding is correct, then it seems the primary purpose of "Network Unlock" is to check whether the user is connected to a trusted network, and if so, BitLocker will automatically unlock the locked drives. Is that correct?
4. When using BitLocker Network Unlock, all key materials for decrypting the data are stored on clients' machines only, including: 1) the Client / Network Keys; 2) the IK in the TPM; 3) the VMK; and 4) the FVEK.
Looking forward to your reply.
Thank you! | True | [Feedback & Questions] BitLocker: How to enable Network Unlock (https://docs.microsoft.com/en-us/windows/security/information-protection/bitlocker/bitlocker-how-to-enable-network-unlock) - Hello Bitlocker Experts,
I'm new to BitLocker and I want to gain a deeper understanding of how "Network Unlock" works.
I referred to:
1) BitLocker: How to enable Network Unlock
2) [MS-NKPU]: Network Key Protector Unlock Protocol
and drew a very simple diagram to try to get a clearer understanding:

It would be nice if Microsoft could provide an official diagram depicting the whole unlock process in detail.
Questions:
1. Are the "Network Key" in the "How-to" and the "Client Key" in the MS-NKPU both refer to the same key?
2. It seems the Network Unlock Provider simply decrypts the Client / Network Keys and the Session Keys sent from clients using its private key, encrypts the Client / Network Keys using the Session Keys, and then sends the encrypted Client / Network Keys back to clients via DHCP replies. The Network Unlock Provider returns the very same Client / Network Keys; the only difference is that they are now encrypted with the Session Keys sent by the clients rather than with the private keys. Am I correct?
3. If my understanding is correct, then it seems the primary purpose of "Network Unlock" is to check whether the user is connected to a trusted network, and if so, BitLocker will automatically unlock the locked drives. Is that correct?
4. When using BitLocker Network Unlock, all key materials for decrypting the data are stored on clients' machines only, including: 1) the Client / Network Keys; 2) the IK in the TPM; 3) the VMK; and 4) the FVEK.
Looking forward to your reply.
Thank you! | non_priority | bitlocker how to enable network unlock hello bitlocker experts i m new to bitlocker and i wanna have a deeper understanding on how the network unlock work made reference to the bitlocker how to enable network unlock network key protector unlock protocol and draw a very simple diagram to try to get a more clear understanding it would be nice if microsoft could provide an official diagram to depict the whole unlock process in details questions are the network key in the how to and the client key in the ms nkpu both refer to the same key seems the network unlock provider would simply decrypt the client network keys the session keys sent from clients by using private keys encrypt the client network keys using the session keys and then send the encrypted client network keys back to clients via dhcp replies the network unlock provider would return the very same client network keys the only difference is it will be encrypted by the session keys sent by clients rather than using private keys am i correct if my understand is correct then seems the primary purpose for the network unlock is to check if user connected to a trusted network and if so bitlocker will automatically unlock the locked drives is it correct when using the bitlocker network unlock all key materials for decrypting the data are stored in clients machines only including the client network keys ik in tpm vmk and fvek looking forward for your reply thank you | 0 |
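Question 2 in the record above — the server returning the *same* network key, re-encrypted under the client's session key — can be modeled end to end. The sketch below is a toy: XOR with a hash-derived keystream stands in for the real RSA/AES of MS-NKPU and must not be used as actual cryptography, but it shows why the client's session key alone lets it recover the network key from the DHCP reply:

```python
import hashlib
import os


def xor_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a hash-derived keystream. NOT real crypto."""
    stream = hashlib.sha256(key).digest()
    stream = (stream * (len(data) // len(stream) + 1))[: len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))


# Client side: the protector holds a network key plus a fresh session key.
network_key = os.urandom(32)
session_key = os.urandom(32)

# The request is protected so only the server can read it (modeled here as a
# secret only the "server" knows, standing in for its private key).
server_secret = os.urandom(32)
request = xor_encrypt(server_secret, network_key + session_key)

# Server side: recover both keys, then send back the *same* network key,
# re-encrypted under the client's session key (the DHCP reply).
recovered = xor_encrypt(server_secret, request)
recovered_network_key, recovered_session_key = recovered[:32], recovered[32:]
reply = xor_encrypt(recovered_session_key, recovered_network_key)

# Client side: its session key is enough to recover the network key.
assert xor_encrypt(session_key, reply) == network_key
```

This matches the reading in question 2: the server never changes the key material, it only changes what the key is wrapped with, so a client on the trusted network can unwrap it and unlock automatically.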
44,515 | 5,842,080,295 | IssuesEvent | 2017-05-10 04:07:58 | openMF/community-app | https://api.github.com/repos/openMF/community-app | closed | Side nav is not scrolling with the page in firefox browser | design gsoc p2 reskin | As you scroll down the page in community-app, the side-nav remains on top, which should not happen.

This could be observed in the following link:
(https://demo.openmf.org/reskin/#/users/)
| 1.0 | Side nav is not scrolling with the page in firefox browser - As you scroll down the page in community-app, the side-nav remains on top, which should not happen.

This could be observed in the following link:
(https://demo.openmf.org/reskin/#/users/)
| non_priority | side nav is not scrolling with the page in firefox browser as you scroll down the page in community app the side nav remains on top which should not happen this could be observed in the following link | 0 |
267,914 | 20,250,949,355 | IssuesEvent | 2022-02-14 17:49:20 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | NOD: Update tech documentation | documentation vsa vsa-claims-appeals NOD | ## Description
- Document what a new team member would want to know about this app
- What were some choices that were made that might require explanation?
- What are some things we'd make better if we had more time?
- If something went wrong how might we investigate?
- Is there anything we should update in our product outline?
- What are good staging test users?
## Tasks
- [ ] Write engineering documentation for the Notice of Disagreement form, including the key points above
- [ ] Get team members to review
## Definition of done
- [ ] All tasks complete | 1.0 | NOD: Update tech documentation - ## Description
- Document what a new team member would want to know about this app
- What were some choices that were made that might require explanation?
- What are some things we'd make better if we had more time?
- If something went wrong how might we investigate?
- Is there anything we should update in our product outline?
- What are good staging test users?
## Tasks
- [ ] Write engineering documentation for the Notice of Disagreement form, including the key points above
- [ ] Get team members to review
## Definition of done
- [ ] All tasks complete | non_priority | nod update tech documentation description document what a new team member would want to know about this app what were some choices that were made that might require explanation what are some things we d make better if we had more time if something went wrong how might we investigate is there anything we should update in our product outline what are good staging test users tasks write engineering documentation for the notice of disagreement form including the key points above get team members to review definition of done all tasks complete | 0 |
278,807 | 30,702,405,196 | IssuesEvent | 2023-07-27 01:27:21 | Trinadh465/linux-4.1.15_CVE-2022-45934 | https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2022-45934 | closed | CVE-2016-9685 (Medium) detected in linuxlinux-4.6 - autoclosed | Mend: dependency security vulnerability | ## CVE-2016-9685 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2022-45934/commit/984ffa0a89a9fdf8f38e12acd409f6c51477abd3">984ffa0a89a9fdf8f38e12acd409f6c51477abd3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/xfs/xfs_attr_list.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Multiple memory leaks in error paths in fs/xfs/xfs_attr_list.c in the Linux kernel before 4.5.1 allow local users to cause a denial of service (memory consumption) via crafted XFS filesystem operations.
<p>Publish Date: 2016-12-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-9685>CVE-2016-9685</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-9685">https://nvd.nist.gov/vuln/detail/CVE-2016-9685</a></p>
<p>Release Date: 2016-12-28</p>
<p>Fix Resolution: 4.5.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-9685 (Medium) detected in linuxlinux-4.6 - autoclosed - ## CVE-2016-9685 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2022-45934/commit/984ffa0a89a9fdf8f38e12acd409f6c51477abd3">984ffa0a89a9fdf8f38e12acd409f6c51477abd3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/xfs/xfs_attr_list.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Multiple memory leaks in error paths in fs/xfs/xfs_attr_list.c in the Linux kernel before 4.5.1 allow local users to cause a denial of service (memory consumption) via crafted XFS filesystem operations.
<p>Publish Date: 2016-12-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-9685>CVE-2016-9685</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-9685">https://nvd.nist.gov/vuln/detail/CVE-2016-9685</a></p>
<p>Release Date: 2016-12-28</p>
<p>Fix Resolution: 4.5.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files fs xfs xfs attr list c vulnerability details multiple memory leaks in error paths in fs xfs xfs attr list c in the linux kernel before allow local users to cause a denial of service memory consumption via crafted xfs filesystem operations publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
66,815 | 8,971,422,043 | IssuesEvent | 2019-01-29 15:53:12 | Samsung/Universum | https://api.github.com/repos/Samsung/Universum | opened | Add user guide | documentation | Originally created on Wed, 26 Dec 2018 13:58:39 +0200
Add simple step-by-step guide as an example of usage to documentation. | 1.0 | Add user guide - Originally created on Wed, 26 Dec 2018 13:58:39 +0200
Add simple step-by-step guide as an example of usage to documentation. | non_priority | add user guide originally created on wed dec add simple step by step guide as an example of usage to documentation | 0 |
32,097 | 8,794,163,982 | IssuesEvent | 2018-12-21 23:39:13 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | reopened | Build from source issue - Bazel test fails | type:build/install | ### System information
- **OS Platform and Distribution:** Linux Ubuntu 18.04
- **TensorFlow version:** git cloned from https://github.com/tensorflow/tensorflow (master)
- **Python version:** 3.6+ (virtual environment created with Anaconda 5.3.1)
- **Bazel version:** 0.20.0
- **GCC/Compiler version:** gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
- **Hardware:** built-in Intel GPU, 8 GB memory
- bazel test -c opt -- //tensorflow/... -//tensorflow/compiler/... -//tensorflow/lite/...
information using environment capture script:
[tf_env.txt](https://github.com/tensorflow/tensorflow/files/2692606/tf_env.txt)
### Description of the problem
When I try to compile TF from source as described in the documentation, bazel test fails.
[tf_build_errors.txt](https://github.com/tensorflow/tensorflow/files/2692548/tf_build_errors.txt)
### Source code / logs
I face the problem just following the documentation step by step up to:
bazel test -c opt -- //tensorflow/... -//tensorflow/compiler/... -//tensorflow/lite/...
| 1.0 | Build from source issue - Bazel test fails - ### System information
- **OS Platform and Distribution:** Linux Ubuntu 18.04
- **TensorFlow version:** git cloned from https://github.com/tensorflow/tensorflow (master)
- **Python version:** 3.6+ (virtual environment created with Anaconda 5.3.1)
- **Bazel version:** 0.20.0
- **GCC/Compiler version:** gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
- **Hardware:** built-in Intel GPU, 8 GB memory
- bazel test -c opt -- //tensorflow/... -//tensorflow/compiler/... -//tensorflow/lite/...
information using environment capture script:
[tf_env.txt](https://github.com/tensorflow/tensorflow/files/2692606/tf_env.txt)
### Description of the problem
When I try to compile TF from source as described in the documentation, bazel test fails.
[tf_build_errors.txt](https://github.com/tensorflow/tensorflow/files/2692548/tf_build_errors.txt)
### Source code / logs
I face the problem just following the documentation step by step up to:
bazel test -c opt -- //tensorflow/... -//tensorflow/compiler/... -//tensorflow/lite/...
| non_priority | build from source issue bezel test fails system information os platform and distribution linux ubuntu tensorflow version git cloned from master python version virtual environment created with anaconda bazel version gcc compiler version gcc ubuntu built in intelgpu and memory bazel test c opt tensorflow tensorflow compiler tensorflow lite information using environment capture script description of the problem when i try to compile tf from source as described in the documentation bazel test fails source code logs i face the problem just following the documentation step by step up to bazel test c opt tensorflow tensorflow compiler tensorflow lite | 0 |
80,761 | 10,056,128,615 | IssuesEvent | 2019-07-22 08:25:59 | spring-projects/spring-boot | https://api.github.com/repos/spring-projects/spring-boot | closed | JavaVersion does not cover all available versions of Java | status: pending-design-work type: bug | We're missing Java 11 in 2.1.x. We also need to decide what to do about non-LTS versions (12 and 13) in 2.2. | 1.0 | JavaVersion does not cover all available versions of Java - We're missing Java 11 in 2.1.x. We also need to decide what to do about non-LTS versions (12 and 13) in 2.2. | non_priority | javaversion does not cover all available versions of java we re missing java in x we also need to decide what to do about non lts versions and in | 0 |
63,823 | 6,885,309,118 | IssuesEvent | 2017-11-21 15:45:04 | appium/appium | https://api.github.com/repos/appium/appium | closed | [XCUITest] Failed to create WDA session. Retrying... | NeedsInfo NotABug XCUITest |
Hi All,
I am getting a similar error when trying to launch the app, even though I have not specified the bundle ID. I am running this on a Bitrise Mac box.
Also, I have attached the iOS logs
https://gist.github.com/zusmani-mbo-com/3dd2f3069ecfa0137bdbbc02f7fbedae
I have also looked at this issue, can not get it working still
https://github.com/appium/appium/issues/8373
* Appium version 1.7.1
* Desktop OS IOS:
* Mobile platform/version under test:
* emulator/simulator:
* Appium CLI :
| 1.0 | [XCUITest] Failed to create WDA session. Retrying... -
Hi All,
I am getting a similar error when trying to launch the app, even though I have not specified the bundle ID. I am running this on a Bitrise Mac box.
Also, I have attached the iOS logs
https://gist.github.com/zusmani-mbo-com/3dd2f3069ecfa0137bdbbc02f7fbedae
I have also looked at this issue, can not get it working still
https://github.com/appium/appium/issues/8373
* Appium version 1.7.1
* Desktop OS IOS:
* Mobile platform/version under test:
* emulator/simulator:
* Appium CLI :
| non_priority | failed to create wda session retrying hi all i am getting similar error when trying to launch the app even though i have not specified the bundle id i am running this on bitrise mac box also i have attached the ios logs i have also looked at this issue can not get it working still appium version desktop os ios mobile platform version under test emulator simulator appium cli | 0 |
276,040 | 20,966,158,238 | IssuesEvent | 2022-03-28 06:57:45 | deepnight/ldtk | https://api.github.com/repos/deepnight/ldtk | closed | 0.10.0: EntityReferenceInfos doesn't appear to exist after quicktype | bug documentation Json | In the latest 0.10.0 commit, the Json Schema file doesn't appear to output this type of definition. `EntityReferenceInfos`
There appears to only be one match found, which is in the description of the value field: (Image)

| 1.0 | 0.10.0: EntityReferenceInfos doesn't appear to exist after quicktype - In the latest 0.10.0 commit, the Json Schema file doesn't appear to output this type of definition. `EntityReferenceInfos`
There appears to only be one match found, which is in the description of the value field: (Image)

| non_priority | entityreferenceinfos doesn t appear to exist after quicktype in the latest commit the json schema file doesn t appear to output this type of definition entityreferenceinfos there appears to only be one match found which is in the description of the value field image | 0 |
43,369 | 12,977,553,131 | IssuesEvent | 2020-07-21 20:52:49 | kenferrara/atlasdb | https://api.github.com/repos/kenferrara/atlasdb | opened | CVE-2018-10237 (Medium) detected in guava-23.6.1-jre.jar, guava-23.6-jre.jar | security vulnerability | ## CVE-2018-10237 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>guava-23.6.1-jre.jar</b>, <b>guava-23.6-jre.jar</b></p></summary>
<p>
<details><summary><b>guava-23.6.1-jre.jar</b></p></summary>
<p>Guava is a suite of core and expanded libraries that include
utility classes, google's collections, io classes, and much
much more.</p>
<p>Library home page: <a href="https://github.com/google/guava">https://github.com/google/guava</a></p>
<p>Path to vulnerable library: /atlasdb/timelock-api/build/conjureCompiler/lib/guava-23.6.1-jre.jar</p>
<p>
Dependency Hierarchy:
- :x: **guava-23.6.1-jre.jar** (Vulnerable Library)
</details>
<details><summary><b>guava-23.6-jre.jar</b></p></summary>
<p>Guava is a suite of core and expanded libraries that include
utility classes, google's collections, io classes, and much
much more.</p>
<p>Library home page: <a href="https://github.com/google/guava">https://github.com/google/guava</a></p>
<p>Path to dependency file: /tmp/ws-scm/atlasdb/leader-election-api/build.gradle</p>
<p>Path to vulnerable library: 20200721203135_LZXOVU/downloadResource_NZFWTT/20200721204615/guava-23.6-jre.jar,/tmp/ws-ua_20200721203135_LZXOVU/downloadResource_NZFWTT/20200721204615/guava-23.6-jre.jar</p>
<p>
Dependency Hierarchy:
- :x: **guava-23.6-jre.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/kenferrara/atlasdb/commit/8c390fb371cd05cd59ff7d2cd4e18016807af529">8c390fb371cd05cd59ff7d2cd4e18016807af529</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Unbounded memory allocation in Google Guava 11.0 through 24.x before 24.1.1 allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker-provided data, because the AtomicDoubleArray class (when serialized with Java serialization) and the CompoundOrdering class (when serialized with GWT serialization) perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable.
<p>Publish Date: 2018-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-10237>CVE-2018-10237</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-10237">https://nvd.nist.gov/vuln/detail/CVE-2018-10237</a></p>
<p>Release Date: 2018-04-26</p>
<p>Fix Resolution: 24.1.1-jre, 24.1.1-android</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.google.guava","packageName":"guava","packageVersion":"23.6.1-jre","isTransitiveDependency":false,"dependencyTree":"com.google.guava:guava:23.6.1-jre","isMinimumFixVersionAvailable":true,"minimumFixVersion":"24.1.1-jre, 24.1.1-android"},{"packageType":"Java","groupId":"com.google.guava","packageName":"guava","packageVersion":"23.6-jre","isTransitiveDependency":false,"dependencyTree":"com.google.guava:guava:23.6-jre","isMinimumFixVersionAvailable":true,"minimumFixVersion":"24.1.1-jre, 24.1.1-android"}],"vulnerabilityIdentifier":"CVE-2018-10237","vulnerabilityDetails":"Unbounded memory allocation in Google Guava 11.0 through 24.x before 24.1.1 allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker-provided data, because the AtomicDoubleArray class (when serialized with Java serialization) and the CompoundOrdering class (when serialized with GWT serialization) perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-10237","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-10237 (Medium) detected in guava-23.6.1-jre.jar, guava-23.6-jre.jar - ## CVE-2018-10237 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>guava-23.6.1-jre.jar</b>, <b>guava-23.6-jre.jar</b></p></summary>
<p>
<details><summary><b>guava-23.6.1-jre.jar</b></p></summary>
<p>Guava is a suite of core and expanded libraries that include
utility classes, google's collections, io classes, and much
much more.</p>
<p>Library home page: <a href="https://github.com/google/guava">https://github.com/google/guava</a></p>
<p>Path to vulnerable library: /atlasdb/timelock-api/build/conjureCompiler/lib/guava-23.6.1-jre.jar</p>
<p>
Dependency Hierarchy:
- :x: **guava-23.6.1-jre.jar** (Vulnerable Library)
</details>
<details><summary><b>guava-23.6-jre.jar</b></p></summary>
<p>Guava is a suite of core and expanded libraries that include
utility classes, google's collections, io classes, and much
much more.</p>
<p>Library home page: <a href="https://github.com/google/guava">https://github.com/google/guava</a></p>
<p>Path to dependency file: /tmp/ws-scm/atlasdb/leader-election-api/build.gradle</p>
<p>Path to vulnerable library: 20200721203135_LZXOVU/downloadResource_NZFWTT/20200721204615/guava-23.6-jre.jar,/tmp/ws-ua_20200721203135_LZXOVU/downloadResource_NZFWTT/20200721204615/guava-23.6-jre.jar</p>
<p>
Dependency Hierarchy:
- :x: **guava-23.6-jre.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/kenferrara/atlasdb/commit/8c390fb371cd05cd59ff7d2cd4e18016807af529">8c390fb371cd05cd59ff7d2cd4e18016807af529</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Unbounded memory allocation in Google Guava 11.0 through 24.x before 24.1.1 allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker-provided data, because the AtomicDoubleArray class (when serialized with Java serialization) and the CompoundOrdering class (when serialized with GWT serialization) perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable.
<p>Publish Date: 2018-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-10237>CVE-2018-10237</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-10237">https://nvd.nist.gov/vuln/detail/CVE-2018-10237</a></p>
<p>Release Date: 2018-04-26</p>
<p>Fix Resolution: 24.1.1-jre, 24.1.1-android</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.google.guava","packageName":"guava","packageVersion":"23.6.1-jre","isTransitiveDependency":false,"dependencyTree":"com.google.guava:guava:23.6.1-jre","isMinimumFixVersionAvailable":true,"minimumFixVersion":"24.1.1-jre, 24.1.1-android"},{"packageType":"Java","groupId":"com.google.guava","packageName":"guava","packageVersion":"23.6-jre","isTransitiveDependency":false,"dependencyTree":"com.google.guava:guava:23.6-jre","isMinimumFixVersionAvailable":true,"minimumFixVersion":"24.1.1-jre, 24.1.1-android"}],"vulnerabilityIdentifier":"CVE-2018-10237","vulnerabilityDetails":"Unbounded memory allocation in Google Guava 11.0 through 24.x before 24.1.1 allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker-provided data, because the AtomicDoubleArray class (when serialized with Java serialization) and the CompoundOrdering class (when serialized with GWT serialization) perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-10237","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_priority | cve medium detected in guava jre jar guava jre jar cve medium severity vulnerability vulnerable libraries guava jre jar guava jre jar guava jre jar guava is a suite of core and expanded libraries that include utility classes google s collections io classes and much much more library home page a href path to vulnerable library atlasdb timelock api build conjurecompiler lib guava jre jar dependency hierarchy x guava jre jar vulnerable library guava jre jar guava is a suite of core and 
expanded libraries that include utility classes google s collections io classes and much much more library home page a href path to dependency file tmp ws scm atlasdb leader election api build gradle path to vulnerable library lzxovu downloadresource nzfwtt guava jre jar tmp ws ua lzxovu downloadresource nzfwtt guava jre jar dependency hierarchy x guava jre jar vulnerable library found in head commit a href vulnerability details unbounded memory allocation in google guava through x before allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker provided data because the atomicdoublearray class when serialized with java serialization and the compoundordering class when serialized with gwt serialization perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jre android isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails unbounded memory allocation in google guava through x before allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker provided data because the atomicdoublearray class when serialized with java serialization and the compoundordering class when serialized with gwt serialization perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable vulnerabilityurl | 0 |
58,611 | 11,899,794,065 | IssuesEvent | 2020-03-30 09:35:06 | linewalks/MDwalks-UI | https://api.github.com/repos/linewalks/MDwalks-UI | closed | Refactoring the BarChart isScroll case | Code clean | ## Description of or link to the feature
Code duplication was introduced while adding the scroll feature
## Description of the desired solution
Remove the code duplication and add test code
## Requirements
- [ ] Remove code duplication
- [ ] test code
## Test items
- [ ]
## After completion
A PR will be submitted
| 1.0 | Refactoring the BarChart isScroll case - ## Description of or link to the feature
Code duplication was introduced while adding the scroll feature
## Description of the desired solution
Remove the code duplication and add test code
## Requirements
- [ ] Remove code duplication
- [ ] test code
## Test items
- [ ]
## After completion
A PR will be submitted
| non_priority | refactoring the barchart isscroll case description of or link to the feature code duplication was introduced while adding the scroll feature description of the desired solution remove the code duplication and add test code requirements remove code duplication test code test items after completion a pr will be submitted | 0 |
25,286 | 6,648,523,989 | IssuesEvent | 2017-09-28 09:38:21 | Porucznik/Nexia-Home | https://api.github.com/repos/Porucznik/Nexia-Home | closed | Group and tag packages in archiso | code refactor | We need to clean our project a bit. Move nondefult packages to packages.x86_64, group them thematically, and tag gruops. | 1.0 | Group and tag packages in archiso - We need to clean our project a bit. Move nondefult packages to packages.x86_64, group them thematically, and tag gruops. | non_priority | group and tag packages in archiso we need to clean our project a bit move nondefult packages to packages group them thematically and tag gruops | 0 |
224,091 | 24,769,674,560 | IssuesEvent | 2022-10-23 01:06:17 | ChoeMinji/react-17.0.2 | https://api.github.com/repos/ChoeMinji/react-17.0.2 | opened | CVE-2022-37598 (High) detected in uglify-js-3.7.3.tgz, uglify-js-3.4.9.tgz | security vulnerability | ## CVE-2022-37598 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>uglify-js-3.7.3.tgz</b>, <b>uglify-js-3.4.9.tgz</b></p></summary>
<p>
<details><summary><b>uglify-js-3.7.3.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.7.3.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.7.3.tgz</a></p>
<p>Path to dependency file: /fixtures/fiber-debugger/package.json</p>
<p>Path to vulnerable library: /fixtures/fiber-debugger/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.11.tgz (Root Library)
- jest-20.0.4.tgz
- jest-cli-20.0.4.tgz
- istanbul-api-1.1.12.tgz
- istanbul-reports-1.1.1.tgz
- handlebars-4.5.3.tgz
- :x: **uglify-js-3.7.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>uglify-js-3.4.9.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.9.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.9.tgz</a></p>
<p>
Dependency Hierarchy:
- jest-24.9.0.tgz (Root Library)
- jest-cli-24.9.0.tgz
- core-24.9.0.tgz
- reporters-24.9.0.tgz
- istanbul-reports-2.2.6.tgz
- handlebars-4.5.1.tgz
- :x: **uglify-js-3.4.9.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/react-17.0.2/commit/4669645897ed4ebcd4ee037f4dabb509ed4754c7">4669645897ed4ebcd4ee037f4dabb509ed4754c7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function DEFNODE in ast.js in mishoo UglifyJS 3.13.2 via the name variable in ast.js.
<p>Publish Date: 2022-10-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37598>CVE-2022-37598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-20</p>
<p>Fix Resolution (uglify-js): 3.13.10</p>
<p>Direct dependency fix Resolution (react-scripts): 1.0.12</p><p>Fix Resolution (uglify-js): 3.13.10</p>
<p>Direct dependency fix Resolution (jest): 25.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-37598 (High) detected in uglify-js-3.7.3.tgz, uglify-js-3.4.9.tgz - ## CVE-2022-37598 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>uglify-js-3.7.3.tgz</b>, <b>uglify-js-3.4.9.tgz</b></p></summary>
<p>
<details><summary><b>uglify-js-3.7.3.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.7.3.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.7.3.tgz</a></p>
<p>Path to dependency file: /fixtures/fiber-debugger/package.json</p>
<p>Path to vulnerable library: /fixtures/fiber-debugger/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.11.tgz (Root Library)
- jest-20.0.4.tgz
- jest-cli-20.0.4.tgz
- istanbul-api-1.1.12.tgz
- istanbul-reports-1.1.1.tgz
- handlebars-4.5.3.tgz
- :x: **uglify-js-3.7.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>uglify-js-3.4.9.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.9.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.9.tgz</a></p>
<p>
Dependency Hierarchy:
- jest-24.9.0.tgz (Root Library)
- jest-cli-24.9.0.tgz
- core-24.9.0.tgz
- reporters-24.9.0.tgz
- istanbul-reports-2.2.6.tgz
- handlebars-4.5.1.tgz
- :x: **uglify-js-3.4.9.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/react-17.0.2/commit/4669645897ed4ebcd4ee037f4dabb509ed4754c7">4669645897ed4ebcd4ee037f4dabb509ed4754c7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function DEFNODE in ast.js in mishoo UglifyJS 3.13.2 via the name variable in ast.js.
<p>Publish Date: 2022-10-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37598>CVE-2022-37598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-20</p>
<p>Fix Resolution (uglify-js): 3.13.10</p>
<p>Direct dependency fix Resolution (react-scripts): 1.0.12</p><p>Fix Resolution (uglify-js): 3.13.10</p>
<p>Direct dependency fix Resolution (jest): 25.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in uglify js tgz uglify js tgz cve high severity vulnerability vulnerable libraries uglify js tgz uglify js tgz uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file fixtures fiber debugger package json path to vulnerable library fixtures fiber debugger node modules uglify js package json dependency hierarchy react scripts tgz root library jest tgz jest cli tgz istanbul api tgz istanbul reports tgz handlebars tgz x uglify js tgz vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href dependency hierarchy jest tgz root library jest cli tgz core tgz reporters tgz istanbul reports tgz handlebars tgz x uglify js tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution vulnerability in function defnode in ast js in mishoo uglifyjs via the name variable in ast js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution uglify js direct dependency fix resolution react scripts fix resolution uglify js direct dependency fix resolution jest step up your open source security game with mend | 0 |
84,832 | 24,440,262,300 | IssuesEvent | 2022-10-06 14:10:21 | epicmaxco/vuestic-ui | https://api.github.com/repos/epicmaxco/vuestic-ui | closed | sandbox's tsconfig extends tsconfig from src/.nuxt folder which is empty after clean install. | BUG build sandbox | Need a postinstall script with mock.
see https://github.com/epicmaxco/vuestic-ui/issues/2407 | 1.0 | sandbox's tsconfig extends tsconfig from src/.nuxt folder which is empty after clean install. - Need a postinstall script with mock.
see https://github.com/epicmaxco/vuestic-ui/issues/2407 | non_priority | sandbox s tsconfig extends tsconfig from src nuxt folder which is empty after clean install need a postinstall script with mock see | 0 |
53,045 | 10,980,221,175 | IssuesEvent | 2019-11-30 12:39:03 | tobiasanker/SakuraTree | https://api.github.com/repos/tobiasanker/SakuraTree | closed | rework blossom-output | code cleanup / QA documentation feature / enhancement usability | ## Feature-request
### Description
The current implementation of writing blossom-output into a variable work, but doesn't scales very good. With many variables, its hard to read. So there should be another solution.
### Related Issue
#29
### Kitsunemimi-Repos, which have to be updated
### Possible Implementation
| 1.0 | rework blossom-output - ## Feature-request
### Description
The current implementation of writing blossom-output into a variable work, but doesn't scales very good. With many variables, its hard to read. So there should be another solution.
### Related Issue
#29
### Kitsunemimi-Repos, which have to be updated
### Possible Implementation
| non_priority | rework blossom output feature request description the current implementation of writing blossom output into a variable work but doesn t scales very good with many variables its hard to read so there should be another solution related issue kitsunemimi repos which have to be updated possible implementation | 0 |
93,619 | 8,439,225,747 | IssuesEvent | 2018-10-18 00:37:01 | QubesOS/qubes-issues | https://api.github.com/repos/QubesOS/qubes-issues | reopened | Idea: qvm-sync-appmenus also parsing /usr/local/share/applications | C: desktop-linux P: minor enhancement help wanted r4.0-buster-cur-test r4.0-centos7-cur-test r4.0-fc26-cur-test r4.0-fc27-cur-test r4.0-fc28-cur-test r4.0-jessie-cur-test r4.0-stretch-cur-test | ### Qubes OS version:
4.0
### Affected component(s):
Application menu syncing
### Steps to reproduce the behavior:
1. Install any locally installed app that a user may want in a particular VM, but not other VMs and they do not want to make it installed in all VMs and make a bunch of clones of VMs (as that creates its own problems)
2. Desktop file will be placed in /usr/local/share/applications
3. Sync the app menu and you will see it does not detect the apps
### Expected behavior:
It would be nice if desktop files also looked in /usr/local/share/applications. This provides a mechanism to install software locally without making a lot of template clones.
I understand why ~/.local/share/applications is excluded because it appears some apps install to both that and /usr/share/applications, so that would require trying to parse out duplicates etc.
However, no packaged apps install into /usr/local/share/applications and that folder is local to the AppVM too.
### Motivation for locally installed apps
The primary motivation is that a single change does not warrant a template clone, as that creates a bunch of other issues (resource issues, menu cluttering issues, etc).
1. The appvm wants to run a different version of software than those in other appvms
2. The application is less trusted so better to keep it entirely contained in the appvm
3. The software only belongs in one appvm and it's not efficient to create a new template just because of that one software
4. The software is downloaded from the internet, only belongs in one appvm, and it makes little sense to transfer it to the templatevm (as that has no internet to download it) when it can just be kept in that appvm, which is the only place it should run. | 7.0 | Idea: qvm-sync-appmenus also parsing /usr/local/share/applications - ### Qubes OS version:
4.0
### Affected component(s):
Application menu syncing
### Steps to reproduce the behavior:
1. Install any locally installed app that a user may want in a particular VM, but not other VMs and they do not want to make it installed in all VMs and make a bunch of clones of VMs (as that creates its own problems)
2. Desktop file will be placed in /usr/local/share/applications
3. Sync the app menu and you will see it does not detect the apps
### Expected behavior:
It would be nice if desktop files also looked in /usr/local/share/applications. This provides a mechanism to install software locally without making a lot of template clones.
I understand why ~/.local/share/applications is excluded because it appears some apps install to both that and /usr/share/applications, so that would require trying to parse out duplicates etc.
However, no packaged apps install into /usr/local/share/applications and that folder is local to the AppVM too.
### Motivation for locally installed apps
The primary motivation is that a single change does not warrant a template clone, as that creates a bunch of other issues (resource issues, menu cluttering issues, etc).
1. The appvm wants to run a different version of software than those in other appvms
2. The application is less trusted so better to keep it entirely contained in the appvm
3. The software only belongs in one appvm and it's not efficient to create a new template just because of that one software
4. The software is downloaded from the internet, only belongs in oen appvm, and it makes little since to transfer it to the templatevm (as that has no internet to download it) when it can just be kept in that appvm which is the only place it should run. | non_priority | idea qvm sync appmenus also parsing usr local share applications qubes os version affected component s application menu syncing steps to reproduce the behavior install any locally installed app that a user may want in a particular vm but not other vms and they do not want to make it installed in all vms and make a bunch of clones of vms as that creates its own problems desktop file will be placed in usr local share applications sync the app menu and you will see it does not detect the apps expected behavior it would be nice if desktop files also looked in usr local share applications this provides a mechanism to install software locally without making a lot of template clones i understand why local share applications is excluded because it appears some apps install to both that and usr share applications so that would require trying to parse out duplicates etc however no packaged apps install into usr local share applications and that folder is local to the appvm too motivation for locally installed apps the primary motivation is because a single change does not warrant a template clone as that creates a bunch of other issues resource issues menu cluttering issues etc the appvm wants to run a different version of software than those in other appvms the application is less trusted so better to keep it entirely contained in the appvm the software only belongs in one appvm and its not efficient to create a new template just because of that one software the software is downloaded from the internet only belongs in oen appvm and it makes little since to transfer it to the templatevm as that has no internet to download it when it can just be kept in that appvm which is the only place it should run | 0 |
70,896 | 8,596,520,140 | IssuesEvent | 2018-11-15 16:10:42 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Add permalinks to the document sidebar | Needs Design Feedback Needs Dev | The current problem is permalinks aren't that discoverable. This has been reported in feedback. Right now, the title on select shows the permalink. In phase 2 this likely won't be the case as we move into everything being a block, so changing now is important.
@melchoyce worked on a suggested design for this which adds to the document settings (note: this only adds a panel and doesn't remove the permalink from the title for now):

- On select of title won't be changing, it will still show permalink.
- Publishing flow will still show permalink.
- This is a new panel to be added to document settings.
This ideally would be added before 5.0 as it really could help people now. It is based on the existing pattern in the publishing flow, which means it should be easier to implement.
One feature that may be worth adding is the ability to copy the link from here easily.
As time is of the essence having this move into a PR would be great to get a feel of this. | 1.0 | Add permalinks to the document sidebar - The current problem is permalinks aren't that discoverable. This has been reported in feedback. Right now, the title on select shows the permalink. In phase 2 this likely won't be the case as we move into everything being a block, so changing now is important.
@melchoyce worked on a suggested design for this which adds to the document settings (note: this only adds a panel and doesn't remove the permalink from the title for now):

- On select of title won't be changing, it will still show permalink.
- Publishing flow will still show permalink.
- This is a new panel to be added to document settings.
This ideally would be added before 5.0 as it really could help people now. It is based on the existing pattern in the publishing flow, which means it should be easier to implement.
One feature that may be worth adding is the ability to copy the link from here easily.
As time is of the essence having this move into a PR would be great to get a feel of this. | non_priority | add permalinks to the document sidebar the current problem is permalinks aren t that discoverable this has been reported in feedback right now the title on select show the permalink in phase this likely won t be a case moving into everything being a block so changing now is important melchoyce worked on a suggested design for this which adds to the document settings note this adds doesn t remove from title for now on select of title won t be changing it will still show permalink publishing flow will still show permalink this is a new panel to be added to document settings this ideally would be added before as really could help people now it is based on the existing pattern in the publishing flow which means it should be easier to implement one feature that may want to be added is the ability to copy the link from here easily as time is of the essence having this move into a pr would be great to get a feel of this | 0 |
11,088 | 13,930,014,131 | IssuesEvent | 2020-10-22 01:15:25 | fluent/fluent-bit | https://api.github.com/repos/fluent/fluent-bit | closed | WinLog INPUT: include the StringInserts key-value pairs into the log record | work-in-process | **Is your feature request related to a problem? Please describe.**
In [this pull request](https://github.com/fluent/fluent-bit/pull/2322), the `StringInserts` [were removed from the resulting log record](https://github.com/fluent/fluent-bit/pull/2322/files#diff-0890d3b5666d8c56708c223ac7bc54a3L267-L269). A formatted `Message` containing the human-readable message [was included instead](https://github.com/fluent/fluent-bit/pull/2322/files#diff-44387dae255041c828cb88280efd027fR401-R406).
This solution is really useful to visualize the resulting message, but it would also be good to include the key-value pairs present in the `StringInserts` as log record attributes.
In FluentBit 1.4.1, a `winlog` contained a `StringInserts` field like the following:
```
[42] winlog.0: [1596733503.081829800, {"RecordNumber"=>43, "TimeGenerated"=>1585850266, "TimeWritten"=>1585850266, "EventID"=>600, ...
"StringInserts"=>["Variable", "Started", " ProviderName=Variable
NewProviderState=Started
SequenceNumber=11
HostName=ConsoleHost
HostVersion=5.1.18362.145
...
RunspaceId=
PipelineId=
CommandName=
CommandType=
ScriptName=
CommandPath=
CommandLine="], "Sid"=>"", "Data"=>""}]
```
It would be really nice if the resulting log record contained all the key-value pairs present in the `StringInserts`, that is, `NewProviderState`, `SequenceNumber`, `HostName`, `HostVersion`, `RunspaceId` **(even if it is empty)**...
**Describe the solution you'd like**
Include all the key-value pairs present in `StringInserts` as log record attributes.
**Additional context**
Even though the `Message` field is very useful as it is human-readable, including all the `StringInserts` key-value pairs as log attributes would enable the user to filter by these registry key values more easily later. | 1.0 | WinLog INPUT: include the StringInserts key-value pairs into the log record - **Is your feature request related to a problem? Please describe.**
In [this pull request](https://github.com/fluent/fluent-bit/pull/2322), the `StringInserts` [were removed from the resulting log record](https://github.com/fluent/fluent-bit/pull/2322/files#diff-0890d3b5666d8c56708c223ac7bc54a3L267-L269). A formatted `Message` containing the human-readable message [was included instead](https://github.com/fluent/fluent-bit/pull/2322/files#diff-44387dae255041c828cb88280efd027fR401-R406).
This solution is really useful to visualize the resulting message, but it would also be good to include the key-value pairs present in the `StringInserts` as log record attributes.
In FluentBit 1.4.1, a `winlog` contained a `StringInserts` field like the following:
```
[42] winlog.0: [1596733503.081829800, {"RecordNumber"=>43, "TimeGenerated"=>1585850266, "TimeWritten"=>1585850266, "EventID"=>600, ...
"StringInserts"=>["Variable", "Started", " ProviderName=Variable
NewProviderState=Started
SequenceNumber=11
HostName=ConsoleHost
HostVersion=5.1.18362.145
...
RunspaceId=
PipelineId=
CommandName=
CommandType=
ScriptName=
CommandPath=
CommandLine="], "Sid"=>"", "Data"=>""}]
```
It would be really nice if the resulting log record contained all the key-value pairs present in the `StringInserts`, that is, `NewProviderState`, `SequenceNumber`, `HostName`, `HostVersion`, `RunspaceId` **(even if it is empty)**...
**Describe the solution you'd like**
Include all the key-value pairs present in `StringInserts` as log record attributes.
**Additional context**
Even though the `Message` field is very useful as it is human-readable, including all the `StringInserts` key-value pairs as log attributes would enable the user to filter by these registry key values more easily later. | non_priority | winlog input include the stringinserts key value pairs into the log record is your feature request related to a problem please describe in the stringinserts a formatted message containing the human readable message this solution is really useful to visualize the resulting message but it would also be good to include the key value pairs present in the stringinserts as log record attributes in fluentbit a winlog contained a stringinserts field like the following winlog recordnumber timegenerated timewritten eventid stringinserts variable started providername variable newproviderstate started sequencenumber hostname consolehost hostversion runspaceid pipelineid commandname commandtype scriptname commandpath commandline sid data it would be really nice if the resulting log record contained all the key value pairs present in the stringinserts that is newproviderstate sequencenumber hostname hostversion runspaceid even if it is empty describe the solution you d like include all the key value pairs present in stringinserts as log record attributes additional context even though the message field is very useful as it is human readable including all the stringinserts key value pairs as log attributes would enable the user to filter by these registry key values more easily later | 0 |
99,177 | 12,403,809,020 | IssuesEvent | 2020-05-21 14:31:29 | gitcoinco/web | https://api.github.com/repos/gitcoinco/web | opened | Workshops Tab | Gitcoin Hackathon design | <!--
Hello Gitcoiner!
Please use the template below for feature requests for Gitcoin.
If it is general support you need, reach out to us at
gitcoin.co/slack
-->
### User Story
As Gitcoin, we'd like hackers to know about all the workshops happening and to allow devs to participate.
### Why Is this Needed
No central place to see all the workshops
### Description
[comment]: # (Feature or Bug? i.e Type: Bug)
*Type*: Feature
### Definition of Done
- [ ] Build workshops tab
- [ ] Add ability to set up calendar
- [ ] Add ability to join
- [ ] Create flow to create workshop event
### Data Requirements
[comment]: # (How will we measure the success of this feature? What kind of tracking is needed for this feature (clicks, impressions, flag)?)
### Additional Information


| 1.0 | Workshops Tab - <!--
Hello Gitcoiner!
Please use the template below for feature requests for Gitcoin.
If it is general support you need, reach out to us at
gitcoin.co/slack
-->
### User Story
As Gitcoin, we'd like hackers to know about all the workshops happening and to allow devs to participate.
### Why Is this Needed
No central place to see all the workshops
### Description
[comment]: # (Feature or Bug? i.e Type: Bug)
*Type*: Feature
### Definition of Done
- [ ] Build workshops tab
- [ ] Add ability to set up calendar
- [ ] Add ability to join
- [ ] Create flow to create workshop event
### Data Requirements
[comment]: # (How will we measure the success of this feature? What kind of tracking is needed for this feature (clicks, impressions, flag)?)
### Additional Information


| non_priority | workshops tab hello gitcoiner please use the template below for feature requests for gitcoin if it is general support you need reach out to us at gitcoin co slack user story as gitcoin we d like hackers to know all the workshops happening and allow devs to participate why is this needed no central place to see all the workshops description feature or bug i e type bug type feature definition of done build workshops tab add ability to set up calendar add ability to join create flow to create workshop event data requirements how will we measure the success of this feature what kind of tracking is needed for this feature clicks impressions flag additional information | 0 |
220,258 | 24,564,790,771 | IssuesEvent | 2022-10-13 01:13:01 | turkdevops/desktop | https://api.github.com/repos/turkdevops/desktop | closed | CVE-2022-0355 (High) detected in simple-get-2.8.1.tgz, simple-get-3.1.0.tgz - autoclosed | security vulnerability | ## CVE-2022-0355 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>simple-get-2.8.1.tgz</b>, <b>simple-get-3.1.0.tgz</b></p></summary>
<p>
<details><summary><b>simple-get-2.8.1.tgz</b></p></summary>
<p>Simplest way to make http get requests. Supports HTTPS, redirects, gzip/deflate, streams in < 100 lines.</p>
<p>Library home page: <a href="https://registry.npmjs.org/simple-get/-/simple-get-2.8.1.tgz">https://registry.npmjs.org/simple-get/-/simple-get-2.8.1.tgz</a></p>
<p>
Dependency Hierarchy:
- registry-js-1.6.0.tgz (Root Library)
- prebuild-install-5.3.0.tgz
- :x: **simple-get-2.8.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>simple-get-3.1.0.tgz</b></p></summary>
<p>Simplest way to make http get requests. Supports HTTPS, redirects, gzip/deflate, streams in < 100 lines.</p>
<p>Library home page: <a href="https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz">https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz</a></p>
<p>
Dependency Hierarchy:
- keytar-5.6.0.tgz (Root Library)
- prebuild-install-5.3.3.tgz
- :x: **simple-get-3.1.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/desktop/commit/9e0c818b6cb48aa77f07a97653da926d8fb70362">9e0c818b6cb48aa77f07a97653da926d8fb70362</a></p>
<p>Found in base branch: <b>development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM simple-get prior to 4.0.1.
<p>Publish Date: 2022-01-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0355>CVE-2022-0355</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355</a></p>
<p>Release Date: 2022-01-26</p>
<p>Fix Resolution (simple-get): 2.8.2</p>
<p>Direct dependency fix Resolution (registry-js): 1.7.0</p><p>Fix Resolution (simple-get): 3.1.1</p>
<p>Direct dependency fix Resolution (keytar): 6.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-0355 (High) detected in simple-get-2.8.1.tgz, simple-get-3.1.0.tgz - autoclosed - ## CVE-2022-0355 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>simple-get-2.8.1.tgz</b>, <b>simple-get-3.1.0.tgz</b></p></summary>
<p>
<details><summary><b>simple-get-2.8.1.tgz</b></p></summary>
<p>Simplest way to make http get requests. Supports HTTPS, redirects, gzip/deflate, streams in < 100 lines.</p>
<p>Library home page: <a href="https://registry.npmjs.org/simple-get/-/simple-get-2.8.1.tgz">https://registry.npmjs.org/simple-get/-/simple-get-2.8.1.tgz</a></p>
<p>
Dependency Hierarchy:
- registry-js-1.6.0.tgz (Root Library)
- prebuild-install-5.3.0.tgz
- :x: **simple-get-2.8.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>simple-get-3.1.0.tgz</b></p></summary>
<p>Simplest way to make http get requests. Supports HTTPS, redirects, gzip/deflate, streams in < 100 lines.</p>
<p>Library home page: <a href="https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz">https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz</a></p>
<p>
Dependency Hierarchy:
- keytar-5.6.0.tgz (Root Library)
- prebuild-install-5.3.3.tgz
- :x: **simple-get-3.1.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/desktop/commit/9e0c818b6cb48aa77f07a97653da926d8fb70362">9e0c818b6cb48aa77f07a97653da926d8fb70362</a></p>
<p>Found in base branch: <b>development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM simple-get prior to 4.0.1.
<p>Publish Date: 2022-01-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0355>CVE-2022-0355</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355</a></p>
<p>Release Date: 2022-01-26</p>
<p>Fix Resolution (simple-get): 2.8.2</p>
<p>Direct dependency fix Resolution (registry-js): 1.7.0</p><p>Fix Resolution (simple-get): 3.1.1</p>
<p>Direct dependency fix Resolution (keytar): 6.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in simple get tgz simple get tgz autoclosed cve high severity vulnerability vulnerable libraries simple get tgz simple get tgz simple get tgz simplest way to make http get requests supports https redirects gzip deflate streams in library home page a href dependency hierarchy registry js tgz root library prebuild install tgz x simple get tgz vulnerable library simple get tgz simplest way to make http get requests supports https redirects gzip deflate streams in library home page a href dependency hierarchy keytar tgz root library prebuild install tgz x simple get tgz vulnerable library found in head commit a href found in base branch development vulnerability details exposure of sensitive information to an unauthorized actor in npm simple get prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution simple get direct dependency fix resolution registry js fix resolution simple get direct dependency fix resolution keytar step up your open source security game with mend | 0 |
17,128 | 3,594,474,897 | IssuesEvent | 2016-02-01 23:52:35 | googlefonts/fontbakery | https://api.github.com/repos/googlefonts/fontbakery | closed | Extend checker to allow automated fixes | testing | Following #221
Tests can have associated automatic fixes, so if a test fails, it can be automatically fixed.
For example, there should be a test for an nbsp glyph present, a space glyph present, correct unicode points assigned to each, and both with the same width.
If this test fails, the fix is to make a nbsp with the width of the space. Behdad wrote [a fonttools script to do exactly this](https://gist.github.com/davelab6/5c865bb658b05c7bc37c). The moment for this to be run during the build process is after ufo/ttx is compiled to ttf and before ttfautohinting and subsetting.
These tests and fixes should be run on the final TTFs produced by the build process only, since the failing tests will alert people to make the fixes upstream.
- [x] show process of fixer in build log | 1.0 | Extend checker to allow automated fixes - Following #221
Tests can have associated automatic fixes, so if a test fails, it can be automatically fixed.
For example, there should be a test for an nbsp glyph present, a space glyph present, correct unicode points assigned to each, and both with the same width.
If this test fails, the fix is to make a nbsp with the width of the space. Behdad wrote [a fonttools script to do exactly this](https://gist.github.com/davelab6/5c865bb658b05c7bc37c). The moment for this to be run during the build process is after ufo/ttx is compiled to ttf and before ttfautohinting and subsetting.
These tests and fixes should be run on the final TTFs produced by the build process only, since the failing tests will alert people to make the fixes upstream.
- [x] show process of fixer in build log | non_priority | extend checker to allow automated fixes following tests can have associated automatic fixes so if a test fails it can be automatically fixed for example there should be a test for a nbsp glyph present a space glyph present correct unicode points assigned to each and both with the same width if this test fails the fix is to make a nbsp with the width of the space behdad wrote the moment for this to be run during the build process is after ufo ttx is compiled to ttf and before ttfautohinting and subsetting these tests and fixes should be run on the final ttfs produced by the build process only since the failing tests will alert people to make the fixes upstream show process of fixer in build log | 0 |
153,785 | 19,708,600,905 | IssuesEvent | 2022-01-13 01:44:17 | artsking/linux-4.19.72_CVE-2020-14386 | https://api.github.com/repos/artsking/linux-4.19.72_CVE-2020-14386 | opened | CVE-2019-19067 (Medium) detected in linux-yoctov5.4.51 | security vulnerability | ## CVE-2019-19067 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** Four memory leaks in the acp_hw_init() function in drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c in the Linux kernel before 5.3.8 allow attackers to cause a denial of service (memory consumption) by triggering mfd_add_hotplug_devices() or pm_genpd_add_device() failures, aka CID-57be09c6e874. NOTE: third parties dispute the relevance of this because the attacker must already have privileges for module loading.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19067>CVE-2019-19067</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19067">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19067</a></p>
<p>Release Date: 2019-11-18</p>
<p>Fix Resolution: v5.4-rc2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-19067 (Medium) detected in linux-yoctov5.4.51 - ## CVE-2019-19067 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** Four memory leaks in the acp_hw_init() function in drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c in the Linux kernel before 5.3.8 allow attackers to cause a denial of service (memory consumption) by triggering mfd_add_hotplug_devices() or pm_genpd_add_device() failures, aka CID-57be09c6e874. NOTE: third parties dispute the relevance of this because the attacker must already have privileges for module loading.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19067>CVE-2019-19067</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19067">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19067</a></p>
<p>Release Date: 2019-11-18</p>
<p>Fix Resolution: v5.4-rc2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in linux cve medium severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in base branch master vulnerable source files drivers gpu drm amd amdgpu amdgpu acp c drivers gpu drm amd amdgpu amdgpu acp c vulnerability details disputed four memory leaks in the acp hw init function in drivers gpu drm amd amdgpu amdgpu acp c in the linux kernel before allow attackers to cause a denial of service memory consumption by triggering mfd add hotplug devices or pm genpd add device failures aka cid note third parties dispute the relevance of this because the attacker must already have privileges for module loading publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
19,641 | 10,382,671,245 | IssuesEvent | 2019-09-10 08:00:55 | AOSC-Dev/aosc-os-abbs | https://api.github.com/repos/AOSC-Dev/aosc-os-abbs | closed | graphviz: CVE-2019-11023 | needs-triage security to-stable | <!-- Please remove items do not apply. -->
**CVE IDs:** CVE-2019-11023
**Other security advisory IDs:** openSUSE-SU-2019:1434-1
**Descriptions:**
- CVE-2019-11023: Fixed a denial of service vulnerability, which was
caused by a NULL pointer dereference in agroot() (bsc#1132091).
**Patches:** from openSUSE
**PoC(s):** https://gitlab.com/graphviz/graphviz/issues/1517
**Architectural progress:**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [x] AMD64 `amd64`
- [x] AArch64 `arm64`
- [x] ARMv7 `armel`
- [x] PowerPC 64-bit BE `ppc64`
- [x] PowerPC 32-bit BE `powerpc` | True | graphviz: CVE-2019-11023 - <!-- Please remove items do not apply. -->
**CVE IDs:** CVE-2019-11023
**Other security advisory IDs:** openSUSE-SU-2019:1434-1
**Descriptions:**
- CVE-2019-11023: Fixed a denial of service vulnerability, which was
caused by a NULL pointer dereference in agroot() (bsc#1132091).
**Patches:** from openSUSE
**PoC(s):** https://gitlab.com/graphviz/graphviz/issues/1517
**Architectural progress:**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [x] AMD64 `amd64`
- [x] AArch64 `arm64`
- [x] ARMv7 `armel`
- [x] PowerPC 64-bit BE `ppc64`
- [x] PowerPC 32-bit BE `powerpc` | non_priority | graphviz cve cve ids cve other security advisory ids opensuse su descriptions cve fixed a denial of service vulnerability which was caused by a null pointer dereference in agroot bsc patches from opensuse poc s architectural progress armel powerpc bit be powerpc bit be powerpc | 0 |
151,590 | 12,044,125,301 | IssuesEvent | 2020-04-14 13:34:11 | Oldes/Rebol-issues | https://api.github.com/repos/Oldes/Rebol-issues | closed | UNIQUE/DIFFERENCE/INTERSECT/UNION do not accept blocks containing values of type NONE! or UNSET! | Test.written Type.bug | _Submitted by:_ **Ch.Ensel**
Unsure whether this is a bug or a feature.
``` rebol
>> unique [#[none]]
** Script error: none! type is not allowed here
** Where: unique
** Near: unique [none]
>> union [#[none]] []
** Script error: none! type is not allowed here
** Where: union
** Near: union [none] []
>> difference [#[none]] []
** Script error: none! type is not allowed here
** Where: difference
** Near: difference [none] []
>> intersect [#[none]] []
** Script error: none! type is not allowed here
** Where: intersect
** Near: intersect [none] []
>> unique reduce [()]
** Script error: unset! type is not allowed here
** Where: unique
** Near: unique reduce [()]
>>
>> unique [()]
** Script error: paren! type is not allowed here
** Where: unique
** Near: unique [()]
```
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=1592)** [ Version: alpha 97 Type: Bug Platform: All Category: Native Reproduce: Always Fixed-in:alpha 108 ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/1592</sup>
Comments:
---
> **Rebolbot** commented on May 2, 2010:
_Submitted by:_ **Sunanda**
This is a generalisation / more detailed variant of CC#1124
---
> **Rebolbot** commented on May 2, 2010:
_Submitted by:_ **BrianH**
A generalization of the unset! part, but the rest is a separate issue. The reason that #1124 is marked as a problem doesn't apply to the none! and paren! types.
---
> **Rebolbot** commented on Sep 22, 2010:
_Submitted by:_ **Carl**
Fixed. None and unset added. Paren is like block, so not added here.
---
> **Rebolbot** added the **Type.bug** on Jan 12, 2016
--- | 1.0 | UNIQUE/DIFFERENCE/INTERSECT/UNION do not accept blocks containing values of type NONE! or UNSET! - _Submitted by:_ **Ch.Ensel**
Unsure whether this is a bug or a feature.
``` rebol
>> unique [#[none]]
** Script error: none! type is not allowed here
** Where: unique
** Near: unique [none]
>> union [#[none]] []
** Script error: none! type is not allowed here
** Where: union
** Near: union [none] []
>> difference [#[none]] []
** Script error: none! type is not allowed here
** Where: difference
** Near: difference [none] []
>> intersect [#[none]] []
** Script error: none! type is not allowed here
** Where: intersect
** Near: intersect [none] []
>> unique reduce [()]
** Script error: unset! type is not allowed here
** Where: unique
** Near: unique reduce [()]
>>
>> unique [()]
** Script error: paren! type is not allowed here
** Where: unique
** Near: unique [()]
```
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=1592)** [ Version: alpha 97 Type: Bug Platform: All Category: Native Reproduce: Always Fixed-in:alpha 108 ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/1592</sup>
Comments:
---
> **Rebolbot** commented on May 2, 2010:
_Submitted by:_ **Sunanda**
This is a generalisation / more detailed variant of CC#1124
---
> **Rebolbot** commented on May 2, 2010:
_Submitted by:_ **BrianH**
A generalization of the unset! part, but the rest is a separate issue. The reason that #1124 is marked as a problem doesn't apply to the none! and paren! types.
---
> **Rebolbot** commented on Sep 22, 2010:
_Submitted by:_ **Carl**
Fixed. None and unset added. Paren is like block, so not added here.
---
> **Rebolbot** added the **Type.bug** on Jan 12, 2016
--- | non_priority | unique difference intersect union do not accept blocks containing values of type none or unset submitted by ch ensel unsure whether this is a bug or a feature rebol unique script error none type is not allowed here where unique near unique union script error none type is not allowed here where union near union difference script error none type is not allowed here where difference near difference intersect script error none type is not allowed here where intersect near intersect unique reduce script error unset type is not allowed here where unique near unique reduce unique script error paren type is not allowed here where unique near unique imported from imported from comments rebolbot commented on may submitted by sunanda this is a generalisation more detailed variant of cc rebolbot commented on may submitted by brianh a generalization of the unset part but the rest is a separate issue the reason that is marked as a problem doesn t apply to the none and paren types rebolbot commented on sep submitted by carl fixed none and unset added paren is like block so not added here rebolbot added the type bug on jan | 0 |
82,053 | 15,646,496,183 | IssuesEvent | 2021-03-23 01:03:32 | LevyForchh/calm-dsl | https://api.github.com/repos/LevyForchh/calm-dsl | opened | CVE-2021-25290 (Medium) detected in Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl | security vulnerability | ## CVE-2021-25290 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: calm-dsl/requirements.txt</p>
<p>Path to vulnerable library: calm-dsl/requirements.txt</p>
<p>
Dependency Hierarchy:
- asciimatics-1.11.0-py2.py3-none-any.whl (Root Library)
- :x: **Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A security issue was found in python-pillow before version 8.1.1. In TiffDecode.c, there is a negative-offset memcpy with an invalid size.
<p>Publish Date: 2021-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25290>CVE-2021-25290</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: 2021-01-18</p>
<p>Fix Resolution: 8.1.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"Pillow","packageVersion":"6.2.2","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":true,"dependencyTree":"asciimatics:1.11.0;Pillow:6.2.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"8.1.1"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2021-25290","vulnerabilityDetails":"A security issue was found in python-pillow before version 8.1.1. In TiffDecode.c, there is a negative-offset memcpy with an invalid size.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25290","cvss3Severity":"medium","cvss3Score":"5.6","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-25290 (Medium) detected in Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2021-25290 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: calm-dsl/requirements.txt</p>
<p>Path to vulnerable library: calm-dsl/requirements.txt</p>
<p>
Dependency Hierarchy:
- asciimatics-1.11.0-py2.py3-none-any.whl (Root Library)
- :x: **Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A security issue was found in python-pillow before version 8.1.1. In TiffDecode.c, there is a negative-offset memcpy with an invalid size.
<p>Publish Date: 2021-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25290>CVE-2021-25290</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: 2021-01-18</p>
<p>Fix Resolution: 8.1.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"Pillow","packageVersion":"6.2.2","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":true,"dependencyTree":"asciimatics:1.11.0;Pillow:6.2.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"8.1.1"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2021-25290","vulnerabilityDetails":"A security issue was found in python-pillow before version 8.1.1. In TiffDecode.c, there is a negative-offset memcpy with an invalid size.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25290","cvss3Severity":"medium","cvss3Score":"5.6","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_priority | cve medium detected in pillow whl cve medium severity vulnerability vulnerable library pillow whl python imaging library fork library home page a href path to dependency file calm dsl requirements txt path to vulnerable library calm dsl requirements txt dependency hierarchy asciimatics none any whl root library x pillow whl vulnerable library vulnerability details a security issue was found in python pillow before version in tiffdecode c there is a negative offset memcpy with an invalid size publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree asciimatics pillow isminimumfixversionavailable true minimumfixversion basebranches 
vulnerabilityidentifier cve vulnerabilitydetails a security issue was found in python pillow before version in tiffdecode c there is a negative offset memcpy with an invalid size vulnerabilityurl | 0 |
109,450 | 9,381,632,878 | IssuesEvent | 2019-04-04 20:08:28 | open-apparel-registry/open-apparel-registry | https://api.github.com/repos/open-apparel-registry/open-apparel-registry | closed | Manually rejecting all potential matches when the FacilityListItem has no geocoded address raises an IntegrityError | + bug rollbar tested/verified | ## Overview
Manually rejecting all potential matches when the FacilityListItem has no geocoded address raises an IntegrityError
### Expected Behavior
Manually rejecting all potential matches when the FacilityListItem has no geocoded address sets the status to "ERROR_MATCHING"
### Actual Behavior
An unhandled IntegrityError is raised
### Steps to Reproduce
* `./scripts/manage resetdb` and `./scripts/manage processfixtures`
* Upload [2019-03-28-match.csv.txt](https://github.com/open-apparel-registry/open-apparel-registry/files/3019700/2019-03-28-match.csv.txt)
* Run `./scripts/manage batch_process --list-id 16 --action parse`
* Run `./scripts/manage dbshell` and execute the following command to simulate a no-results geocode response
```sql
UPDATE api_facilitylistitem
SET geocoded_point = null, status = 'GEOCODED_NO_RESULTS'
WHERE facility_list_id = 16;
```
* Run `./scripts/manage batch_process --list-id 16 --action match`
* Browse http://localhost:6543/lists/16 and attempt to reject the item
### Additional context
This is the point in the view code where we attempt to create a facility.
https://github.com/open-apparel-registry/open-apparel-registry/blob/2a219c07328d88239618269cf9e8849938641c0f/src/django/api/views.py#L1062-L1070
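One possible shape of a fix — a hedged sketch, not the project's actual code; the dict stand-in, field names, and status strings are assumptions modeled on the traceback — is to short-circuit before the facility-creation call whenever the item has no geocoded point:

```python
# Hypothetical simplification of the reject-match flow. The real view builds
# a Django Facility model; a plain dict stands in so the guard is testable.
def reject_match(item):
    """Reject the last potential match; never create a Facility without a location."""
    if item.get("geocoded_point") is None:
        # No location to create a Facility from: mark the item as failed
        # instead of hitting the NOT NULL constraint on Facility.location.
        item["status"] = "ERROR_MATCHING"
        return item
    item["status"] = "CONFIRMED_MATCH"  # placeholder for the create-facility path
    return item
```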
View details in Rollbar: [https://rollbar.com/OpenApparelRegistry/OpenApparelRegistry/items/61/](https://rollbar.com/OpenApparelRegistry/OpenApparelRegistry/items/61/)
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py", line 126, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/rest_framework/viewsets.py", line 116, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 495, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 455, in handle_exception
self.raise_uncaught_exception(exc)
File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 492, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.7/contextlib.py", line 74, in inner
return func(*args, **kwds)
File "/usr/local/src/api/views.py", line 1070, in reject_match
created_from=facility_list_item)
File "/usr/local/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 417, in create
obj.save(force_insert=True, using=self.db)
File "/usr/local/src/api/models.py", line 391, in save
super(Facility, self).save(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/models/base.py", line 729, in save
force_update=force_update, update_fields=update_fields)
File "/usr/local/lib/python3.7/site-packages/django/db/models/base.py", line 759, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/usr/local/lib/python3.7/site-packages/django/db/models/base.py", line 842, in _save_table
result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "/usr/local/lib/python3.7/site-packages/django/db/models/base.py", line 880, in _do_insert
using=using, raw=raw)
File "/usr/local/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 1128, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "/usr/local/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1285, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.7/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
IntegrityError: null value in column "location" violates not-null constraint
DETAIL: Failing row contains (PK2019086ADF6PQ, ARTISTIC F&G UNIT K-2, Plot # 53, 54, 71 & 72, Sector 28, Korangi Industrial Area, Kara..., PK, null, 2019-03-27 17:46:32.741206+00, 2019-03-27 17:46:32.741229+00, 5835).
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
IntegrityError: null value in column "location" violates not-null constraint
DETAIL: Failing row contains (PK2019086ADF6PQ, ARTISTIC F&G UNIT K-2, Plot # 53, 54, 71 & 72, Sector 28, Korangi Industrial Area, Kara..., PK, null, 2019-03-27 17:46:32.741206+00, 2019-03-27 17:46:32.741229+00, 5835).
``` | 1.0 | Manually rejecting all potential matches when the FacilityListItem has no geocoded address raises an IntegrityError - ## Overview
Manually rejecting all potential matches when the FacilityListItem has no geocoded address raises an IntegrityError
### Expected Behavior
Manually rejecting all potential matches when the FacilityListItem has no geocoded address sets the status to "ERROR_MATCHING"
### Actual Behavior
An unhandled IntegrityError is raised
### Steps to Reproduce
* `./scripts/manage resetdb` and `./scripts/manage processfixtures`
* Upload [2019-03-28-match.csv.txt](https://github.com/open-apparel-registry/open-apparel-registry/files/3019700/2019-03-28-match.csv.txt)
* Run `./scripts/manage batch_process --list-id 16 --action parse`
* Run `./scripts/manage dbshell` and execute the following command to simulate a no-results geocode response
```sql
UPDATE api_facilitylistitem
SET geocoded_point = null, status = 'GEOCODED_NO_RESULTS'
WHERE facility_list_id = 16;
```
* Run `./scripts/manage batch_process --list-id 16 --action match`
* Browse http://localhost:6543/lists/16 and attempt to reject the item
### Additional context
This is the point in the view code where we attempt to create a facility.
https://github.com/open-apparel-registry/open-apparel-registry/blob/2a219c07328d88239618269cf9e8849938641c0f/src/django/api/views.py#L1062-L1070
View details in Rollbar: [https://rollbar.com/OpenApparelRegistry/OpenApparelRegistry/items/61/](https://rollbar.com/OpenApparelRegistry/OpenApparelRegistry/items/61/)
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py", line 126, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/rest_framework/viewsets.py", line 116, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 495, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 455, in handle_exception
self.raise_uncaught_exception(exc)
File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 492, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.7/contextlib.py", line 74, in inner
return func(*args, **kwds)
File "/usr/local/src/api/views.py", line 1070, in reject_match
created_from=facility_list_item)
File "/usr/local/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 417, in create
obj.save(force_insert=True, using=self.db)
File "/usr/local/src/api/models.py", line 391, in save
super(Facility, self).save(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/models/base.py", line 729, in save
force_update=force_update, update_fields=update_fields)
File "/usr/local/lib/python3.7/site-packages/django/db/models/base.py", line 759, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/usr/local/lib/python3.7/site-packages/django/db/models/base.py", line 842, in _save_table
result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "/usr/local/lib/python3.7/site-packages/django/db/models/base.py", line 880, in _do_insert
using=using, raw=raw)
File "/usr/local/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 1128, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "/usr/local/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1285, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.7/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
IntegrityError: null value in column "location" violates not-null constraint
DETAIL: Failing row contains (PK2019086ADF6PQ, ARTISTIC F&G UNIT K-2, Plot # 53, 54, 71 & 72, Sector 28, Korangi Industrial Area, Kara..., PK, null, 2019-03-27 17:46:32.741206+00, 2019-03-27 17:46:32.741229+00, 5835).
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
IntegrityError: null value in column "location" violates not-null constraint
DETAIL: Failing row contains (PK2019086ADF6PQ, ARTISTIC F&G UNIT K-2, Plot # 53, 54, 71 & 72, Sector 28, Korangi Industrial Area, Kara..., PK, null, 2019-03-27 17:46:32.741206+00, 2019-03-27 17:46:32.741229+00, 5835).
``` | non_priority | manually rejecting all potential matches when the facilitylistitem has no geocoded address raises an integrityerror overview manually rejecting all potential matches when the facilitylistitem has no geocoded address raises an integrityerror expected behavior manually rejecting all potential matches when the facilitylistitem has no geocoded address sets the status to error matching actual behavior an unhandled integrityerror is raised steps to reproduce scripts manage resetdb and scripts manage processfixtures upload run scripts manage batch process list id action parse run scripts manage dbshell and execute the following command to simulate a no results geocode response sql update api facilitylistitem set geocoded point null status geocoded no results where facility list id run scripts manage batch process list id action match browse and attempt to reject the item additional context this is the point in the view code where we attempt to create a facility view details in rollbar traceback most recent call last file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file usr local lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file usr local lib site packages rest framework views py line in dispatch response self handle exception exc file usr local lib site packages rest framework views py line in handle exception self raise uncaught exception exc file usr local lib site packages rest framework views py line in dispatch response handler request args kwargs file usr local lib contextlib py line in inner return func args kwds file usr local src api views py line in reject match created from facility list item file usr local lib site packages django db models manager py line in manager method 
return getattr self get queryset name args kwargs file usr local lib site packages django db models query py line in create obj save force insert true using self db file usr local src api models py line in save super facility self save args kwargs file usr local lib site packages django db models base py line in save force update force update update fields update fields file usr local lib site packages django db models base py line in save base updated self save table raw cls force insert force update using update fields file usr local lib site packages django db models base py line in save table result self do insert cls base manager using fields update pk raw file usr local lib site packages django db models base py line in do insert using using raw raw file usr local lib site packages django db models manager py line in manager method return getattr self get queryset name args kwargs file usr local lib site packages django db models query py line in insert return query get compiler using using execute sql return id file usr local lib site packages django db models sql compiler py line in execute sql cursor execute sql params file usr local lib site packages django db backends utils py line in execute return self execute with wrappers sql params many false executor self execute file usr local lib site packages django db backends utils py line in execute with wrappers return executor sql params many context file usr local lib site packages django db backends utils py line in execute return self cursor execute sql params file usr local lib site packages django db utils py line in exit raise dj exc value with traceback traceback from exc value file usr local lib site packages django db backends utils py line in execute return self cursor execute sql params integrityerror null value in column location violates not null constraint detail failing row contains artistic f g unit k plot sector korangi industrial area kara pk null traceback most recent call last file usr 
local lib site packages django db backends utils py line in execute return self cursor execute sql params integrityerror null value in column location violates not null constraint detail failing row contains artistic f g unit k plot sector korangi industrial area kara pk null | 0 |
121,952 | 10,208,115,476 | IssuesEvent | 2019-08-14 09:21:22 | input-output-hk/chain-libs | https://api.github.com/repos/input-output-hk/chain-libs | opened | Update Proposal is not removed after expiry grace period (proposal_expiration) | bug test | `UpdateState::process_proposals` has a bug, in my opinion, which leads to a problem: a proposal that was not accepted before the proposal_expiration period won't be removed from the proposal collection at all.
There are two conditions in the mentioned method:
```
if prev_date.epoch < new_date.epoch
```
and
```
else if proposal_state.proposal_date.epoch + settings.proposal_expiration
> new_date.epoch
```
1)
If we apply a proposal in the current epoch (let's say epoch 0) and our `proposal_expiration` is any value other than 0, then it's impossible to satisfy the buggy condition `proposal_state.proposal_date.epoch + settings.proposal_expiration > new_date.epoch` => `0 + 1 > 1 = false`; and since `new_date.epoch` only increases as time proceeds, the condition will never be met, so the proposal will stay in the collection forever.
2)
In a different scenario, in which we apply a proposal for an epoch in the future — let's say epoch 5, while we are at epoch 0 — if the proposal is not accepted in epoch 0, it will be immediately removed from the proposals in `process_proposals` (5 + 0 > 0 = true).
To fix this situation we need to reverse the condition:
```
else if proposal_state.proposal_date.epoch + settings.proposal_expiration
< new_date.epoch
```
This will solve the first scenario in epoch 2 (0 + 1 < 2 = true), and in the second scenario the proposal won't be removed before its time comes (5 + 0 < 0 = false).
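The reversed comparison can be sanity-checked with a tiny script — a sketch in Python rather than the crate's Rust, with the epoch arithmetic reduced to plain integers and the function name invented for illustration:

```python
# Hypothetical simplification of the expiry check in process_proposals.
# A proposal should be dropped only once the current epoch has moved PAST
# proposal_date.epoch + proposal_expiration.
def is_expired(proposal_epoch, proposal_expiration, current_epoch):
    # The buggy version used '>', i.e.
    #   proposal_epoch + proposal_expiration > current_epoch
    # which inverts the logic described in the report.
    return proposal_epoch + proposal_expiration < current_epoch

# Scenario 1: applied in epoch 0, expiration 1 -> expires once epoch exceeds 1
assert not is_expired(0, 1, 1)
assert is_expired(0, 1, 2)
# Scenario 2: applied for epoch 5, expiration 0 -> must not be dropped at epoch 0
assert not is_expired(5, 0, 0)
```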
| 1.0 | Update Proposal is not removed after expiry grace period (proposal_expiration) - UpdateState::process_proposals has bug in my opinion, which leads to problem in which proposal which was not accepted before proposal_expiration period won't be removed from proposal collection at all.
there are two conditions in mentioned method:
```
if prev_date.epoch < new_date.epoch
```
and
```
else if proposal_state.proposal_date.epoch + settings.proposal_expiration
> new_date.epoch
```
1)
if we apply proposal in the current epoch (let's say epoch 0 and our `proposal_expiration` is any value different than 0, then it's impossible to satisfy condition `proposal_state.proposal_date.epoch + settings.proposal_expiration < new_date.epoch` => ` 0 + 1 > 1 = false ` and then new_date.epoch will be increased i believe if we time proceeds, so above condition will be never met and therefore proposal will stay in collection forever).
2)
In different scenario in which we apply proposal for epoch in the future let's say epoch 5 (while we are at epoch 0) then if proposal won't be accepted in epoch 0, then it will be immediately removed from proposals in `process_proposals` . (5 + 0 > 0 = true)
To fix this situation we need to reverse condition:
```
else if proposal_state.proposal_date.epoch + settings.proposal_expiration
< new_date.epoch
```
This will solve first scenario in epoch 2 (0+ 1 < 2 = true) and in second scenario proposal won't be removed from proposal before its time will come (5 + 0 < 0 =false )
| non_priority | update proposal is not removed after expiry grace period proposal expiration updatestate process proposals has bug in my opinion which leads to problem in which proposal which was not accepted before proposal expiration period won t be removed from proposal collection at all there are two conditions in mentioned method if prev date epoch new date epoch and else if proposal state proposal date epoch settings proposal expiration new date epoch if we apply proposal in the current epoch let s say epoch and our proposal expiration is any value different than then it s impossible to satisfy condition proposal state proposal date epoch settings proposal expiration false and then new date epoch will be increased i believe if we time proceeds so above condition will be never met and therefore proposal will stay in collection forever in different scenario in which we apply proposal for epoch in the future let s say epoch while we are at epoch then if proposal won t be accepted in epoch then it will be immediately removed from proposals in process proposals true to fix this situation we need to reverse condition else if proposal state proposal date epoch settings proposal expiration new date epoch this will solve first scenario in epoch true and in second scenario proposal won t be removed from proposal before its time will come false | 0 |
261,174 | 19,701,303,819 | IssuesEvent | 2022-01-12 16:53:15 | web-illinois/illinois_framework_theme | https://api.github.com/repos/web-illinois/illinois_framework_theme | closed | Document how to remove the /news archive listing page | documentation News | By default we list all of the news items in a view at /news, but not all sites will want to do this. We need to document that proper way to change this and still list news is to disable the view-page and embed the (to be created) view-block into a Content Page. | 1.0 | Document how to remove the /news archive listing page - By default we list all of the news items in a view at /news, but not all sites will want to do this. We need to document that proper way to change this and still list news is to disable the view-page and embed the (to be created) view-block into a Content Page. | non_priority | document how to remove the news archive listing page by default we list all of the news items in a view at news but not all sites will want to do this we need to document that proper way to change this and still list news is to disable the view page and embed the to be created view block into a content page | 0 |
9,769 | 25,166,879,073 | IssuesEvent | 2022-11-10 21:45:19 | MicrosoftDocs/architecture-center | https://api.github.com/repos/MicrosoftDocs/architecture-center | closed | Multiple Virtual WAN Hubs per region | doc-enhancement assigned-to-author triaged architecture-center/svc example-scenario/subsvc Pri2 | Hey Team,
I sent some feedback for the Azure VWAN FAQ page yesterday regarding the same thing. It looks like some of this documentation needs to be updated now that it's possible to provision multiple VWAN hubs per region.
In the section **"Virtual WAN Hub"** it states _"There can only be one hub per Azure region."_
If you reference the VWAN FAQ page, there is a section that says you can have multiple VWAN Hubs per region.
https://docs.microsoft.com/en-us/azure/virtual-wan/virtual-wan-faq#is-it-possible-to-create-multiple-virtual-wan-hubs-in-the-same-region
Thanks for all your work!
Christopher Melendez
Microsoft Azure CSA
xxxxxx
xxxxxx
xxxxxx
(Edited PII)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 44ba53f7-4773-946c-fffc-cf51122973f8
* Version Independent ID: 802d7f2e-ee23-c3e1-0294-f3941d0c30c2
* Content: [Hub-spoke network topology with Azure Virtual WAN - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/networking/hub-spoke-vwan-architecture)
* Content Source: [docs/networking/hub-spoke-vwan-architecture.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/networking/hub-spoke-vwan-architecture.yml)
* Service: **architecture-center**
* Sub-service: **example-scenario**
* GitHub Login: @EdPrice-MSFT
* Microsoft Alias: **yemrea** | 1.0 | Multiple Virtual WAN Hubs per region - Hey Team,
I sent some feedback for the Azure VWAN FAQ page yesterday regarding the same thing. It looks like some of this documentation needs to be updated now that it's possible to provision multiple VWAN hubs per region.
In the section **"Virtual WAN Hub"** it states _"There can only be one hub per Azure region."_
If you reference the VWAN FAQ page, there is a section that says you can have multiple VWAN Hubs per region.
https://docs.microsoft.com/en-us/azure/virtual-wan/virtual-wan-faq#is-it-possible-to-create-multiple-virtual-wan-hubs-in-the-same-region
Thanks for all your work!
Christopher Melendez
Microsoft Azure CSA
xxxxxx
xxxxxx
xxxxxx
(Edited PII)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 44ba53f7-4773-946c-fffc-cf51122973f8
* Version Independent ID: 802d7f2e-ee23-c3e1-0294-f3941d0c30c2
* Content: [Hub-spoke network topology with Azure Virtual WAN - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/networking/hub-spoke-vwan-architecture)
* Content Source: [docs/networking/hub-spoke-vwan-architecture.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/networking/hub-spoke-vwan-architecture.yml)
* Service: **architecture-center**
* Sub-service: **example-scenario**
* GitHub Login: @EdPrice-MSFT
* Microsoft Alias: **yemrea** | non_priority | multiple virtual wan hubs per region hey team i sent some feedback for the azure vwan faq page yesterday regarding the same thing it looks like some of this documentation needs to be updated now that it s possible to provision multiple vwan hubs per region in the section virtual wan hub it states there can only be one hub per azure region if you reference the vwan faq page there is a section that says you can have multiple vwan hubs per region thanks for all your work christopher melendez microsoft azure csa xxxxxx xxxxxx xxxxxx edited pii document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id fffc version independent id content content source service architecture center sub service example scenario github login edprice msft microsoft alias yemrea | 0 |
21,841 | 4,754,692,663 | IssuesEvent | 2016-10-24 08:16:14 | cra-ros-pkg/robot_localization | https://api.github.com/repos/cra-ros-pkg/robot_localization | closed | Example launch file for using multiple robot_localization ekf nodes | documentation | I have seen a lot of people writing about using multiple ekf nodes (and a navsat node) for odom & map localization. Would it be possible to put together an example launch file for configuring both of these nodes together? | 1.0 | Example launch file for using multiple robot_localization ekf nodes - I have seen a lot of people writing about using multiple ekf nodes (and a navsat node) for odom & map localization. Would it be possible to put together an example launch file for configuring both of these nodes together? | non_priority | example launch file for using multiple robot localization ekf nodes i have seen a lot of people writing about using multiple ekf nodes and a navsat node for odom map localization would it be possible to put together an example launch file for configuring both of these nodes together | 0 |
19,574 | 4,424,361,388 | IssuesEvent | 2016-08-16 12:16:02 | spring-projects/spring-boot | https://api.github.com/repos/spring-projects/spring-boot | closed | Issue with SpringApplicationBuilder example in docs | documentation | Bug report
If you add SpringApplicationBuilder as per docs
```
new SpringApplicationBuilder()
.bannerMode(Banner.Mode.OFF)
.sources(Parent.class)
.child(Application.class)
.run(args);
```
This won't turn the banner off, because it tries to switch the banner off on the parent. It should be applied to the child, not the parent.
On the other hand,
if we do it as below,
```
new SpringApplicationBuilder()
.sources(Parent.class)
.child(Application.class)
.bannerMode(Banner.Mode.OFF)
.run(args);
```
It switches banner off.
Therefore, it should be documented properly, as the example is misleading and someone may get confused about why the banner is not switching off.
| 1.0 | Issue with SpringApplicationBuilder example in docs - Bug report
If you add SpringApplicationBuilder as per docs
```
new SpringApplicationBuilder()
.bannerMode(Banner.Mode.OFF)
.sources(Parent.class)
.child(Application.class)
.run(args);
```
This won't turn the banner off, because it tries to switch the banner off on the parent. It should be applied to the child, not the parent.
On the other hand,
if we do it as below,
```
new SpringApplicationBuilder()
.sources(Parent.class)
.child(Application.class)
.bannerMode(Banner.Mode.OFF)
.run(args);
```
It switches banner off.
Therefore, it should be documented properly, as the example is misleading and someone may get confused about why the banner is not switching off.
| non_priority | issue with springapplicationbuilder example in docs bug report if you add springapplicationbuilder as per docs new springapplicationbuilder bannermode banner mode off sources parent class child application class run args this wont trigger banner off as it is trying to switch banner off on parent it should be on child and not parent on other hand if we do like below new springapplicationbuilder sources parent class child application class bannermode banner mode off run args it switches banner off therefore it should be documented properly somewhere as example is misleading and somewhere someone may get confused why banner is not switching off | 0 |
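The ordering behaviour in the record above can be illustrated with a tiny, hypothetical parent/child builder. This is a toy model, not Spring Boot's actual `SpringApplicationBuilder` implementation: settings applied before `.child(...)` land on the parent context, while settings applied after it land on the child.

```python
class AppBuilder:
    """Toy parent/child application builder.

    Hypothetical sketch -- not the real SpringApplicationBuilder.
    """

    def __init__(self, parent=None):
        self.parent = parent
        self.settings = {}

    def banner_mode(self, mode):
        # Applies to whichever context this builder currently targets.
        self.settings["banner"] = mode
        return self

    def child(self, *sources):
        # Returns a NEW builder for the child context; settings made
        # earlier stay on the parent and are not inherited here.
        child_builder = AppBuilder(parent=self)
        child_builder.settings["sources"] = sources
        return child_builder


# Banner switched off BEFORE .child(): the setting lands on the parent.
before = AppBuilder().banner_mode("OFF").child("Application")
# Banner switched off AFTER .child(): the setting lands on the child.
after = AppBuilder().child("Application").banner_mode("OFF")
```

In this toy model `before.settings` has no `banner` entry (only `before.parent.settings` does), while `after.settings` does — mirroring why only the second call order actually switches the banner off.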
83,312 | 10,340,520,085 | IssuesEvent | 2019-09-03 22:14:32 | MozillaReality/FirefoxReality | https://api.github.com/repos/MozillaReality/FirefoxReality | opened | Allow per-site use of location | Final Design PM/UX review enhancement | For sites that request location data, users should be able to grant permission once or always for that site. Users should also be able to revoke permission from sites they have allowed in the past.
Current behavior allows a site to ask for permissions multiple times in a single session, with no way for a user to say yes/no persistently for a site.
There's precedent and a model for this for Popup Blocking on #593 and the spec is here: https://trello.com/c/LserbZRw/428-uis-86-pop-up-blocking-override | 1.0 | Allow per-site use of location - For sites that request location data, users should be able to grant permission once or always for that site. Users should also be able to revoke permission from sites they have allowed in the past.
Current behavior allows a site to ask for permissions multiple times in a single session, with no way for a user to say yes/no persistently for a site.
There's precedent and a model for this for Popup Blocking on #593 and the spec is here: https://trello.com/c/LserbZRw/428-uis-86-pop-up-blocking-override | non_priority | allow per site use of location for sites that request location data users should be able to grant permission once or always for that site users should also be able to revoke permission from sites they have allowed in the past current behavior allows a site to ask for permissions multiple times in a single session with no way for a user to say yes no persistently for a site there s precedent and a model for this for popup blocking on and the spec is here | 0 |
127,537 | 10,474,462,913 | IssuesEvent | 2019-09-23 14:33:20 | jpmorganchase/tessera | https://api.github.com/repos/jpmorganchase/tessera | closed | Azure Key Vault - SSLPeerUnverifiedException when running disabled acceptance tests with jdk11 | 0.11 bug testing | AKV tests are currently disabled for jdk11 as Tessera throws a `javax.net.ssl.SSLPeerUnverifiedException: Hostname localhost not verified (no certificates)` when trying to communicate with the mock AKV (WireMock) server used in the tests.
Travis's `oraclejdk11` is currently `jdk11.0.2`. The test is reproducible locally when using `jdk11.0.2`. When using later versions (e.g. `jdk11.0.4`) the tests pass successfully.
This is a recognised issue with okhttp (used by the AKV client) and `jdk11.0.2` (https://github.com/square/okhttp/issues/4703). For further info see https://bugs.openjdk.java.net/browse/JDK-8211806 and https://bugs.openjdk.java.net/browse/JDK-8212885.
Manual testing verifies that the AKV functionality works as expected in jdk11.0.2, so this appears to be an issue only when using the mock server/self-signed certs in the acceptance tests.
Potential solutions include:
* Investigate the TLS handshake between the AKV client and WireMock server (see openjdk issues linked above)
* Wait for Travis to update to use a newer version of jdk11
* Manually install a later version of jdk11 for the AKV tests
* Try with a different mock server
| 1.0 | Azure Key Vault - SSLPeerUnverifiedException when running disabled acceptance tests with jdk11 - AKV tests are currently disabled for jdk11 as Tessera throws a `javax.net.ssl.SSLPeerUnverifiedException: Hostname localhost not verified (no certificates)` when trying to communicate with the mock AKV (WireMock) server used in the tests.
Travis's `oraclejdk11` is currently `jdk11.0.2`. The test is reproducible locally when using `jdk11.0.2`. When using later versions (e.g. `jdk11.0.4`) the tests pass successfully.
This is a recognised issue with okhttp (used by the AKV client) and `jdk11.0.2` (https://github.com/square/okhttp/issues/4703). For further info see https://bugs.openjdk.java.net/browse/JDK-8211806 and https://bugs.openjdk.java.net/browse/JDK-8212885.
Manual testing verifies that the AKV functionality works as expected in jdk11.0.2, so this appears to be an issue only when using the mock server/self-signed certs in the acceptance tests.
Potential solutions include:
* Investigate the TLS handshake between the AKV client and WireMock server (see openjdk issues linked above)
* Wait for Travis to update to use a newer version of jdk11
* Manually install a later version of jdk11 for the AKV tests
* Try with a different mock server
| non_priority | azure key vault sslpeerunverifiedexception when running disabled acceptance tests with akv tests are currently disabled for as tessera throws a javax net ssl sslpeerunverifiedexception hostname localhost not verified no certificates when trying to communicate with the mock akv wiremock server used in the tests travis s is currently the test is reproducible locally when using when using later versions e g the tests pass successfully this is a recognised issue with okhttp used by the akv client and s for further info see and manual testing verifies that the akv functionality works as expected in so this appears to be an issue only when using the mock server self signed certs in the acceptance tests potential solutions include investigate the tls handshake between the akv client and wiremock server see openjdk issues linked above wait for travis to update to use a newer version of manually install a later version of for the akv tests try with a different mock server | 0 |
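One way to act on the "use a newer jdk11" options listed above is to gate the affected acceptance tests on the runtime's JDK patch version. A hedged sketch — the helper names and the `11.0.4` threshold are assumptions drawn from the report, not Tessera's actual build code:

```python
def jdk_patch_version(version_string):
    """Return the update number from a 'feature.interim.update' JDK
    version string such as '11.0.2' (format per JEP 322)."""
    parts = version_string.split(".")
    return int(parts[2]) if len(parts) > 2 else 0


def akv_tests_enabled(version_string, min_patch=4):
    """Gate the AKV acceptance tests: skip them on jdk11 runtimes older
    than 11.0.<min_patch>, where the okhttp TLS handshake bug appears."""
    if not version_string.startswith("11."):
        return True  # only jdk11 builds are affected in this report
    return jdk_patch_version(version_string) >= min_patch
```

A real build would feed this the `java.version` system property (or the equivalent Maven property) and mark the tests as skipped rather than failed.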
114,591 | 14,601,003,567 | IssuesEvent | 2020-12-21 07:56:27 | AtB-AS/mittatb-app | https://api.github.com/repos/AtB-AS/mittatb-app | reopened | [Designsync] Update journey details | Designsync | ## Origin
_Links to received feedback, user research or other findings._
## Motivation
_A short description of what user needs or business goals this feature will solve._
## Hypotheses and assumptions
_A list of hypotheses and assumptions we have made about the user or the proposed solution._
## Proposed solution
_A coarse description of a proposed solution, that may include wireframes and graphic design._
Figma / App 1.0 / Assistant: https://www.figma.com/file/WsvD8b5PnUwvvRJiyRe6I9/App-1.0?node-id=963%3A0
Figma / App 1.0 / Departures: https://www.figma.com/file/WsvD8b5PnUwvvRJiyRe6I9/App-1.0?node-id=963%3A1990
FIgma / Components 1.0 / List-items: https://www.figma.com/file/2QTjAdekdIPuLFovQhVrY3/Components-1.0?node-id=547%3A6037
FIgma / Components 1.0 / List-groups: https://www.figma.com/file/2QTjAdekdIPuLFovQhVrY3/Components-1.0?node-id=547%3A6287

| 1.0 | [Designsync] Update journey details - ## Origin
_Links to received feedback, user research or other findings._
## Motivation
_A short description of what user needs or business goals this feature will solve._
## Hypotheses and assumptions
_A list of hypotheses and assumptions we have made about the user or the proposed solution._
## Proposed solution
_A coarse description of a proposed solution, that may include wireframes and graphic design._
Figma / App 1.0 / Assistant: https://www.figma.com/file/WsvD8b5PnUwvvRJiyRe6I9/App-1.0?node-id=963%3A0
Figma / App 1.0 / Departures: https://www.figma.com/file/WsvD8b5PnUwvvRJiyRe6I9/App-1.0?node-id=963%3A1990
FIgma / Components 1.0 / List-items: https://www.figma.com/file/2QTjAdekdIPuLFovQhVrY3/Components-1.0?node-id=547%3A6037
FIgma / Components 1.0 / List-groups: https://www.figma.com/file/2QTjAdekdIPuLFovQhVrY3/Components-1.0?node-id=547%3A6287

| non_priority | update journey details origin links to received feedback user research or other findings motivation a short description of what user needs or business goals this feature will solve hypotheses and assumptions a list of hypotheses and assumptions we have made about the user or the proposed solution proposed solution a coarse description of a proposed solution that may include wireframes and graphic design figma app assistant figma app departures figma components list items figma components list groups | 0 |
63,494 | 15,611,020,507 | IssuesEvent | 2021-03-19 13:52:13 | openego/eGon-data | https://api.github.com/repos/openego/eGon-data | opened | Deal with residential heat demand in cells without zensus population | :building_construction: integration | There are some ha-cells with a residential heat demand but without a zensus population due to different datasets and methods.
This results in issues, e.g. when creating heat demand time series based on the number of buildings from zensus.
Until the methods from Peta can be integrated and directly use zensus population data, we will delete the residential heat demands in cells without zensus population and scale the demands in populated cells to meet the target values. | 1.0 | Deal with residential heat demand in cells without zensus population - There are some ha-cells with a residential heat demand but without a zensus population due to different datasets and methods.
This results in issues, e.g. when creating heat demand time series based on the number of buildings from zensus.
Until the methods from Peta can be integrated and directly use zensus population data, we will delete the residential heat demands in cells without zensus population and scale the demands in populated cells to meet the target values. | non_priority | deal with residential heat demand in cells without zensus population there are some ha cells with a residential heat demand but without a zensus population due to different datasets and methods this results in issues e g when creating heat demand time series based on the number of buildings from zensus until the methods from peta can be integrated and directly use zensus population data we will delete the residential heat demands in cells without zensus population and scale the demands in populated cells to meet the target values | 0 |
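The delete-and-rescale step described above can be sketched as follows. The data structure is illustrative (the real workflow operates on database tables), but the arithmetic is the one described: drop demand in unpopulated cells, then scale the remaining cells so the regional total still meets the target value.

```python
def rescale_heat_demand(cells):
    """Drop residential heat demand in cells without zensus population
    and rescale the remaining cells so the regional total is preserved.

    `cells` maps a cell id to a (population, heat_demand) pair.
    """
    target_total = sum(demand for _, demand in cells.values())
    kept = {cid: (pop, demand)
            for cid, (pop, demand) in cells.items() if pop > 0}
    kept_total = sum(demand for _, demand in kept.values())
    if kept_total == 0:
        return {}
    factor = target_total / kept_total
    return {cid: (pop, demand * factor)
            for cid, (pop, demand) in kept.items()}
```

For example, dropping a 20-unit demand from an unpopulated cell scales the two remaining cells up by a factor of 100/80 = 1.25, so the regional total of 100 is unchanged.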
24,810 | 5,104,698,490 | IssuesEvent | 2017-01-05 02:38:28 | App-vNext/Polly | https://api.github.com/repos/App-vNext/Polly | closed | Polly v5.0 is now in alpha on Nuget | documentation on-nuget-in-alpha v5.0-alpha ready | Per the [project board](https://github.com/App-vNext/Polly/projects/1), [PolicyWrap](https://github.com/App-vNext/Polly/issues/140), [ExecutionKeys](https://github.com/App-vNext/Polly/issues/139), [Fallback policy](https://github.com/App-vNext/Polly/issues/80), [Bulkhead isolation](https://github.com/App-vNext/Polly/issues/138) and [Timeout policy](https://github.com/App-vNext/Polly/issues/137) are now all in alpha, delivered into the [v5.0-alpha branch](https://github.com/App-vNext/Polly/tree/v5.0-alpha).
This issue signals our intention to make an alpha release of this material to NuGet in the week beginning 24 October (or earlier).
Main pre-requisite: documentation of the new policies (underway) in both [readme](https://github.com/App-vNext/Polly/blob/v5.0-alpha/README.md) and [wiki](https://github.com/App-vNext/Polly/wiki).
| 1.0 | Polly v5.0 is now in alpha on Nuget - Per the [project board](https://github.com/App-vNext/Polly/projects/1), [PolicyWrap](https://github.com/App-vNext/Polly/issues/140), [ExecutionKeys](https://github.com/App-vNext/Polly/issues/139), [Fallback policy](https://github.com/App-vNext/Polly/issues/80), [Bulkhead isolation](https://github.com/App-vNext/Polly/issues/138) and [Timeout policy](https://github.com/App-vNext/Polly/issues/137) are now all in alpha, delivered into the [v5.0-alpha branch](https://github.com/App-vNext/Polly/tree/v5.0-alpha).
This issue signals our intention to make an alpha release of this material to NuGet in the week beginning 24 October (or earlier).
Main pre-requisite: documentation of the new policies (underway) in both [readme](https://github.com/App-vNext/Polly/blob/v5.0-alpha/README.md) and [wiki](https://github.com/App-vNext/Polly/wiki).
| non_priority | polly is now in alpha on nuget per the and are now all in alpha delivered into the this issue signals our intention to make an alpha release of this material to nuget in week beginning october or earlier main pre requisite documentation of the new policies underway in both and | 0 |
74,091 | 15,304,358,978 | IssuesEvent | 2021-02-24 16:49:08 | liannoi/wlodzimierz | https://api.github.com/repos/liannoi/wlodzimierz | opened | WS-2020-0163 (Medium) detected in marked-0.7.0.tgz | security vulnerability | ## WS-2020-0163 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.7.0.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.7.0.tgz">https://registry.npmjs.org/marked/-/marked-0.7.0.tgz</a></p>
<p>Path to dependency file: wlodzimierz/src/Clients/web-spa/package.json</p>
<p>Path to vulnerable library: wlodzimierz/src/Clients/web-spa/node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- firebase-tools-8.20.0.tgz (Root Library)
- :x: **marked-0.7.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/liannoi/wlodzimierz/commit/5dce9501516f7b5db5bfd0b2a6f86de7769059fb">5dce9501516f7b5db5bfd0b2a6f86de7769059fb</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
marked before 1.1.1 is vulnerable to Regular Expression Denial of Service (REDoS). rules.js have multiple unused capture groups which can lead to a Denial of Service.
<p>Publish Date: 2020-07-02
<p>URL: <a href=https://github.com/markedjs/marked/commit/bd4f8c464befad2b304d51e33e89e567326e62e0>WS-2020-0163</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/releases/tag/v1.1.1">https://github.com/markedjs/marked/releases/tag/v1.1.1</a></p>
<p>Release Date: 2020-07-02</p>
<p>Fix Resolution: marked - 1.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2020-0163 (Medium) detected in marked-0.7.0.tgz - ## WS-2020-0163 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.7.0.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.7.0.tgz">https://registry.npmjs.org/marked/-/marked-0.7.0.tgz</a></p>
<p>Path to dependency file: wlodzimierz/src/Clients/web-spa/package.json</p>
<p>Path to vulnerable library: wlodzimierz/src/Clients/web-spa/node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- firebase-tools-8.20.0.tgz (Root Library)
- :x: **marked-0.7.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/liannoi/wlodzimierz/commit/5dce9501516f7b5db5bfd0b2a6f86de7769059fb">5dce9501516f7b5db5bfd0b2a6f86de7769059fb</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
marked before 1.1.1 is vulnerable to Regular Expression Denial of Service (REDoS). rules.js have multiple unused capture groups which can lead to a Denial of Service.
<p>Publish Date: 2020-07-02
<p>URL: <a href=https://github.com/markedjs/marked/commit/bd4f8c464befad2b304d51e33e89e567326e62e0>WS-2020-0163</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/releases/tag/v1.1.1">https://github.com/markedjs/marked/releases/tag/v1.1.1</a></p>
<p>Release Date: 2020-07-02</p>
<p>Fix Resolution: marked - 1.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | ws medium detected in marked tgz ws medium severity vulnerability vulnerable library marked tgz a markdown parser built for speed library home page a href path to dependency file wlodzimierz src clients web spa package json path to vulnerable library wlodzimierz src clients web spa node modules marked package json dependency hierarchy firebase tools tgz root library x marked tgz vulnerable library found in head commit a href found in base branch main vulnerability details marked before is vulnerable to regular expression denial of service redos rules js have multiple unused capture groups which can lead to a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution marked step up your open source security game with whitesource | 0 |
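The fix for the advisory above replaced unused capturing groups with non-capturing `(?:...)` groups, which changes backtracking behaviour but not what a pattern matches, so the two forms can be checked against each other. A small illustrative sketch — the pattern below is made up for demonstration and is not marked's actual rule:

```python
import re

# Hypothetical rule with an unused capturing group around the marker ...
vulnerable = re.compile(r"^(#{1,6}) +(.*)$")
# ... and the same rule with a non-capturing group, as in the fix.
hardened = re.compile(r"^(?:#{1,6}) +(.*)$")


def heading_text(pattern, line):
    """Extract the text captured by the pattern's last group."""
    match = pattern.match(line)
    return match.group(match.lastindex) if match else None
```

Both patterns accept and reject exactly the same inputs; only the group numbering shifts, which is why dropping the unused groups is a safe hardening.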
236,849 | 26,072,271,899 | IssuesEvent | 2022-12-24 01:09:41 | EcommEasy/EcommEasy | https://api.github.com/repos/EcommEasy/EcommEasy | opened | CVE-2022-23540 (Medium) detected in jsonwebtoken-8.5.1.tgz | security vulnerability | ## CVE-2022-23540 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsonwebtoken-8.5.1.tgz</b></p></summary>
<p>JSON Web Token implementation (symmetric and asymmetric)</p>
<p>Library home page: <a href="https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-8.5.1.tgz">https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-8.5.1.tgz</a></p>
<p>Path to dependency file: /EcommEasy/package.json</p>
<p>Path to vulnerable library: /node_modules/jsonwebtoken/package.json</p>
<p>
Dependency Hierarchy:
- :x: **jsonwebtoken-8.5.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/EcommEasy/EcommEasy/commit/363b3c5c1efcb2a7265f2d259bed12d00efb92c4">363b3c5c1efcb2a7265f2d259bed12d00efb92c4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In versions `<=8.5.1` of `jsonwebtoken` library, lack of algorithm definition in the `jwt.verify()` function can lead to signature validation bypass due to defaulting to the `none` algorithm for signature verification. Users are affected if you do not specify algorithms in the `jwt.verify()` function. This issue has been fixed, please update to version 9.0.0 which removes the default support for the none algorithm in the `jwt.verify()` method. There will be no impact, if you update to version 9.0.0 and you don’t need to allow for the `none` algorithm. If you need 'none' algorithm, you have to explicitly specify that in `jwt.verify()` options.
<p>Publish Date: 2022-12-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23540>CVE-2022-23540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: High
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-23540">https://www.cve.org/CVERecord?id=CVE-2022-23540</a></p>
<p>Release Date: 2022-12-22</p>
<p>Fix Resolution: jsonwebtoken - 9.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-23540 (Medium) detected in jsonwebtoken-8.5.1.tgz - ## CVE-2022-23540 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsonwebtoken-8.5.1.tgz</b></p></summary>
<p>JSON Web Token implementation (symmetric and asymmetric)</p>
<p>Library home page: <a href="https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-8.5.1.tgz">https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-8.5.1.tgz</a></p>
<p>Path to dependency file: /EcommEasy/package.json</p>
<p>Path to vulnerable library: /node_modules/jsonwebtoken/package.json</p>
<p>
Dependency Hierarchy:
- :x: **jsonwebtoken-8.5.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/EcommEasy/EcommEasy/commit/363b3c5c1efcb2a7265f2d259bed12d00efb92c4">363b3c5c1efcb2a7265f2d259bed12d00efb92c4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In versions `<=8.5.1` of `jsonwebtoken` library, lack of algorithm definition in the `jwt.verify()` function can lead to signature validation bypass due to defaulting to the `none` algorithm for signature verification. Users are affected if you do not specify algorithms in the `jwt.verify()` function. This issue has been fixed, please update to version 9.0.0 which removes the default support for the none algorithm in the `jwt.verify()` method. There will be no impact, if you update to version 9.0.0 and you don’t need to allow for the `none` algorithm. If you need 'none' algorithm, you have to explicitly specify that in `jwt.verify()` options.
<p>Publish Date: 2022-12-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23540>CVE-2022-23540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: High
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-23540">https://www.cve.org/CVERecord?id=CVE-2022-23540</a></p>
<p>Release Date: 2022-12-22</p>
<p>Fix Resolution: jsonwebtoken - 9.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in jsonwebtoken tgz cve medium severity vulnerability vulnerable library jsonwebtoken tgz json web token implementation symmetric and asymmetric library home page a href path to dependency file ecommeasy package json path to vulnerable library node modules jsonwebtoken package json dependency hierarchy x jsonwebtoken tgz vulnerable library found in head commit a href vulnerability details in versions of jsonwebtoken library lack of algorithm definition in the jwt verify function can lead to signature validation bypass due to defaulting to the none algorithm for signature verification users are affected if you do not specify algorithms in the jwt verify function this issue has been fixed please update to version which removes the default support for the none algorithm in the jwt verify method there will be no impact if you update to version and you don’t need to allow for the none algorithm if you need none algorithm you have to explicitly specify that in jwt verify options publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact high availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jsonwebtoken step up your open source security game with mend | 0 |
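The bypass described in the advisory above can be modelled with Python's standard library: a forged token whose header claims `alg: none` and whose signature is empty passes a verifier that does not pin its accepted algorithms, but fails one that does. This is an illustrative model of the flaw, not the `jsonwebtoken` library itself.

```python
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def forge_none_token(payload: dict) -> str:
    """Attacker-controlled token: 'none' algorithm, empty signature."""
    header = b64url(json.dumps({"alg": "none"}).encode())
    body = b64url(json.dumps(payload).encode())
    return f"{header}.{body}."


def sign_hs256(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(),
                          hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"


def verify(token: str, secret: bytes, algorithms=None) -> bool:
    header_b64, body_b64, sig = token.split(".")
    pad = lambda s: s + "=" * (-len(s) % 4)
    alg = json.loads(base64.urlsafe_b64decode(pad(header_b64)))["alg"]
    if algorithms is not None and alg not in algorithms:
        return False  # pinned algorithm list: 'none' is rejected
    if alg == "none":
        return True   # flawed default: trusts the attacker's header
    expected = b64url(hmac.new(secret, f"{header_b64}.{body_b64}".encode(),
                               hashlib.sha256).digest())
    return hmac.compare_digest(expected, sig)
```

This mirrors the fix in version 9.0.0: requiring callers to state their accepted algorithms removes the path where a self-declared `none` header skips signature verification entirely.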
436,484 | 30,554,062,414 | IssuesEvent | 2023-07-20 10:25:59 | quarkusio/quarkus | https://api.github.com/repos/quarkusio/quarkus | closed | Failed container 'unable to mount a file' error when running Quarkus local build Vale linter (DVale) on RHEL | kind/bug area/documentation |
### Describe the bug
Problem:
A container error occurs on RHEL when running any of the following Vale commands locally in the Quarkus git repo:
````
./mvnw -f docs test -Dvale -DvaleLevel=suggestion
./mvnw -f docs test -Dvale=git -DvaleLevel=warning
./mvnw -f docs test -Dvale='doc-.*' -DvaleLevel=error
````
2022-11-07 18:45:57,445 ERROR [🐳 .io/.15.5]] (main) Log output from the failed container:
````
2022-11-07 18:45:56,654 WARN [🐳 .io/.15.5]] (main) Unable to mount a file from test host into a running container. This may be a misconfiguration or limitation of your Docker environment. Some features might not work.
2022-11-07 18:45:56,681 INFO [🐳 .io/.15.5]] (main) Container docker.io/jdkato/vale:v2.15.5 is starting: d6305b094543c47cc91cb18994a10f93da493e6db208abf3f308a0520a5c8df4
2022-11-07 18:45:57,351 ERROR [🐳 .io/.15.5]] (main) Could not start container: java.lang.IllegalStateException: Container did not start correctly.
....
{
"Code": "E100",
"Text": "E100 [--config] Runtime error\n\npath '/vale/vale.ini' does not exist\n\nExecution stopped with code 1."
}
````


### Expected behavior
The Vale linter runs and checks Quarkus doc (AsciiDoc) content for Quarkus style rules as configured in `.vale.ini`.
### Actual behavior
2022-11-07 18:45:57,445 ERROR [🐳 .io/.15.5]] (main) Log output from the failed container:
````
2022-11-07 18:45:56,654 WARN [🐳 .io/.15.5]] (main) Unable to mount a file from test host into a running container. This may be a misconfiguration or limitation of your Docker environment. Some features might not work.
2022-11-07 18:45:56,681 INFO [🐳 .io/.15.5]] (main) Container docker.io/jdkato/vale:v2.15.5 is starting: d6305b094543c47cc91cb18994a10f93da493e6db208abf3f308a0520a5c8df4
2022-11-07 18:45:57,351 ERROR [🐳 .io/.15.5]] (main) Could not start container: java.lang.IllegalStateException: Container did not start correctly.
....
{
"Code": "E100",
"Text": "E100 [--config] Runtime error\n\npath '/vale/vale.ini' does not exist\n\nExecution stopped with code 1."
}
````


### How to Reproduce?
_No response_
### Output of `uname -a` or `ver`
Linux mpurcell.remote.csb 4.18.0-409.el8.x86_64 #1 SMP Tue Jul 12 00:42:37 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux
### Output of `java -version`
openjdk version "11.0.16" 2022-07-19 LTS
OpenJDK Runtime Environment (Red_Hat-11.0.16.0.8-1.el8_6) (build 11.0.16+8-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-11.0.16.0.8-1.el8_6) (build 11.0.16+8-LTS, mixed mode, sharing)
### GraalVM version (if different from Java)
N/A
### Quarkus version or git rev
main
### Build tool (ie. output of `mvnw --version` or `gradlew --version`)
Maven home: /usr/share/maven
Java version: 17.0.4, vendor: Red Hat, Inc., runtime: /usr/lib/jvm/java-17-openjdk-17.0.4.0.8-2.el8_6.x86_64
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.18.0-409.el8.x86_64", arch: "amd64", family: "unix"
### Additional information
_No response_
151,855 | 19,665,435,085 | IssuesEvent | 2022-01-10 21:53:31 | tyhal/crie | https://api.github.com/repos/tyhal/crie | closed | CVE-2019-11840 (Medium) detected in github.com/moby/moby-v1.13.1 | security vulnerability |
## CVE-2019-11840 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/moby/moby-v1.13.1</b></p></summary>
<p>Moby Project - a collaborative project for the container ecosystem to assemble container-based systems</p>
<p>
Dependency Hierarchy:
- :x: **github.com/moby/moby-v1.13.1** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tyhal/crie/commit/304e2783e903eb495b8bc99cd892d467bea7f95a">304e2783e903eb495b8bc99cd892d467bea7f95a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in supplementary Go cryptography libraries, aka golang-googlecode-go-crypto, before 2019-03-20. A flaw was found in the amd64 implementation of golang.org/x/crypto/salsa20 and golang.org/x/crypto/salsa20/salsa. If more than 256 GiB of keystream is generated, or if the counter otherwise grows greater than 32 bits, the amd64 implementation will first generate incorrect output, and then cycle back to previously generated keystream. Repeated keystream bytes can lead to loss of confidentiality in encryption applications, or to predictability in CSPRNG applications.
<p>Publish Date: 2019-05-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11840>CVE-2019-11840</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://go-review.googlesource.com/c/crypto/+/168406/">https://go-review.googlesource.com/c/crypto/+/168406/</a></p>
<p>Release Date: 2019-05-09</p>
<p>Fix Resolution: commit b7391e95e576cacdcdd422573063bc057239113d</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
48,151 | 7,382,230,020 | IssuesEvent | 2018-03-15 03:27:44 | factor/factor | https://api.github.com/repos/factor/factor | closed | help: auto-generate $values, $description for errors | documentation help markdown patch |
```factor
HELP: ordinary-word-missing-section
{ $values { "missing-section" string } { "word-name" string } }
{ $description "Throws an " { $link ordinary-word-missing-section } " error." }
{ $error-description "Thrown when an ordinary word's documentation is missing a required section." } ;
```
```factor
HELP: no-method
{ $values { "object" object } { "generic" "a generic word" } }
{ $description "Throws a " { $link no-method } " error." }
{ $error-description "Thrown by the " { $snippet "generic" } " word to indicate it does not have a method for the class of " { $snippet "object" } "." } ;
```
The top two elements, `$values` and `$description`, should be auto-generated by the help system from the word's definition, just as the help system can already autogenerate `$values` for ordinary words (surprisingly, they aren't autogenerated).
The `$description` element will always say "Throws an **error** error.", but if it needs to be different then it should be explicit.
That way, all we as docs writers have to do is write an `$error-description` which is actually error-specific.
55,376 | 23,466,320,646 | IssuesEvent | 2022-08-16 17:07:04 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Handle painless failures with data annotations, without shard failures | Feature:Scripted Fields enhancement loe:hours Team:VisEditors Team:AppServicesSv impact:low | (issue first reported to ES as https://github.com/elastic/elasticsearch/issues/41393 but I understand it is unlikely to be processed there, so I raise narrowed incarnation here)
My main pain point with Kibana scripted fields is error handling. I forgot to check for field presence, I tried accessing a text field via the fields array, I called a non-existent function, … → I get the dreaded "N shards failed" message and have to hunt through Elasticsearch logs for Java stacks. Brr.
It would be great if I could opt to just get the results instead, and have the errors stored inside them (for example as _painless_script_failed: "… error message …"). Then the problem description would be easily available, and the data which triggers the error could be seen directly in Kibana.
In fact I ended up doing that myself, by wrapping all my scripted fields with try/catch constructs doing exactly that – but such boilerplate code doesn't seem a good idea for a field intended to be sometimes written, or patched, even by less technical users.
UI-wise, I hope for something like
```
In case of errors :
( ) fail shard processing
(*) mark record with tag [ _scripted_field_abcdef_failed ]
[x] and store error diagnostics in field [ _painless_error_message ]
```
on the Scripted Field pane, which would be implemented simply by wrapping the entered text with an additional try/catch implementing this behaviour (the wrapping applied when the scripted field is appended to the query).
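For illustration, the try/catch wrapping described here might look roughly like the sketch below (hand-written Painless, not an existing Kibana option; the `price` field and the diagnostic-string format are made-up examples):

```painless
try {
    // original scripted-field body goes here
    if (doc['price'].size() == 0) {
        return 0;
    }
    return doc['price'].value * 2;
} catch (Exception e) {
    // instead of failing the whole shard, surface the problem in the result
    return '_painless_script_failed: ' + e.getMessage();
}
```

A field wrapped like this returns a diagnostic string instead of failing the query, at the cost of mixing result types — exactly the boilerplate the proposed UI option would generate automatically.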
(see also https://github.com/elastic/elasticsearch/issues/41393 for some examples of errors I faced and some more notes)
172,978 | 21,088,954,179 | IssuesEvent | 2022-04-04 01:03:20 | t2kx/juice-shop | https://api.github.com/repos/t2kx/juice-shop | opened | CVE-2022-23308 (High) detected in src73.0.3677.0 | security vulnerability |
## CVE-2022-23308 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>src73.0.3677.0</b></p></summary>
<p>
<p>Library home page: <a href=https://chromium.googlesource.com/chromium/src>https://chromium.googlesource.com/chromium/src</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/libxmljs2/vendor/libxml/valid.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/libxmljs2/vendor/libxml/valid.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
valid.c in libxml2 before 2.9.13 has a use-after-free of ID and IDREF attributes.
<p>Publish Date: 2022-02-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-23308>CVE-2022-23308</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://mail.gnome.org/archives/xml/2022-February/msg00015.html">https://mail.gnome.org/archives/xml/2022-February/msg00015.html</a></p>
<p>Release Date: 2022-02-26</p>
<p>Fix Resolution: v2.9.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
96,275 | 19,977,883,953 | IssuesEvent | 2022-01-29 11:54:17 | ourjapanlife/findadoc-frontend | https://api.github.com/repos/ourjapanlife/findadoc-frontend | closed | Figure out sporadic eslint issue | help wanted code cleanup Hacktoberfest | Eslint occasionally fails in CI. For example see the #203 PR
This thread might be useful: https://github.com/yarnpkg/yarn/issues/7212
236,539 | 26,019,564,151 | IssuesEvent | 2022-12-21 11:23:52 | SASTREPO/third | https://api.github.com/repos/SASTREPO/third | opened | CVE-2020-26217 (High) detected in xstream-1.4.5.jar | security vulnerability |
## CVE-2020-26217 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.5.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Library home page: <a href="http://codehaus.org/xstream-parent/xstream/">http://codehaus.org/xstream-parent/xstream/</a></p>
<p>Path to dependency file: /webgoat-server/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.4.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SASTREPO/third/commit/52fbaa15eab0b8d33e9816e41dd942396306ad0a">52fbaa15eab0b8d33e9816e41dd942396306ad0a</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream before version 1.4.14 is vulnerable to Remote Code Execution. The vulnerability may allow a remote attacker to run arbitrary shell commands only by manipulating the processed input stream. Only users who rely on blocklists are affected. Anyone using XStream's Security Framework allowlist is not affected. The linked advisory provides code workarounds for users who cannot upgrade. The issue is fixed in version 1.4.14.
<p>Publish Date: 2020-11-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-26217>CVE-2020-26217</a></p>
</p>
</details>
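The blocklist-vs-allowlist point above generalizes beyond XStream: a deny-list has to enumerate every dangerous gadget class and silently misses new ones, while an allow-list only admits known-safe types. A minimal, library-agnostic Python sketch of the two policies (all class names here are hypothetical):

```python
# Blocklist: must enumerate every dangerous class; new gadgets slip through.
DANGEROUS = {"java.beans.EventHandler", "java.lang.ProcessBuilder"}
# Allowlist: only the application's own known-safe types are admitted.
SAFE = {"com.example.Order", "com.example.Item"}

def blocklist_allows(cls: str) -> bool:
    return cls not in DANGEROUS

def allowlist_allows(cls: str) -> bool:
    return cls in SAFE

# A gadget class nobody knew about when the blocklist was written:
new_gadget = "jdk.example.UnknownGadget"
print(blocklist_allows(new_gadget))   # True  -> deserialized, attacker wins
print(allowlist_allows(new_gadget))   # False -> rejected
```

This is why the advisory steers users toward XStream's Security Framework allowlist rather than blocklist-based hardening.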
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
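The metrics above correspond to the CVSS v3.0 vector AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H. As a sanity check, a minimal sketch of the v3.0 base-score arithmetic (metric weights from the first.org specification; Scope Unchanged only) reproduces the 8.8:

```python
import math

# CVSS v3.0 metric weights from the first.org specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required (Scope Unchanged)
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact weights

def roundup(x):
    # CVSS "round up to one decimal place" rule
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    # Handles Scope: Unchanged only, which is the case in this report.
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "L", "N", "H", "H", "H"))  # 8.8
```

The same function reproduces the scores of the other reports in this set (9.8 for the jackson-databind vector, 6.2 for the local kernel vector).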
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-mw36-7c6c-q4q2">https://github.com/x-stream/xstream/security/advisories/GHSA-mw36-7c6c-q4q2</a></p>
<p>Release Date: 2020-11-16</p>
<p>Fix Resolution: 1.4.13-java7</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| True | non_priority | 0 |
3,016 | 2,731,076,488 | IssuesEvent | 2015-04-16 18:15:24 | OpenSourcePolicyCenter/Tax-Calculator | https://api.github.com/repos/OpenSourcePolicyCenter/Tax-Calculator | opened | SS_Income_c is misidentified | bug documentation tax | The long-name is currently Max Taxable AGI for Social Security; it should be Maximum Taxable
Earnings for Social Security. The description is similarly misleading. | 1.0 | non_priority | 0 |
14,746 | 11,100,406,554 | IssuesEvent | 2019-12-16 19:08:06 | E3SM-Project/scream | https://api.github.com/repos/E3SM-Project/scream | closed | Make e3sm build with scream instead of cam | cmake infrastructure | We added a dummy `atm_mct.F90` file for mct to "build". Now we need to make cime/cmake machinery know how to build scream as atm component, instead of cam. | 1.0 | non_priority | 0 |
225,760 | 24,881,296,320 | IssuesEvent | 2022-10-28 01:31:17 | nidhi7598/linux-4.19.72 | https://api.github.com/repos/nidhi7598/linux-4.19.72 | closed | CVE-2019-12379 (Medium) detected in linuxlinux-4.19.254 - autoclosed | security vulnerability | ## CVE-2019-12379 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.254</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72/commit/10a8c99e4f60044163c159867bc6f5452c1c36e5">10a8c99e4f60044163c159867bc6f5452c1c36e5</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/tty/vt/consolemap.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/tty/vt/consolemap.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** An issue was discovered in con_insert_unipair in drivers/tty/vt/consolemap.c in the Linux kernel through 5.1.5. There is a memory leak in a certain case of an ENOMEM outcome of kmalloc. NOTE: This ID is disputed as not being an issue.
<p>Publish Date: 2019-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12379>CVE-2019-12379</a></p>
</p>
</details>
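The reported pattern (a second allocation fails, and the first one is never released) is language-independent. A toy Python sketch of the same error-path bug, with explicit acquire/release bookkeeping standing in for kmalloc/kfree (the function name only echoes the kernel one; this is not the kernel code):

```python
class Pool:
    """Tracks live allocations so the leak is observable."""
    def __init__(self, capacity):
        self.capacity, self.live = capacity, 0
    def alloc(self):
        if self.live >= self.capacity:
            return None          # ENOMEM analogue
        self.live += 1
        return object()
    def free(self, _):
        self.live -= 1

def insert_unipair_buggy(pool):
    p1 = pool.alloc()
    if p1 is None:
        return False
    p2 = pool.alloc()
    if p2 is None:
        return False             # BUG: p1 is never freed on this path
    pool.free(p1)
    pool.free(p2)                # normal path cleans up
    return True

pool = Pool(capacity=1)
insert_unipair_buggy(pool)       # second alloc fails...
print(pool.live)                 # 1 -> the leaked first allocation
```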
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12379">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12379</a></p>
<p>Release Date: 2020-08-24</p>
<p>Fix Resolution: v5.1-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | 0 |
145,496 | 22,702,507,965 | IssuesEvent | 2022-07-05 12:01:38 | EscolaDeSaudePublica/DesignLab | https://api.github.com/repos/EscolaDeSaudePublica/DesignLab | opened | Transcrever as histórias desenvolvidas | Oficina Design Sem Projeto Definido | ## **Objective**
**As a** designer
**I want** to transcribe the personas' stories
**So that** the workshop findings are organized
## **Context**
- For the Service Design workshops held at ESP/CE (July 29-30, 2022), the results obtained need to be analyzed and systematized, so that the findings can be shared, a historical record of the process is kept, and the information can be put to better use in the activities that follow.
## **Scope**
- [ ] Transcribe the personas' stories
## Notes
- Workshop board in Figma, still under construction:
https://www.figma.com/file/0NCBQdGdrnvop0vrV1nxSS/Board-Oficina-ESP?node-id=0%3A1
- Present to the product team
| 1.0 | non_priority | 0 |
43,158 | 17,458,149,508 | IssuesEvent | 2021-08-06 06:25:50 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | WEBSITE_VNET_ROUTE_ALL no longer a valid configuration | app-service/svc triaged cxp doc-enhancement Pri3 | As of approximately 7/28/2021, the WEBSITE_VNET_ROUTE_ALL configuration parameter no longer controls all network traffic being routed through the VNET. Rather, a site configuration, "vnetRouteAllEnabled" must be enabled in order to accomplish the same integration with the VNET and therefore allow the use of Private DNS Zones etc.

This caused us issues with our terraform scripts, since the "vnetRouteAllEnabled" site config is not supported there yet. We were able to find a workaround, but the documentation needs to be updated: WEBSITE_VNET_ROUTE_ALL is no longer how to route all traffic through the VNET.
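For reference, the workaround amounts to setting the flag on the site's `config/web` ARM resource directly. A sketch that only builds the request pieces: the subscription, resource group, app name, and api-version are placeholders, and `vnetRouteAllEnabled` is the property named above, not something I verified independently:

```python
import json

# Hypothetical resource path; "<sub>", "<rg>", "<app>" and the api-version
# are placeholders, not real values.
SITE_ID = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app>"

def route_all_patch(enabled: bool):
    """Build the ARM request target and body that toggle vnetRouteAllEnabled."""
    url = "https://management.azure.com" + SITE_ID + "/config/web?api-version=<version>"
    body = json.dumps({"properties": {"vnetRouteAllEnabled": enabled}})
    return url, body

url, body = route_all_patch(True)
print(body)  # {"properties": {"vnetRouteAllEnabled": true}}
```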
Edit: Add document details
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a7a98803-1438-b1b5-f543-7dd88bc4294e
* Version Independent ID: 37ff1d0f-ed8e-5e4d-1f4c-1b9f6cffb938
* Content: [Integrate app with Azure Virtual Network - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet#routes)
* Content Source: [articles/app-service/web-sites-integrate-with-vnet.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/web-sites-integrate-with-vnet.md)
* Service: **app-service**
* GitHub Login: @ccompy
* Microsoft Alias: **ccompy** | 1.0 | non_priority | 0 |
38,835 | 12,603,293,677 | IssuesEvent | 2020-06-11 13:15:45 | jgeraigery/logstash | https://api.github.com/repos/jgeraigery/logstash | opened | CVE-2019-17531 (High) detected in jackson-databind-2.9.10.jar | security vulnerability | ## CVE-2019-17531 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.10.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: le/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.10/e201bb70b7469ba18dd58ed8268aa44e702fa2f0/jackson-databind-2.9.10.jar,le/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.10/e201bb70b7469ba18dd58ed8268aa44e702fa2f0/jackson-databind-2.9.10.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.10.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/logstash/commit/201cee856b2ad93e442e269232049cdde83045a3">201cee856b2ad93e442e269232049cdde83045a3</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload.
<p>Publish Date: 2019-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531>CVE-2019-17531</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531</a></p>
<p>Release Date: 2019-10-12</p>
<p>Fix Resolution: 2.10</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.10","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"}],"vulnerabilityIdentifier":"CVE-2019-17531","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | non_priority | 0 |
12,523 | 5,205,005,915 | IssuesEvent | 2017-01-24 16:51:58 | zfsonlinux/zfs | https://api.github.com/repos/zfsonlinux/zfs | closed | kernel.org build failure introduced by 4ea3f86 | Build Issue | The following build failure was accidentally introduced for in-kernel builds as part of commit 4ea3f86. @gmelikov we're going to need to tackle this one right away, either by addressing the warning or by reverting 4ea3f86.
http://build.zfsonlinux.org/builders/Kernel.org%20Built-in%20x86_64%20%28BUILD%29/builds/10676 | 1.0 | non_priority | 0 |
94,857 | 19,597,533,437 | IssuesEvent | 2022-01-05 19:49:44 | TTTReborn/tttreborn | https://api.github.com/repos/TTTReborn/tttreborn | closed | Investigate lag compensation with traces | type/enhancement help wanted area/code | ## Prerequisites
- [x] I have searched [existing bug reports](https://github.com/TTTReborn/ttt-reborn/issues?q=label%3Aissue%2Fbug-report) and confirmed my issue is not already reported.
## Summary
FP recently added a new system that allows you to add lag compensation to traces. Their "Hover" game mode uses these for various functions. We should check this out and see if it'll improve hitreg for higher pinged players.
| 1.0 | Investigate lag compensation with traces - ## Prerequisites
- [x] I have searched [existing bug reports](https://github.com/TTTReborn/ttt-reborn/issues?q=label%3Aissue%2Fbug-report) and confirmed my issue is not already reported.
## Summary
FP recently added a new system that allows you to add lag compensation to traces. Their "Hover" game mode uses these for various functions. We should check this out and see if it'll improve hitreg for higher pinged players.
| non_priority | investigate lag compensation with traces prerequisites i have searched and confirmed my issue is not already reported summary fp recently added a new system that allows you to add lag compensation to traces their hover game mode uses these for various functions we should check this out and see if it ll improve hitreg for higher pinged players | 0 |
14,957 | 10,236,153,536 | IssuesEvent | 2019-08-19 10:52:38 | opencb/opencga | https://api.github.com/repos/opencb/opencga | closed | Upgrade protobuf and gRPC dependencies | task web services | Current _protobuf_ (3.5.1) and _gRPC_ (1.9.1) have not been upgraded in a couple of years. These and the maven plugin should be updated to newer versions. The latest gRPC version is 1.23.0, which depends on protobuf 3.9.0, so the proposed versions are:
- protobuf 3.9.0
- gRPC 1.23.0
- maven plugin 0.6.1
Only patch versions will be updated during the rest of 2.0.0 release. | 1.0 | Upgrade protobuf and gRPC dependencies - Current _protobuf_ (3.5.1) and _gRPC_ (1.9.1) have not been upgraded in a couple of years. These and the maven plugin should be updated to newer versions. The latest gRPC version is 1.23.0, which depends on protobuf 3.9.0, so the proposed versions are:
- protobuf 3.9.0
- gRPC 1.23.0
- maven plugin 0.6.1
Only patch versions will be updated during the rest of 2.0.0 release. | non_priority | upgrade protobuf and grpc dependencies current protobuf and grpc have not been upgraded in a couple of years these and the maven plugin should be updated to newer versions the latets grpc version is that depends on protobuf so the proposed versions are protobuf grpc maven plugin only patch versions will be updated during the rest of release | 0 |
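The version bumps listed in this row translate into `pom.xml` entries along these lines. The coordinates shown (`com.google.protobuf:protobuf-java`, `io.grpc:grpc-stub`, and `org.xolstice.maven.plugins:protobuf-maven-plugin`) are the commonly used ones and are assumptions here — the issue names only the versions, and OpenCGA's actual module layout may differ:

```xml
<properties>
  <!-- Versions proposed in the issue -->
  <protobuf.version>3.9.0</protobuf.version>
  <grpc.version>1.23.0</grpc.version>
</properties>

<dependencies>
  <dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>${protobuf.version}</version>
  </dependency>
  <dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-stub</artifactId>
    <version>${grpc.version}</version>
  </dependency>
</dependencies>

<build>
  <plugins>
    <!-- Assumed plugin coordinates; the issue gives only "maven plugin 0.6.1" -->
    <plugin>
      <groupId>org.xolstice.maven.plugins</groupId>
      <artifactId>protobuf-maven-plugin</artifactId>
      <version>0.6.1</version>
    </plugin>
  </plugins>
</build>
```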
90,023 | 15,856,047,811 | IssuesEvent | 2021-04-08 01:23:35 | Rossb0b/Filmographie-Angular | https://api.github.com/repos/Rossb0b/Filmographie-Angular | opened | WS-2019-0492 (High) detected in handlebars-4.1.2.tgz | security vulnerability | ## WS-2019-0492 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /Filmographie-Angular/package.json</p>
<p>Path to vulnerable library: Filmographie-Angular/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.8.7.tgz (Root Library)
- istanbul-0.4.5.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The package's lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-11-19
<p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/198887808780bbef9dba67a8af68ece091d5baa7>WS-2019-0492</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1324">https://www.npmjs.com/advisories/1324</a></p>
<p>Release Date: 2019-11-19</p>
<p>Fix Resolution: handlebars - 3.0.8,4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0492 (High) detected in handlebars-4.1.2.tgz - ## WS-2019-0492 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /Filmographie-Angular/package.json</p>
<p>Path to vulnerable library: Filmographie-Angular/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.8.7.tgz (Root Library)
- istanbul-0.4.5.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The package's lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-11-19
<p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/198887808780bbef9dba67a8af68ece091d5baa7>WS-2019-0492</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1324">https://www.npmjs.com/advisories/1324</a></p>
<p>Release Date: 2019-11-19</p>
<p>Fix Resolution: handlebars - 3.0.8,4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | ws high detected in handlebars tgz ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file filmographie angular package json path to vulnerable library filmographie angular node modules handlebars package json dependency hierarchy build angular tgz root library istanbul tgz x handlebars tgz vulnerable library vulnerability details handlebars before and x before is vulnerable to arbitrary code execution the package s lookup helper fails to properly validate templates allowing attackers to submit templates that execute arbitrary javascript in the system publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource | 0 |
54,734 | 6,845,665,140 | IssuesEvent | 2017-11-13 09:12:16 | mozilla/OpenDesign | https://api.github.com/repos/mozilla/OpenDesign | closed | Moderator / Branding Update | Logo design needed | ## Goal:
An updated icon that aligns with new branding for [Moderator](https://moderator.mozilla.org/)
## Info:
This request has come out of the current (work in progress) redesign of the SSO (Okta) Dashboard. See below. Currently all Mozilla sites are represented by the black highlight Zilla treatment. This is a placeholder until icons are available.
Yulia (Brand) is the originator of this request and should be kept apprised of progress/requests for feedback and direction.

## Style Information:
Single color icon that can be used with or without a wordmark is preferable. Please default to Yulia for further direction. Also, the site owner/team should be consulted for conceptual direction.
## Deadline:
Initial Beta release for the SSO (Auth0) Dashboard is currently in testing. If this can be ready prior to full release implementation, fantastic! Though understandable if that timeline is too aggressive and can not be met. We will have regular releases/feature enhancements and can integrate icon(s) updates as they are available.
## Tag:
Design Needed | 1.0 | Moderator / Branding Update - ## Goal:
An updated icon that aligns with new branding for [Moderator](https://moderator.mozilla.org/)
## Info:
This request has come out of the current (work in progress) redesign of the SSO (Okta) Dashboard. See below. Currently all Mozilla sites are represented by the black highlight Zilla treatment. This is a placeholder until icons are available.
Yulia (Brand) is the originator of this request and should be kept apprised of progress/requests for feedback and direction.

## Style Information:
Single color icon that can be used with or without a wordmark is preferable. Please default to Yulia for further direction. Also, the site owner/team should be consulted for conceptual direction.
## Deadline:
Initial Beta release for the SSO (Auth0) Dashboard is currently in testing. If this can be ready prior to full release implementation, fantastic! Though understandable if that timeline is too aggressive and can not be met. We will have regular releases/feature enhancements and can integrate icon(s) updates as they are available.
## Tag:
Design Needed | non_priority | moderator branding update goal an updated icon that aligns with new branding for info this request has come out of the current work in progress redesign of the sso okta dashboard see below currently all mozilla sites are represented by the black highlight zilla treatment this is a placeholder until icons are available yulia brand is the originator of this request and should be kept apprised of progress requests for feedback and direction style information single color icon that can be used with or without a wordmark is preferable please default to yulia for further direction also the site owner team should be consulted for conceptual direction deadline initial beta release for the sso dashboard is currently in testing if this can be ready prior to full release implementation fantastic though understandable if that timeline is too aggressive and can not be met we will have regular releases feature enhancements and can integrate icon s updates as they are available tag design needed | 0 |
177,182 | 21,465,743,127 | IssuesEvent | 2022-04-26 03:22:51 | DavidSpek/kubeflownotebooks | https://api.github.com/repos/DavidSpek/kubeflownotebooks | reopened | CVE-2021-29063 (High) detected in mpmath-1.2.1-py3-none-any.whl | security vulnerability | ## CVE-2021-29063 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mpmath-1.2.1-py3-none-any.whl</b></p></summary>
<p>Python library for arbitrary-precision floating-point arithmetic</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d4/cf/3965bddbb4f1a61c49aacae0e78fd1fe36b5dc36c797b31f30cf07dcbbb7/mpmath-1.2.1-py3-none-any.whl">https://files.pythonhosted.org/packages/d4/cf/3965bddbb4f1a61c49aacae0e78fd1fe36b5dc36c797b31f30cf07dcbbb7/mpmath-1.2.1-py3-none-any.whl</a></p>
<p>Path to dependency file: /jupyter-scipy/requirements.txt</p>
<p>Path to vulnerable library: /jupyter-scipy/requirements.txt,/jupyter-scipy/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **mpmath-1.2.1-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DavidSpek/kubeflownotebooks/commit/a48e16143e9d0b887c9d22dba7c3953b814d7b3d">a48e16143e9d0b887c9d22dba7c3953b814d7b3d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Regular Expression Denial of Service (ReDOS) vulnerability was discovered in Mpmath v1.0.0 when the mpmathify function is called.
<p>Publish Date: 2021-06-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29063>CVE-2021-29063</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-29063">https://nvd.nist.gov/vuln/detail/CVE-2021-29063</a></p>
<p>Release Date: 2021-06-21</p>
<p>Fix Resolution: no_fix</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-29063 (High) detected in mpmath-1.2.1-py3-none-any.whl - ## CVE-2021-29063 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mpmath-1.2.1-py3-none-any.whl</b></p></summary>
<p>Python library for arbitrary-precision floating-point arithmetic</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d4/cf/3965bddbb4f1a61c49aacae0e78fd1fe36b5dc36c797b31f30cf07dcbbb7/mpmath-1.2.1-py3-none-any.whl">https://files.pythonhosted.org/packages/d4/cf/3965bddbb4f1a61c49aacae0e78fd1fe36b5dc36c797b31f30cf07dcbbb7/mpmath-1.2.1-py3-none-any.whl</a></p>
<p>Path to dependency file: /jupyter-scipy/requirements.txt</p>
<p>Path to vulnerable library: /jupyter-scipy/requirements.txt,/jupyter-scipy/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **mpmath-1.2.1-py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DavidSpek/kubeflownotebooks/commit/a48e16143e9d0b887c9d22dba7c3953b814d7b3d">a48e16143e9d0b887c9d22dba7c3953b814d7b3d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Regular Expression Denial of Service (ReDOS) vulnerability was discovered in Mpmath v1.0.0 when the mpmathify function is called.
<p>Publish Date: 2021-06-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29063>CVE-2021-29063</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-29063">https://nvd.nist.gov/vuln/detail/CVE-2021-29063</a></p>
<p>Release Date: 2021-06-21</p>
<p>Fix Resolution: no_fix</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in mpmath none any whl cve high severity vulnerability vulnerable library mpmath none any whl python library for arbitrary precision floating point arithmetic library home page a href path to dependency file jupyter scipy requirements txt path to vulnerable library jupyter scipy requirements txt jupyter scipy requirements txt dependency hierarchy x mpmath none any whl vulnerable library found in head commit a href found in base branch master vulnerability details a regular expression denial of service redos vulnerability was discovered in mpmath when the mpmathify function is called publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution no fix step up your open source security game with whitesource | 0 |
56,356 | 23,760,040,752 | IssuesEvent | 2022-09-01 08:06:27 | azure-deprecation/dashboard | https://api.github.com/repos/azure-deprecation/dashboard | closed | Azure Active Directory Connect v1 is retiring on 31 August 2022 | verified impact:upgrade-required area:feature services:active-directory cloud:public | Azure Active Directory Connect v1 is retiring on 31 August 2022
**Deadline:** Aug 31, 2022
**Impacted Services:**
- Azure Active Directory
**More information:**
- https://azure.microsoft.com/en-au/updates/action-required-upgrade-to-the-latest-version-of-azure-ad-connect-before-31-august-2022/
- https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-upgrade-previous-version
### Notice
Here's the official report from Microsoft:
> On 31 August 2022, all 1.x versions of Azure Active Directory (Azure AD) Connect will be retired because they include SQL Server 2012 components that will no longer be supported. Upgrade to the most recent version of Azure AD Connect by that date.
### Timeline
| Phase | Date | Description |
|:------|------|-------------|
|Announcement|Sep 02, 2021|Deprecation was announced|
|Deprecation|Aug 31, 2022|Using it is no longer supported, effective impact is unclear|
### Impact
Azure Active Directory Connect v1 is retiring on 31 August 2022 and an upgrade is required.
### Required Action
A migration guide is available in the [documentation](https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-upgrade-previous-version).
Here's the official report from Microsoft:
> To avoid service disruptions, [upgrade to the latest version of Azure AD Connect](https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-upgrade-previous-version) before 31 August 2022.
### Contact
You can get in touch through the following options:
- Contact the product group through email ([email](mailto:example@example.com)).
- Get answers from Microsoft Q&A ([link](mailto:https://aka.ms/qna-azure-ad-connect)).
- Contact Azure support ([link](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)).
| 1.0 | Azure Active Directory Connect v1 is retiring on 31 August 2022 - Azure Active Directory Connect v1 is retiring on 31 August 2022
**Deadline:** Aug 31, 2022
**Impacted Services:**
- Azure Active Directory
**More information:**
- https://azure.microsoft.com/en-au/updates/action-required-upgrade-to-the-latest-version-of-azure-ad-connect-before-31-august-2022/
- https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-upgrade-previous-version
### Notice
Here's the official report from Microsoft:
> On 31 August 2022, all 1.x versions of Azure Active Directory (Azure AD) Connect will be retired because they include SQL Server 2012 components that will no longer be supported. Upgrade to the most recent version of Azure AD Connect by that date.
### Timeline
| Phase | Date | Description |
|:------|------|-------------|
|Announcement|Sep 02, 2021|Deprecation was announced|
|Deprecation|Aug 31, 2022|Using it is no longer supported, effective impact is unclear|
### Impact
Azure Active Directory Connect v1 is retiring on 31 August 2022 and an upgrade is required.
### Required Action
A migration guide is available in the [documentation](https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-upgrade-previous-version).
Here's the official report from Microsoft:
> To avoid service disruptions, [upgrade to the latest version of Azure AD Connect](https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-upgrade-previous-version) before 31 August 2022.
### Contact
You can get in touch through the following options:
- Contact the product group through email ([email](mailto:example@example.com)).
- Get answers from Microsoft Q&A ([link](mailto:https://aka.ms/qna-azure-ad-connect)).
- Contact Azure support ([link](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)).
| non_priority | azure active directory connect is retiring on august azure active directory connect is retiring on august deadline aug impacted services azure active directory more information notice here s the official report from microsoft on august all x versions of azure active directory azure ad connect will be retired because they include sql server components that will no longer be supported upgrade to the most recent version of azure ad connect by that date timeline phase date description announcement sep deprecation was announced deprecation aug using it is no longer supported effective impact is unclear impact azure active directory connect is retiring on august and an upgrade is required required action a migration guide is available in the here s the official report from microsoft to avoid service disruptions before august contact you can get in touch through the following options contact the product group through email mailto example example com get answers from microsoft q a mailto contact azure support | 0 |
20,492 | 6,041,571,871 | IssuesEvent | 2017-06-11 02:31:38 | Learning-Fuze/C2.17_flash_cards | https://api.github.com/repos/Learning-Fuze/C2.17_flash_cards | closed | Commented Out Code Blocks | code style | The master branch's source code should not contain blocks of commented out code, e.g. [client/src/actions/index.js: line 51](https://github.com/Learning-Fuze/C2.17_flash_cards/blob/master/client/src/actions/index.js#L51) | 1.0 | Commented Out Code Blocks - The master branch's source code should not contain blocks of commented out code, e.g. [client/src/actions/index.js: line 51](https://github.com/Learning-Fuze/C2.17_flash_cards/blob/master/client/src/actions/index.js#L51) | non_priority | commented out code blocks the master branch s source code should not contain blocks of commented out code e g | 0 |
39,074 | 6,719,289,750 | IssuesEvent | 2017-10-15 22:28:40 | trevorstephens/gplearn | https://api.github.com/repos/trevorstephens/gplearn | closed | How to write custom function with make_function? | documentation question | I did the following and got errors:
from gplearn.functions import make_function
def internaltanh(x):
return np.tanh(x1)
dtanh = make_function(function=internaltanh, name='dtanh',arity=1)
function_set2 = ['add', 'sub', 'mul', 'div', 'sqrt', 'log','dtanh', 'abs', 'neg', 'inv']
>>> gp2 = SymbolicTransformer(generations=20, population_size=5000,
... hall_of_fame=100, n_components=10,
... function_set=function_set2,
... parsimony_coefficient=0.0001,
... max_samples=0.9, verbose=1,
... random_state=0, n_jobs=3)
>>> gp2.fit(xtrain2, xtrain.y)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/gplearn/genetic.py", line 317, in fit
'`function_set`.' % function)
ValueError: invalid function name dtanh found in `function_set`.
How can we fix this error? Can we have an example?
Thanks
Dr Patrick
 | 1.0 | How to write custom function with make_function? - I did the following and got errors:
from gplearn.functions import make_function
def internaltanh(x):
return np.tanh(x1)
dtanh = make_function(function=internaltanh, name='dtanh',arity=1)
function_set2 = ['add', 'sub', 'mul', 'div', 'sqrt', 'log','dtanh', 'abs', 'neg', 'inv']
>>> gp2 = SymbolicTransformer(generations=20, population_size=5000,
... hall_of_fame=100, n_components=10,
... function_set=function_set2,
... parsimony_coefficient=0.0001,
... max_samples=0.9, verbose=1,
... random_state=0, n_jobs=3)
>>> gp2.fit(xtrain2, xtrain.y)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/gplearn/genetic.py", line 317, in fit
'`function_set`.' % function)
ValueError: invalid function name dtanh found in `function_set`.
How can we fix this error? Can we have an example?
Thanks
Dr Patrick
| non_priority | how to write custom function with make function i did the following and gotten errors from gplearn functions import make function def internaltanh x return np tanh dtanh make function function internaltanh name dtanh arity function symbolictransformer generations population size hall of fame n components function set function parsimony coefficient max samples verbose random state n jobs fit xtrain y traceback most recent call last file line in file usr local lib dist packages gplearn genetic py line in fit function set function valueerror invalid function name dtanh found in function set how can we fix this error can we have an example thanks dr patrick | 0 |
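The error in this report is gplearn's documented behavior rather than a bug: strings in `function_set` are resolved against gplearn's built-in function table only, so a function created with `make_function` must be placed in `function_set` as the object itself (`dtanh`), not as the string `'dtanh'`. (The repro also has an unrelated typo — `np.tanh(x1)` should be `np.tanh(x)`.) The sketch below is a pure-Python stand-in that mimics that lookup to show both the failure and the fix; `CustomFunction`, `resolve_function_set`, and the `_BUILTINS` table are illustrative stand-ins, not gplearn's real internals:

```python
import math

# Built-in names gplearn can resolve from strings (illustrative subset).
_BUILTINS = {'add', 'sub', 'mul', 'div', 'sqrt', 'log', 'abs', 'neg', 'inv'}

class CustomFunction:
    """Stand-in for the object gplearn's make_function returns."""
    def __init__(self, function, name, arity):
        self.function = function
        self.name = name
        self.arity = arity

    def __call__(self, *args):
        return self.function(*args)

def make_function(function, name, arity):
    return CustomFunction(function, name, arity)

def resolve_function_set(function_set):
    """Mimic the fit()-time lookup: strings must name built-ins;
    custom functions must be passed as objects, not name strings."""
    resolved = []
    for item in function_set:
        if isinstance(item, str):
            if item not in _BUILTINS:
                raise ValueError(
                    'invalid function name %s found in `function_set`.' % item)
            resolved.append(item)
        elif isinstance(item, CustomFunction):
            resolved.append(item)
        else:
            raise ValueError('invalid type found in `function_set`.')
    return resolved

dtanh = make_function(function=math.tanh, name='dtanh', arity=1)

# Passing the *name* fails, reproducing the traceback in the issue:
try:
    resolve_function_set(['add', 'sub', 'dtanh'])
except ValueError as exc:
    print(exc)  # invalid function name dtanh found in `function_set`.

# Passing the *object* works:
ok = resolve_function_set(['add', 'sub', dtanh])
print(ok[-1].name, ok[-1](0.0))  # dtanh 0.0
```

With the real library, the fix is simply `function_set2 = ['add', 'sub', 'mul', 'div', 'sqrt', 'log', dtanh, 'abs', 'neg', 'inv']` — the object, not its name.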
243,261 | 18,679,994,892 | IssuesEvent | 2021-11-01 03:26:25 | hotosm/osm-stats-api | https://api.github.com/repos/hotosm/osm-stats-api | closed | Embed documentation comments in Python code | type:documentation | While Python uses pydoc, which uses comment strings in the class definitions, Doxygen produces web viewable documentation by parsing Doxygen specific comments. This enables people to get a list of all files, classes, methods, etc.. which lets others analyze the code from a high-level. As code gets developed, documentation will also grow. This task is mostly a reminder to integrate Doxygen comments into code development along with sufficient comments for pydoc. Documentation is important to let others learn about our project if they want to contribute or use it. | 1.0 | Embed documentation comments in Python code - While Python uses pydoc, which uses comment strings in the class definitions, Doxygen produces web viewable documentation by parsing Doxygen specific comments. This enables people to get a list of all files, classes, methods, etc.. which lets others analyze the code from a high-level. As code gets developed, documentation will also grow. This task is mostly a reminder to integrate Doxygen comments into code development along with sufficient comments for pydoc. Documentation is important to let others learn about our project if they want to contribute or use it. 
| non_priority | embed documentation comments in python code while python uses pydoc which uses comment strings in the class definitions doxygen produces web viewable documentation by parsing doxygen specific comments this enables people to get a list of all files classes methods etc which lets others analyze the code from a high level as code gets developed documentation will also grow this task is mostly a reminder to integrate doxygen comments into code development along with sufficient comments for pydoc documentation is important to let others learn about our project if they want to contribute or use it | 0 |
209,742 | 16,057,750,462 | IssuesEvent | 2021-04-23 08:10:16 | w3c/csswg-drafts | https://api.github.com/repos/w3c/csswg-drafts | closed | [css-text] U+205F Medium Mathematical Space (MMSP) not mentioned | Closed Accepted by Editor Discretion Commenter Timed Out (Assumed Satisfied) Tested Tracked in DoC css-text-3 i18n-tracker | https://www.w3.org/TR/css-text-3/#word-spacing-property says "General punctuation and fixed-width spaces (such as U+3000 and U+2000 through U+200A) are not considered word-separator characters."
U+205F Medium Mathematical Space is not mentioned and so it is not clear if it is considered a word-separator character. I'm guessing not. It should be mentioned to make it clear. | 1.0 | [css-text] U+205F Medium Mathematical Space (MMSP) not mentioned - https://www.w3.org/TR/css-text-3/#word-spacing-property says "General punctuation and fixed-width spaces (such as U+3000 and U+2000 through U+200A) are not considered word-separator characters."
U+205F Medium Mathematical Space is not mentioned and so it is not clear if it is considered a word-separator character. I'm guessing not. It should be mentioned to make it clear. | non_priority | u medium mathematical space mmsp not mentioned says general punctuation and fixed width spaces such as u and u through u are not considered word separator characters u medium mathematical space is not mentioned and so it is not clear if it is considered a word separator character i m guessing not it should be mentioned to make it clear | 0 |
105 | 2,508,899,962 | IssuesEvent | 2015-01-13 09:11:21 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | Bindgen-based build tool that creates self-contained hybrid crates from foreign libraries | A-build A-pkg E-easy I-wishlist | When I build bindings to foreign libraries now I prefer to build static native libraries and link them statically into the rust crate. This is a very convenient form for foreign libraries to be in because then you don't have to worry about native dynamic linking rules - everything is a rust crate and follows rust rules. Treating foreign libraries as crates is also potentially convenient for build tools like cargo, which currently has little or no support for them.
The process of constructing this type of hybrid crate is pretty straightforward but there isn't any language or tool support for it.
What I want is a little library that, given the proper configuration, will
* Run custom build logic to generate a static library archive file at a specific location
* Run bindgen to generate the rust bindings
* Output the proper set of attributes to make rust find and link to the static library
This could start out as a standalone build tool, but would need to end up compatible with cargo, and possibly even a rustc plugin that does the building, linking and bindgenating.
| 1.0 | Bindgen-based build tool that creates self-contained hybrid crates from foreign libraries - When I build bindings to foreign libraries now I prefer to build static native libraries and link them statically into the rust crate. This is a very convenient form for foreign libraries to be in because then you don't have to worry about native dynamic linking rules - everything is a rust crate and follows rust rules. Treating foreign libraries as crates is also potentially convenient for build tools like cargo, which currently has little or no support for them.
The process of constructing this type of hybrid crate is pretty straightforward but there isn't any language or tool support for it.
What I want is a little library that, given the proper configuration, will
* Run custom build logic to generate a static library archive file at a specific location
* Run bindgen to generate the rust bindings
* Output the proper set of attributes to make rust find and link to the static library
This could start out as a standalone build tool, but would need to end up compatible with cargo, and possibly even a rustc plugin that does the building, linking and bindgenating.
| non_priority | bindgen based build tool that creates self contained hybrid crates from foreign libraries when i build bindings to foreign libraries now i prefer to build static native libraries and link them statically into the rust crate this is a very convenient form for foreign libraries to be in because then you don t have to worry about native dynamic linking rules everything is a rust crate and follows rust rules treating foreign libraries as crates is also potentially convenient for build tools like cargo which currently has little or no support for them the process of constructing this type of hybrid crate is pretty straightforward but there isn t any language or tool support for it what i want is a little library that given the proper configuration will run custom build logic to generate a static library archive file at a specific location run bindgen to generate the rust bindings output the proper set of attributes to make rust find and link to the static library this could start out as a standalone build tool but would need to end up compatible with cargo and possibly even a rustc plugin that does the building linking and bindgenating | 0 |
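The three bullet steps in this record (build a static archive at a known location, run bindgen, emit the attributes that make rustc find and link the library) can be sketched as a planning function for such a tool. All command names, flags, and the config shape below are illustrative assumptions, not the interface of any real tool; the `#[link]` attribute and `-L`/`-l static=` flags follow rustc's static-linking conventions.

```python
# Sketch of the orchestration the issue describes: from a small config,
# produce (1) compile/archive commands for the static library,
# (2) the bindgen invocation, and (3) the link attributes/flags for rustc.
from dataclasses import dataclass, field

@dataclass
class ForeignLib:
    name: str                 # e.g. "foo" -> libfoo.a
    sources: list = field(default_factory=list)  # C sources to compile
    header: str = ""          # header handed to bindgen
    out_dir: str = "."        # where artifacts land

def plan(lib: ForeignLib) -> dict:
    objects = [src.replace(".c", ".o") for src in lib.sources]
    compile_cmds = [["cc", "-c", src, "-o", obj]
                    for src, obj in zip(lib.sources, objects)]
    archive_cmd = ["ar", "rcs", f"{lib.out_dir}/lib{lib.name}.a", *objects]
    bindgen_cmd = ["bindgen", lib.header, "-o", f"{lib.out_dir}/bindings.rs"]
    # Attribute for the generated bindings, plus flags for the rustc invocation.
    link_attrs = [f'#[link(name = "{lib.name}", kind = "static")]']
    rustc_flags = ["-L", lib.out_dir, "-l", f"static={lib.name}"]
    return {"compile": compile_cmds, "archive": archive_cmd,
            "bindgen": bindgen_cmd, "link_attrs": link_attrs,
            "rustc_flags": rustc_flags}
```

A real implementation today would likely live in a Cargo build script using the `cc` and `bindgen` crates rather than shelling out, but the planned steps are the same three the issue lists.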
411,180 | 27,814,993,319 | IssuesEvent | 2023-03-18 15:25:46 | elementbound/nlon | https://api.github.com/repos/elementbound/nlon | closed | Move site to Jekyll | documentation | - Move all tutorials to root
- Generate .md API docs with jsdoc-to-md
- Create site using Jekyll and [GitBook theme](https://github.com/sighingnow/jekyll-gitbook)
- Update workflow to publish Jekyll site to GH pages
- Make sure API doc links _between_ modules work
- Bonus: Make sure links work both on GH and site? | 1.0 | Move site to Jekyll - - Move all tutorials to root
- Generate .md API docs with jsdoc-to-md
- Create site using Jekyll and [GitBook theme](https://github.com/sighingnow/jekyll-gitbook)
- Update workflow to publish Jekyll site to GH pages
- Make sure API doc links _between_ modules work
- Bonus: Make sure links work both on GH and site? | non_priority | move site to jekyll move all tutorials to root generate md api docs with jsdoc to md create site using jekyll and update workflow to publish jekyll site to gh pages make sure api doc links between modules work bonus make sure links work both on gh and site | 0 |
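One step in this record's checklist, feeding jsdoc-to-md output into a Jekyll site, usually means prepending YAML front matter to each generated page so Jekyll will render it. A minimal sketch follows; the front-matter keys and the layout name are assumptions about a jekyll-gitbook-style setup, not the repository's actual configuration.

```python
# Sketch: wrap a generated API markdown page in Jekyll front matter.
# "layout: post" and the title format are hypothetical placeholders.
def with_front_matter(module: str, api_md: str) -> str:
    front = "\n".join([
        "---",
        f"title: {module} API",
        "layout: post",  # hypothetical layout name
        "---",
        "",
    ])
    return front + api_md
```

A small driver script could apply this to every file jsdoc-to-md emits before the Jekyll build runs in the publish workflow.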