Unnamed: 0 int64 9 832k | id float64 2.5B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 4 323 | labels stringlengths 4 2.67k | body stringlengths 23 107k | index stringclasses 4 values | text_combine stringlengths 96 107k | label stringclasses 2 values | text stringlengths 96 56.1k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
794 | 10,550,587,885 | IssuesEvent | 2019-10-03 11:26:12 | Cha-OS/colabo | https://api.github.com/repos/Cha-OS/colabo | opened | Net or Server accessing errors | IMPORTANT UX.UsrOnBoard+AvoidUsrErr backend moderation performance reliability | - if the server is unavailable, inform the user, e.g. after several retries or HTTP request FAIL response
- let him TRY again.
- NOTIFICATIONS or TOOLBAR Status
- you're offline (re-introduce it from the old code)
- you're net is weak | True | Net or Server accessing errors - - if the server is unavailable, inform the user, e.g. after several retries or HTTP request FAIL response
- let him TRY again.
- NOTIFICATIONS or TOOLBAR Status
- you're offline (re-introduce it from the old code)
- you're net is weak | reli | net or server accessing errors if the server is unavailable inform the user e g after several retries or http request fail response let him try again notifications or toolbar status you re offline re introduce it from the old code you re net is weak | 1 |
534 | 8,391,959,645 | IssuesEvent | 2018-10-09 16:15:29 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | System.Net.Sockets.SocketException: An attempt was made to access a socket in a way forbidden by its access permissions | area-System.Net.Http.SocketsHttpHandler bug needs more info tenet-reliability | Migrated from [#3575](https://github.com/aspnet/Home/issues/3575)...
The call from my Asp.NET Core app is just a standard http get to retrieve OData metadata, e.g. `GET /api/odata/asset/$metadata `- this works 99.9% of the time but occasionally it fails and I get a HttpRequestException reported in Application Insights..
```
System.Net.Http.HttpRequestException: An attempt was made to access a socket in a way forbidden by its access permissions
---> System.Net.Sockets.SocketException: An attempt was made to access a socket in a way forbidden by its access permissions
at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.CreateConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.WaitForCreatedConnectionAsync(ValueTask`1 creationTask)
at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.DiagnosticsHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
```
There are webjobs in the same app service plan and [#1876](https://github.com/Azure/azure-webjobs-sdk/issues/1876) seems to be similar (see last two comments), what I'm not sure is whether this error gets raised per endpoint you talk to, or across all outbound requests.
There are no warnings at the app service plan level regarding port exhaustion and the endpoint I'm trying to talk to is in the same app service plan and is actually a virtual application within the same web site. | True | System.Net.Sockets.SocketException: An attempt was made to access a socket in a way forbidden by its access permissions - Migrated from [#3575](https://github.com/aspnet/Home/issues/3575)...
The call from my Asp.NET Core app is just a standard http get to retrieve OData metadata, e.g. `GET /api/odata/asset/$metadata `- this works 99.9% of the time but occasionally it fails and I get a HttpRequestException reported in Application Insights..
```
System.Net.Http.HttpRequestException: An attempt was made to access a socket in a way forbidden by its access permissions
---> System.Net.Sockets.SocketException: An attempt was made to access a socket in a way forbidden by its access permissions
at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.CreateConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.WaitForCreatedConnectionAsync(ValueTask`1 creationTask)
at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.DiagnosticsHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
```
There are webjobs in the same app service plan and [#1876](https://github.com/Azure/azure-webjobs-sdk/issues/1876) seems to be similar (see last two comments), what I'm not sure is whether this error gets raised per endpoint you talk to, or across all outbound requests.
There are no warnings at the app service plan level regarding port exhaustion and the endpoint I'm trying to talk to is in the same app service plan and is actually a virtual application within the same web site. | reli | system net sockets socketexception an attempt was made to access a socket in a way forbidden by its access permissions migrated from the call from my asp net core app is just a standard http get to retrieve odata metadata e g get api odata asset metadata this works of the time but occasionally it fails and i get a httprequestexception reported in application insights system net http httprequestexception an attempt was made to access a socket in a way forbidden by its access permissions system net sockets socketexception an attempt was made to access a socket in a way forbidden by its access permissions at system net http connecthelper connectasync string host port cancellationtoken cancellationtoken end of inner exception stack trace at system net http connecthelper connectasync string host port cancellationtoken cancellationtoken at system net http httpconnectionpool createconnectionasync httprequestmessage request cancellationtoken cancellationtoken at system net http httpconnectionpool waitforcreatedconnectionasync valuetask creationtask at system net http httpconnectionpool sendwithretryasync httprequestmessage request boolean dorequestauth cancellationtoken cancellationtoken at system net http redirecthandler sendasync httprequestmessage request cancellationtoken cancellationtoken at system net http diagnosticshandler sendasync httprequestmessage request cancellationtoken cancellationtoken there are webjobs in the same app service plan and seems to be similar see last two comments what i m not sure is whether this error gets raised per endpoint you talk to or across all outbound requests there are no warnings at the app service plan level regarding port exhaustion and the endpoint i m trying to talk to is in the same app service plan and is 
actually a virtual application within the same web site | 1 |
32,279 | 8,824,395,070 | IssuesEvent | 2019-01-02 16:51:31 | docker/docker.github.io | https://api.github.com/repos/docker/docker.github.io | closed | FROM syntax spec is not clear about hash algo | content/builder | File: [engine/reference/builder.md](https://docs.docker.com/engine/reference/builder/), CC @gbarr01
I was getting really stuck on the docs for the `FROM` directive in the `Dockerfile` format, which says:
FROM <image>[@<digest>] [AS <name>]
So I was using:
FROM alpine@3d44fa76c2c83ed9296e4508b436ff583397cac0f4bad85c2b4ecc193ddb5106 AS build
which produces:
> invalid reference format
After much searching, and not finding any examples of pinning to a hash, I tried this on a whim:
FROM alpine@sha256:3d44fa76c2c83ed9296e4508b436ff583397cac0f4bad85c2b4ecc193ddb5106 AS build
That worked. Thus, since I don't think it is clear that `<digest>` needs to specify the hash algorithm and a colon, can we change this to:
FROM <image>[@<hash-algo>:<digest>] [AS <name>]
Or, can an example be added in this section, so it is clear that the algo is required?
| 1.0 | FROM syntax spec is not clear about hash algo - File: [engine/reference/builder.md](https://docs.docker.com/engine/reference/builder/), CC @gbarr01
I was getting really stuck on the docs for the `FROM` directive in the `Dockerfile` format, which says:
FROM <image>[@<digest>] [AS <name>]
So I was using:
FROM alpine@3d44fa76c2c83ed9296e4508b436ff583397cac0f4bad85c2b4ecc193ddb5106 AS build
which produces:
> invalid reference format
After much searching, and not finding any examples of pinning to a hash, I tried this on a whim:
FROM alpine@sha256:3d44fa76c2c83ed9296e4508b436ff583397cac0f4bad85c2b4ecc193ddb5106 AS build
That worked. Thus, since I don't think it is clear that `<digest>` needs to specify the hash algorithm and a colon, can we change this to:
FROM <image>[@<hash-algo>:<digest>] [AS <name>]
Or, can an example be added in this section, so it is clear that the algo is required?
| non_reli | from syntax spec is not clear about hash algo file cc i was getting really stuck on the docs for the from directive in the dockerfile format which says from so i was using from alpine as build which produces invalid reference format after much searching and not finding any examples of pinning to a hash i tried this on a whim from alpine as build that worked thus since i don t think it is clear that needs to specify the hash algorithm and a colon can we change this to from or can an example be added in this section so it is clear that the algo is required | 0 |
1,092 | 13,041,829,055 | IssuesEvent | 2020-07-28 21:08:34 | mozilla/hubs | https://api.github.com/repos/mozilla/hubs | closed | Move from node-sass to dart-sass | enhancement reliability | I think we could improve developer experience by moving from `node-sass` to `dart-sass`. `node-sass` works fine, but uses a native module using`node-gyp` that requires Python 2.7 to be installed. `node-dart` is a drop in replacement and doesn't have this dependency.
| True | Move from node-sass to dart-sass - I think we could improve developer experience by moving from `node-sass` to `dart-sass`. `node-sass` works fine, but uses a native module using`node-gyp` that requires Python 2.7 to be installed. `node-dart` is a drop in replacement and doesn't have this dependency.
| reli | move from node sass to dart sass i think we could improve developer experience by moving from node sass to dart sass node sass works fine but uses a native module using node gyp that requires python to be installed node dart is a drop in replacement and doesn t have this dependency | 1 |
57,977 | 11,812,356,942 | IssuesEvent | 2020-03-19 20:00:17 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Do not pass value types to Object.ReferenceEquals | api-suggestion area-System.Runtime code-analyzer untriaged | Calls to `ReferenceEquals` where we can detect a value type is being passed in are invariably wrong, as the value type will be boxed, and regardless of its value, `ReferenceEquals` will always return `false`.
**Category**: Reliability | 1.0 | Do not pass value types to Object.ReferenceEquals - Calls to `ReferenceEquals` where we can detect a value type is being passed in are invariably wrong, as the value type will be boxed, and regardless of its value, `ReferenceEquals` will always return `false`.
**Category**: Reliability | non_reli | do not pass value types to object referenceequals calls to referenceequals where we can detect a value type is being passed in are invariably wrong as the value type will be boxed and regardless of its value referenceequals will always return false category reliability | 0 |
570 | 8,656,706,433 | IssuesEvent | 2018-11-27 19:12:37 | m3db/m3 | https://api.github.com/repos/m3db/m3 | opened | Make peers bootstrapper auto-detect when all other peers are not bootstrapped and return success | C: Bootstrap G: Data Integrity P: Medium T: Reliability T: Usability area:db | Right now when catastrophic failures happen (all nodes crash due to datacenter powerloss or all the nodes run out of disk space or OOM around the same time), recovery can become very difficult if any of the nodes encounter corrupt commitlogs. The reason for this is that if the commitlog bootstrapper encounters any corrupt commitlog files, it will mark the entire bootstrap range as unfulfilled.
In normal situations, that is the desired behavior as it allows the peers bootstrapper to repair any corrupt data. However, in catastrophic failures, all the nodes will end up stuck in the peers bootstrapper unable to bootstrap from each other because they can't achieve read consistency.
We should add logic to the peers bootstrapper that detects when a read consistency cannot be achieved for a given shard (due to too many also being stuck in the bootstrapping phase) and if the host is in the LEAVING or AVAILABLE state for that shard then we should just succeed the bootstrap. | True | Make peers bootstrapper auto-detect when all other peers are not bootstrapped and return success - Right now when catastrophic failures happen (all nodes crash due to datacenter powerloss or all the nodes run out of disk space or OOM around the same time), recovery can become very difficult if any of the nodes encounter corrupt commitlogs. The reason for this is that if the commitlog bootstrapper encounters any corrupt commitlog files, it will mark the entire bootstrap range as unfulfilled.
In normal situations, that is the desired behavior as it allows the peers bootstrapper to repair any corrupt data. However, in catastrophic failures, all the nodes will end up stuck in the peers bootstrapper unable to bootstrap from each other because they can't achieve read consistency.
We should add logic to the peers bootstrapper that detects when a read consistency cannot be achieved for a given shard (due to too many also being stuck in the bootstrapping phase) and if the host is in the LEAVING or AVAILABLE state for that shard then we should just succeed the bootstrap. | reli | make peers bootstrapper auto detect when all other peers are not bootstrapped and return success right now when catastrophic failures happen all nodes crash due to datacenter powerloss or all the nodes run out of disk space or oom around the same time recovery can become very difficult if any of the nodes encounter corrupt commitlogs the reason for this is that if the commitlog bootstrapper encounters any corrupt commitlog files it will mark the entire bootstrap range as unfulfilled in normal situations that is the desired behavior as it allows the peers bootstrapper to repair any corrupt data however in catastrophic failures all the nodes will end up stuck in the peers bootstrapper unable to bootstrap from each other because they can t achieve read consistency we should add logic to the peers bootstrapper that detects when a read consistency cannot be achieved for a given shard due to too many also being stuck in the bootstrapping phase and if the host is in the leaving or available state for that shard then we should just succeed the bootstrap | 1 |
265,211 | 8,343,633,001 | IssuesEvent | 2018-09-30 07:14:52 | minio/minio | https://api.github.com/repos/minio/minio | closed | minio some node return 403 | community priority: medium triage | <!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
work better
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Current Behavior
there is a clusters of 4 nodes ,but sometime will return 403 from http response, http header like this:
HTTP/1.1 403 Forbidden
Accept-Ranges: bytes
Content-Security-Policy: block-all-mixed-content
Server: Minio/RELEASE.2018-09-25T21-34-43Z (linux; amd64)
Vary: Origin
X-Amz-Request-Id: 1558352F1FFDC4D8
X-Xss-Protection: 1; mode=block
Date: Thu, 27 Sep 2018 08:42:29 GMT
minio error log:
Error: volume not found
disk=http://ipaddress:9090/app/minio
1: cmd/logger/logger.go:294:logger.LogIf()
2: cmd/xl-v1-utils.go:309:cmd.readXLMeta()
3: cmd/xl-v1-utils.go:341:cmd.readAllXLMetadata.func1()
minion start command:
MINIO_ACCESS_KEY=1 MINIO_SECRET_KEY=2 minio server --address ip:9090 http://ip1:9090/app/minio http://ip2:9090/app/minio http://ip3:9090/app/minio http://ip4:9090/app/minio
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1.
2.
3.
4.
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Regression
<!-- Is this issue a regression? (Yes / No) -->
<!-- If Yes, optionally please include minio version or commit id or PR# that caused this regression, if you have these details. -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`):
Version: 2018-09-25T21:34:43Z
Release-Tag: RELEASE.2018-09-25T21-34-43Z
Commit-ID: aa4e2b1542b98097e08680f21b790de0b776378c
* Environment name and version (e.g. nginx 1.9.1):
* Server type and version:
Ubuntu 16.04.3 LTS
* Operating System and version (`uname -a`):
Linux hostname 4.4.0-87-generic #110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
* Link to your project:
| 1.0 | minio some node return 403 - <!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
work better
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Current Behavior
there is a clusters of 4 nodes ,but sometime will return 403 from http response, http header like this:
HTTP/1.1 403 Forbidden
Accept-Ranges: bytes
Content-Security-Policy: block-all-mixed-content
Server: Minio/RELEASE.2018-09-25T21-34-43Z (linux; amd64)
Vary: Origin
X-Amz-Request-Id: 1558352F1FFDC4D8
X-Xss-Protection: 1; mode=block
Date: Thu, 27 Sep 2018 08:42:29 GMT
minio error log:
Error: volume not found
disk=http://ipaddress:9090/app/minio
1: cmd/logger/logger.go:294:logger.LogIf()
2: cmd/xl-v1-utils.go:309:cmd.readXLMeta()
3: cmd/xl-v1-utils.go:341:cmd.readAllXLMetadata.func1()
minion start command:
MINIO_ACCESS_KEY=1 MINIO_SECRET_KEY=2 minio server --address ip:9090 http://ip1:9090/app/minio http://ip2:9090/app/minio http://ip3:9090/app/minio http://ip4:9090/app/minio
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1.
2.
3.
4.
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Regression
<!-- Is this issue a regression? (Yes / No) -->
<!-- If Yes, optionally please include minio version or commit id or PR# that caused this regression, if you have these details. -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`):
Version: 2018-09-25T21:34:43Z
Release-Tag: RELEASE.2018-09-25T21-34-43Z
Commit-ID: aa4e2b1542b98097e08680f21b790de0b776378c
* Environment name and version (e.g. nginx 1.9.1):
* Server type and version:
Ubuntu 16.04.3 LTS
* Operating System and version (`uname -a`):
Linux hostname 4.4.0-87-generic #110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
* Link to your project:
| non_reli | minio some node return expected behavior work better current behavior there is a clusters of nodes ,but sometime will return from http response, http header like this http forbidden accept ranges bytes content security policy block all mixed content server minio release linux vary origin x amz request id x xss protection mode block date thu sep gmt minio error log error volume not found disk cmd logger logger go logger logif cmd xl utils go cmd readxlmeta cmd xl utils go cmd readallxlmetadata minion start command minio access key minio secret key minio server address ip possible solution steps to reproduce for bugs context regression your environment version used minio version version release tag release commit id environment name and version e g nginx server type and version ubuntu lts operating system and version uname a linux hostname generic ubuntu smp tue jul utc gnu linux link to your project | 0 |
2,203 | 24,137,608,714 | IssuesEvent | 2022-09-21 12:37:30 | Azure/PSRule.Rules.Azure | https://api.github.com/repos/Azure/PSRule.Rules.Azure | opened | Enable purge protection for App Configuration stores | ms-hack-2022 rule: app-configuration pillar: reliability | # Rule request
## Suggested rule change
App Configuration supports purge protection to extend the protection provided by soft-delete. Purge protection limits data loss causes by accidental and malicious purges of deleted configuration stores by enforcing an mandatory retention interval.
This feature only applies to Standard SKU configuration stores. Free configuration stores should be ignored by still rule.
This is enabled by setting the `properties.enablePurgeProtection` property to `true`.
## Applies to the following
The rule applies to the following:
- Resource type: **Microsoft.AppConfiguration/configurationStores**
## Additional context
[Azure deployment reference](https://learn.microsoft.com/azure/templates/microsoft.appconfiguration/configurationstores)
[Purge protection](https://learn.microsoft.com/azure/azure-app-configuration/concept-soft-delete#purge-protection)
| True | Enable purge protection for App Configuration stores - # Rule request
## Suggested rule change
App Configuration supports purge protection to extend the protection provided by soft-delete. Purge protection limits data loss causes by accidental and malicious purges of deleted configuration stores by enforcing an mandatory retention interval.
This feature only applies to Standard SKU configuration stores. Free configuration stores should be ignored by still rule.
This is enabled by setting the `properties.enablePurgeProtection` property to `true`.
## Applies to the following
The rule applies to the following:
- Resource type: **Microsoft.AppConfiguration/configurationStores**
## Additional context
[Azure deployment reference](https://learn.microsoft.com/azure/templates/microsoft.appconfiguration/configurationstores)
[Purge protection](https://learn.microsoft.com/azure/azure-app-configuration/concept-soft-delete#purge-protection)
| reli | enable purge protection for app configuration stores rule request suggested rule change app configuration supports purge protection to extend the protection provided by soft delete purge protection limits data loss causes by accidental and malicious purges of deleted configuration stores by enforcing an mandatory retention interval this feature only applies to standard sku configuration stores free configuration stores should be ignored by still rule this is enabled by setting the properties enablepurgeprotection property to true applies to the following the rule applies to the following resource type microsoft appconfiguration configurationstores additional context | 1 |
28,870 | 11,705,970,798 | IssuesEvent | 2020-03-07 19:07:22 | vlaship/ws | https://api.github.com/repos/vlaship/ws | opened | CVE-2020-9547 (Medium) detected in jackson-databind-2.8.11.3.jar | security vulnerability | ## CVE-2020-9547 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: /tmp/ws-scm/ws/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.3/844df5aba5a1a56e00905b165b12bb34116ee858/jackson-databind-2.8.11.3.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.3/844df5aba5a1a56e00905b165b12bb34116ee858/jackson-databind-2.8.11.3.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-websocket-1.5.22.RELEASE.jar (Root Library)
- spring-boot-starter-web-1.5.22.RELEASE.jar
- :x: **jackson-databind-2.8.11.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vlaship/ws/commit/189f4086730e4a06b79e39bcd40240d46674604f">189f4086730e4a06b79e39bcd40240d46674604f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to com.ibatis.sqlmap.engine.transaction.jta.JtaTransactionConfig (aka ibatis-sqlmap).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9547>CVE-2020-9547</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9547">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9547</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-9547 (Medium) detected in jackson-databind-2.8.11.3.jar - ## CVE-2020-9547 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: /tmp/ws-scm/ws/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.3/844df5aba5a1a56e00905b165b12bb34116ee858/jackson-databind-2.8.11.3.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.3/844df5aba5a1a56e00905b165b12bb34116ee858/jackson-databind-2.8.11.3.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-websocket-1.5.22.RELEASE.jar (Root Library)
- spring-boot-starter-web-1.5.22.RELEASE.jar
- :x: **jackson-databind-2.8.11.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vlaship/ws/commit/189f4086730e4a06b79e39bcd40240d46674604f">189f4086730e4a06b79e39bcd40240d46674604f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to com.ibatis.sqlmap.engine.transaction.jta.JtaTransactionConfig (aka ibatis-sqlmap).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9547>CVE-2020-9547</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9547">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9547</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_reli | cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api path to dependency file tmp ws scm ws build gradle path to vulnerable library root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter websocket release jar root library spring boot starter web release jar x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com ibatis sqlmap engine transaction jta jtatransactionconfig aka ibatis sqlmap publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource | 0 |
167,439 | 6,338,381,020 | IssuesEvent | 2017-07-27 04:11:25 | apex/up | https://api.github.com/repos/apex/up | opened | Better error when creds are missing | Priority UX | will do the wizard style thing later to get people set up, in Beta or 0.1.0, but for now the default AWS stuff sucks:
```
⨯ error deploying to us-west-2: fetching function config: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
``` | 1.0 | Better error when creds are missing - will do the wizard style thing later to get people set up, in Beta or 0.1.0, but for now the default AWS stuff sucks:
```
⨯ error deploying to us-west-2: fetching function config: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
``` | non_reli | better error when creds are missing will do the wizard style thing later to get people set up in beta or but for now the default aws stuff sucks ⨯ error deploying to us west fetching function config nocredentialproviders no valid providers in chain deprecated for verbose messaging see aws config credentialschainverboseerrors | 0 |
550 | 8,553,686,135 | IssuesEvent | 2018-11-08 02:04:49 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | opened | Repeatedly calling Utf8JsonReader.Read() after a JsonReaderException has been thrown should continue to fail deterministically. | area-System.Text.Json tenet-reliability up-for-grabs | This was brought up in the API review. We should validate that multi retries to read after we enter a failure state continues to fail reliably and deterministically.
cc @marek-safar, @GrabYourPitchforks | True | Repeatedly calling Utf8JsonReader.Read() after a JsonReaderException has been thrown should continue to fail deterministically. - This was brought up in the API review. We should validate that multi retries to read after we enter a failure state continues to fail reliably and deterministically.
cc @marek-safar, @GrabYourPitchforks | reli | repeatedly calling read after a jsonreaderexception has been thrown should continue to fail deterministically this was brought up in the api review we should validate that multi retries to read after we enter a failure state continues to fail reliably and deterministically cc marek safar grabyourpitchforks | 1 |
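The behavior the record above asks to validate — that a reader keeps failing deterministically once it has entered an error state — is commonly implemented by latching the first exception and re-raising it on every later call. A minimal Python sketch of that latching pattern (illustrative only, not the System.Text.Json implementation):

```python
class JsonReaderError(Exception):
    pass

class LatchedReader:
    """Once read() fails, every subsequent read() raises the same error."""

    def __init__(self, tokens):
        self._tokens = iter(tokens)
        self._error = None

    def read(self):
        if self._error is not None:
            # Deterministic: the same failure is reproduced on every retry.
            raise self._error
        token = next(self._tokens, None)
        if token == "bad":
            self._error = JsonReaderError("invalid token")
            raise self._error
        return token
```

A test for the issue's requirement would call `read()` repeatedly after the first failure and assert that every call raises the identical error rather than silently advancing.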
360,147 | 10,684,759,617 | IssuesEvent | 2019-10-22 11:13:22 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | olymptrade.com - see bug description | browser-firefox engine-gecko priority-normal | <!-- @browser: Firefox 71.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://olymptrade.com/platform
**Browser / Version**: Firefox 71.0
**Operating System**: Linux
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Charts on the trading site won't load
**Steps to Reproduce**:
This is a binary options trading site. Today the charts could not be loaded (displayed). In Opera they are loaded.
[](https://webcompat.com/uploads/2019/10/527b497c-079a-43c3-b2b0-b3de86e778ba.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20191014171118</li><li>channel: aurora</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
<p>Console Messages:</p>
<pre>
[{'level': 'error', 'log': [' / ServiceWorker https://olymptrade.com/: .'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:1839544'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'warn', 'log': ['This page uses the non standard property zoom. 
Consider using calc() in the relevant property values, or using transform along with transform-origin: 0 0.'], 'uri': 'https://livetex-widget.nanotech42.com/js/ui.js?v=7.1.362', 'pos': '1:481815'}, {'level': 'warn', 'log': [' https://balancer-cloud.livetex.ru/get-server/?site_id=154580&__fallback__&=&_m=GET&_c=njr_1_callback&_t=jsonp&_rnd=4scxgu7m96k&_h[lt-origin]=account%3A222283%3Asite%3A154580 , MIME- (text/plain) JavaScript.'], 'uri': 'https://livetex-widget.nanotech42.com/js/iframe.html', 'pos': '0:0'}, {'level': 'warn', 'log': [' <script> https://io5-production-3-ltx242.livetex.ru/visitor/auth?__fallback__&=&_m=POST&_c=njr_2_callback&_t=jsonp&_=%7B%22is_mobile%22%3Afalse%7D&_rnd=65nd8g7gt0n&_h[lt-origin]=account%3A222283%3Asite%3A154580 .'], 'uri': 'https://livetex-widget.nanotech42.com/js/iframe.html', 'pos': '1:1'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 
'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'warn', 'log': [' onmozfullscreenchange .'], 'uri': 'https://olymptrade.com/platform', 'pos': 
'0:0'}, {'level': 'warn', 'log': [' onmozfullscreenerror .'], 'uri': 'https://olymptrade.com/platform', 'pos': '0:0'}]
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | olymptrade.com - see bug description - <!-- @browser: Firefox 71.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://olymptrade.com/platform
**Browser / Version**: Firefox 71.0
**Operating System**: Linux
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Charts on the trading site won't load
**Steps to Reproduce**:
This is a binary options trading site. Today the charts could not be loaded (displayed). In Opera they are loaded.
[](https://webcompat.com/uploads/2019/10/527b497c-079a-43c3-b2b0-b3de86e778ba.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20191014171118</li><li>channel: aurora</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
<p>Console Messages:</p>
<pre>
[{'level': 'error', 'log': [' / ServiceWorker https://olymptrade.com/: .'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:1839544'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'warn', 'log': ['This page uses the non standard property zoom. 
Consider using calc() in the relevant property values, or using transform along with transform-origin: 0 0.'], 'uri': 'https://livetex-widget.nanotech42.com/js/ui.js?v=7.1.362', 'pos': '1:481815'}, {'level': 'warn', 'log': [' https://balancer-cloud.livetex.ru/get-server/?site_id=154580&__fallback__&=&_m=GET&_c=njr_1_callback&_t=jsonp&_rnd=4scxgu7m96k&_h[lt-origin]=account%3A222283%3Asite%3A154580 , MIME- (text/plain) JavaScript.'], 'uri': 'https://livetex-widget.nanotech42.com/js/iframe.html', 'pos': '0:0'}, {'level': 'warn', 'log': [' <script> https://io5-production-3-ltx242.livetex.ru/visitor/auth?__fallback__&=&_m=POST&_c=njr_2_callback&_t=jsonp&_=%7B%22is_mobile%22%3Afalse%7D&_rnd=65nd8g7gt0n&_h[lt-origin]=account%3A222283%3Asite%3A154580 .'], 'uri': 'https://livetex-widget.nanotech42.com/js/iframe.html', 'pos': '1:1'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 
'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ws_chat.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:549622'}, {'level': 'error', 'log': ['Firefox wss://olymptrade.com/ds/v4.'], 'uri': 'https://cdn1.olymptrade.com/1.0.2187/public/js/platformBinary.e1776ab7.js', 'pos': '1:921675'}, {'level': 'error', 'log': ['uncaught exception: Object'], 'uri': '', 'pos': '0:0'}, {'level': 'warn', 'log': [' onmozfullscreenchange .'], 'uri': 'https://olymptrade.com/platform', 'pos': 
'0:0'}, {'level': 'warn', 'log': [' onmozfullscreenerror .'], 'uri': 'https://olymptrade.com/platform', 'pos': '0:0'}]
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_reli | olymptrade com see bug description url browser version firefox operating system linux tested another browser yes problem type something else description charts on the trading site won t load steps to reproduce this is a binary options trading site today the charts could not be loaded displayed in opera they are loaded browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel aurora hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false console messages uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level warn log uri pos level warn log account mime text plain javascript uri pos level warn log account uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level error log uri pos level warn log uri pos level warn log uri pos from with ❤️ | 0 |
181,654 | 14,072,873,509 | IssuesEvent | 2020-11-04 03:05:11 | red/red | https://api.github.com/repos/red/red | closed | Linux->Windows cross-compilation doesn't work | status.built status.tested type.bug | **Describe the bug**
Wanted to explore this opportunity because apparently R2 for Linux compiles about ~10% faster than R2 for Windows...
```
-=== Red Compiler 0.6.4 ===-
Compiling /home/test/1/3.red ...
...compilation time : 901 ms
Target: MSDOS
Compiling to native code...
*** Red/System Compiler Internal Error: Script Error : int-ptr! has no value
*** Where: none
*** Near: [file-sum: make struct! int-ptr! [0]]
```
**To reproduce**
1. `echo Red [] print \"windows\">3.red`
2. `red -r -e -t MSDOS 3.red`
(or -t Windows but that requires needs: view in the header)
**Expected behavior**
Compiles.
**Platform version**
```
Red 0.6.4 for Linux built 1-Nov-2020/23:51:29+03:00 commit #2d05900
```
| 1.0 | Linux->Windows cross-compilation doesn't work - **Describe the bug**
Wanted to explore this opportunity because apparently R2 for Linux compiles about ~10% faster than R2 for Windows...
```
-=== Red Compiler 0.6.4 ===-
Compiling /home/test/1/3.red ...
...compilation time : 901 ms
Target: MSDOS
Compiling to native code...
*** Red/System Compiler Internal Error: Script Error : int-ptr! has no value
*** Where: none
*** Near: [file-sum: make struct! int-ptr! [0]]
```
**To reproduce**
1. `echo Red [] print \"windows\">3.red`
2. `red -r -e -t MSDOS 3.red`
(or -t Windows but that requires needs: view in the header)
**Expected behavior**
Compiles.
**Platform version**
```
Red 0.6.4 for Linux built 1-Nov-2020/23:51:29+03:00 commit #2d05900
```
| non_reli | linux windows cross compilation doesn t work describe the bug wanted to explore this opportunity because apparently for linux compiles about faster than for windows red compiler compiling home test red compilation time ms target msdos compiling to native code red system compiler internal error script error int ptr has no value where none near to reproduce echo red print windows red red r e t msdos red or t windows but that requires needs view in the header expected behavior compiles platform version red for linux built nov commit | 0 |
3,631 | 3,509,383,387 | IssuesEvent | 2016-01-08 22:26:51 | godotengine/godot | https://api.github.com/repos/godotengine/godot | opened | GridMap is outdated when compared to TileMap | enhancement topic:core usability | There is no way to set friction for GridMap, or set physics layer/mask. The missing properties are in the 'Collision' section of TileMap:

| True | GridMap is outdated when compared to TileMap - There is no way to set friction for GridMap, or set physics layer/mask. The missing properties are in the 'Collision' section of TileMap:

| non_reli | gridmap is outdated when compared to tilemap there is no way to set friction for gridmap or set physic layer mask the missing properties are in the collision section of tilemap | 0 |
77,258 | 7,569,666,431 | IssuesEvent | 2018-04-23 06:00:47 | backdrop/backdrop-issues | https://api.github.com/repos/backdrop/backdrop-issues | closed | Use new JavaScript to detect timezone | pr - reviewed & tested by the community status - has pull request type - feature request | ## Describe your issue or idea
Modern JavaScript can now detect timezones directly from the browser, rather than just getting the offset from GMT. We should use this increased accuracy method to set the site and user timezones when displaying a timezone element.
~Because we can more accurately detect timezone, we should also then hide the timezone from the installer. Similar to Clean URLs detection: if we can accurately determine it through JavaScript, hide it from the form as a hidden element. Of course if needed, the timezone can always be changed later, and removing the timezone reduces friction in site configuration~
To limit the scope of this issue and get the clear improvement in, we'll just worry about adding the new JavaScript and leave the removal of the timezone for later (if at all).
### Steps to reproduce (if reporting a bug)
- Set your site timezone to an unpopular location that matches a major one. For example, I set my timezone to Phoenix, AZ as it has different daylight savings rules than the rest of the US.
- Install Backdrop, during the installer, note instead of "America/Phoenix", the timezone selected is "America/Los_Angeles" (in the winter) OR "America/Denver" (in the summer). Neither is the correct timezone as those areas have different daylight savings rules than Phoenix.
### Actual behavior (if reporting a bug)
- The wrong timezone is selected
### Expected behavior (if reporting a bug)
- The right timezone, matching my operating system should be selected.
- And we should ultimately remove this field entirely, once we know that the right value is being prepopulated.
---
PR by @quicksketch: https://github.com/backdrop/backdrop/pull/2127
| 1.0 | Use new JavaScript to detect timezone - ## Describe your issue or idea
Modern JavaScript can now detect timezones directly from the browser, rather than just getting the offset from GMT. We should use this increased accuracy method to set the site and user timezones when displaying a timezone element.
~Because we can more accurately detect timezone, we should also then hide the timezone from the installer. Similar to Clean URLs detection: if we can accurately determine it through JavaScript, hide it from the form as a hidden element. Of course if needed, the timezone can always be changed later, and removing the timezone reduces friction in site configuration~
To limit the scope of this issue and get the clear improvement in, we'll just worry about adding the new JavaScript and leave the removal of the timezone for later (if at all).
### Steps to reproduce (if reporting a bug)
- Set your site timezone to an unpopular location that matches a major one. For example, I set my timezone to Phoenix, AZ as it has different daylight savings rules than the rest of the US.
- Install Backdrop, during the installer, note instead of "America/Phoenix", the timezone selected is "America/Los_Angeles" (in the winter) OR "America/Denver" (in the summer). Neither is the correct timezone as those areas have different daylight savings rules than Phoenix.
### Actual behavior (if reporting a bug)
- The wrong timezone is selected
### Expected behavior (if reporting a bug)
- The right timezone, matching my operating system should be selected.
- And we should ultimately remove this field entirely, once we know that the right value is being prepopulated.
---
PR by @quicksketch: https://github.com/backdrop/backdrop/pull/2127
| non_reli | use new javascript to detect timezone describe your issue or idea modern javascript can now detect timezones directly from the browser rather than just getting the offset from gmt we should use this increased accuracy method to set the site and user timezones when displaying a timezone element because we can more accurately detect timezone we should also then hide the timezone from the installer similar to clean urls detection if we can accurately determine it through javascript hide it from the form as a hidden element of course if needed the timezone can always be changed later and removing the timezone reduces friction in site configuration to limit the scope of this issue and get the clear improvement in we ll just worry about adding the new javascript and leave the removal of the timezone for later if at all steps to reproduce if reporting a bug set your site timezone to an unpopular location that matches a major one for example i set my timezone to phoenix az as it has different daylight savings rules than the rest of the us install backdrop during the installer note instead of america phoenix the timezone selected is america los angeles in the winter or america denver in the summer neither is the correct timezone as those areas have different daylight savings rules than phoenix actual behavior if reporting a bug the wrong timezone is selected expected behavior if reporting a bug the right timezone matching my operating system should be selected and we should ultimately remove this field entirely once we know that the right value is being prepopulated pr by quicksketch | 0 |
681,713 | 23,321,031,748 | IssuesEvent | 2022-08-08 16:23:34 | TerryCavanagh/diceydungeons.com | https://api.github.com/repos/TerryCavanagh/diceydungeons.com | closed | Once per battle cards appear in Next Up even after they've been used | reported in launch v1.0 C - Rare/requires weird actions 3 - Has Positive/Neutral Effects Priority | Noticed with Encore finale card, used it to take another turn and then I cycled my deck. It was in the Next Up section despite not actually being drawn since it is once per battle. | 1.0 | Once per battle cards appear in Next Up even after they've been used - Noticed with Encore finale card, used it to take another turn and then I cycled my deck. It was in the Next Up section despite not actually being drawn since it is once per battle. | non_reli | once per battle cards appear in next up even after they ve been used noticed with encore finale card used it to take another turn and then i cycled my deck it was in the next up section despite not actually being drawn since it is once per battle | 0 |
269,688 | 23,459,219,502 | IssuesEvent | 2022-08-16 11:42:10 | wazuh/wazuh-qa | https://api.github.com/repos/wazuh/wazuh-qa | opened | Release 4.3.7 - Release Candidate 1 - E2E UX tests - Wazuh Dashboard | team/qa subteam/qa-storm type/manual-testing | The following issue aims to run the specified test for the current release candidate, report the results, and open new issues for any encountered errors.
## Modules tests information
|||
|----------------------------------|------ |
| **Main release candidate issue** | [#14188](https://github.com/wazuh/wazuh/issues/) |
| **Main E2E UX test issue** | [#14260](https://github.com/wazuh/wazuh/issues/) |
| **Version** | 4.3.7 |
| **Release candidate #** | RC1 |
| **Tag** | [v4.3.7-rc1](https://github.com/wazuh/wazuh/tree/v4.3.7-rc1) |
| **Previous modules tests issue** | |
## Installation procedure
- Wazuh Indexer
- [Step by Step](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/step-by-step.html)
- Wazuh Server
- [Step by Step](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/step-by-step.html)
- Wazuh Dashboard
- [Step by Step](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/step-by-step.html)
- Wazuh Agent
- Wazuh WUI one-liner deploy IP GROUP (created beforehand)
## Test description
Best effort to test Wazuh dashboard package. Think critically and at least review/test:
- [ ] [Wazuh dashboard package specs](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167046213)
- [ ] [Dashboard package size](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167049008)
- [ ] [Dashboard package metadata (description)](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167054484)
- [ ] [Dashboard package digital signature](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167055864)
- [ ] [Installed files location, size and permissions](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167056232)
- [ ] [Installation footprint (check that no unnecessary files are modified/broken in the file system. For example that operating system files do keep their right owner/permissions and that the installer did not break the system.)](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167084021)
- [ ] [Installed service (test that it works correctly)](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167105075)
- [ ] [Wazuh Dashboard logs when installed](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167023715)
- [ ] [Wazuh Dashboard configuration (Try to find anomalies compared with 4.3.4)](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167113724)
- [ ] [Wazuh Dashboard (included the Wazuh WUI) communication with Wazuh manager API and Wazuh indexer](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167131188)
- [ ] [Register Wazuh Agents](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167152027)
- [ ] [Basic browsing through the WUI](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167224904)
- [ ] [Basic experience with WUI performance.](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167248521)
- [ ] Anything else that could have been overlooked when creating the new package
## Test report procedure
All test results must have one of the following statuses:
| | |
|---------------------------------|--------------------------------------------|
| :green_circle: | All checks passed. |
| :red_circle: | There is at least one failed result. |
| :yellow_circle: | There is at least one expected failure or skipped test and no failures. |
Any failing test must be properly addressed with a new issue, detailing the error and the possible cause.
An extended report of the test results can be attached as a ZIP or TXT file. Please attach any documents, screenshots, or tables to the issue update with the results. This report can be used by the auditors to dig deeper into any possible failures and details.
## Conclusions
All tests have been executed and the results can be found in the issue updates.
| **Status** | **Test** | **Failure type** | **Notes** |
|----------------|-------------|---------------------|----------------|
| ⚫ | Wazuh dashboard package specs | Functional |
| ⚫ | Dashboard package size | Functional |
| ⚫ | Dashboard package metadata (description) | Usability |
| ⚫ | Dashboard package digital signature | Usability |
| ⚫ | Installed files location, size and permissions | Functional |
| ⚫ | Installation footprint | Functional |
| ⚫ | Wazuh Dashboard logs when installed | Functional |
| ⚫ | Wazuh Dashboard configuration | Functional |
| ⚫ | Wazuh Dashboard (included the Wazuh WUI) communication with Wazuh manager API and Wazuh indexer | Functional |
| ⚫ | Register Wazuh Agents | Functional |
| ⚫ | Basic browsing through the WUI | Usability |
| ⚫ | Basic experience with WUI performance | Usability |
## Auditors validation
The definition of done for this one is the validation of the conclusions and the test results from all auditors.
All checks from below must be accepted in order to close this issue.
- [ ]
| 1.0 | Release 4.3.7 - Release Candidate 1 - E2E UX tests - Wazuh Dashboard - The following issue aims to run the specified test for the current release candidate, report the results, and open new issues for any encountered errors.
## Modules tests information
|||
|----------------------------------|------ |
| **Main release candidate issue** | [#14188](https://github.com/wazuh/wazuh/issues/) |
| **Main E2E UX test issue** | [#14260](https://github.com/wazuh/wazuh/issues/) |
| **Version** | 4.3.7 |
| **Release candidate #** | RC1 |
| **Tag** | [v4.3.7-rc1](https://github.com/wazuh/wazuh/tree/v4.3.7-rc1) |
| **Previous modules tests issue** | |
## Installation procedure
- Wazuh Indexer
- [Step by Step](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/step-by-step.html)
- Wazuh Server
- [Step by Step](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/step-by-step.html)
- Wazuh Dashboard
- [Step by Step](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/step-by-step.html)
- Wazuh Agent
- Wazuh WUI one-liner deploy IP GROUP (created beforehand)
## Test description
Best effort to test Wazuh dashboard package. Think critically and at least review/test:
- [ ] [Wazuh dashboard package specs](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167046213)
- [ ] [Dashboard package size](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167049008)
- [ ] [Dashboard package metadata (description)](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167054484)
- [ ] [Dashboard package digital signature](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167055864)
- [ ] [Installed files location, size and permissions](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167056232)
- [ ] [Installation footprint (check that no unnecessary files are modified/broken in the file system. For example that operating system files do keep their right owner/pemissions and that the installer did not break the system.)](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167084021)
- [ ] [Installed service (test that it works correctly)](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167105075)
- [ ] [Installed service (test that it works correctly)](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167105075)
- [ ] [Wazuh Dashboard logs when installed](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167023715)
- [ ] [Wazuh Dashboard configuration (Try to find anomalies compared with 4.3.4)](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167113724)
- [ ] [Wazuh Dashboard (included the Wazuh WUI) communication with Wazuh manager API and Wazuh indexer](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167131188)
- [ ] [Register Wazuh Agents](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167152027)
- [ ] [Basic browsing throguh the WUI](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167224904)
- [ ] [Basic experience with WUI performance.](https://github.com/wazuh/wazuh/issues/14051#issuecomment-1167248521)
- [ ] Anything else that could have been overlooked when creating the new package
## Test report procedure
All test results must have one of the following statuses:
| | |
|---------------------------------|--------------------------------------------|
| :green_circle: | All checks passed. |
| :red_circle: | There is at least one failed result. |
| :yellow_circle: | There is at least one expected failure or skipped test and no failures. |
Any failing test must be properly addressed with a new issue, detailing the error and the possible cause.
An extended report of the test results can be attached as a ZIP or TXT file. Please attach any documents, screenshots, or tables to the issue update with the results. This report can be used by the auditors to dig deeper into any possible failures and details.
## Conclusions
All tests have been executed and the results can be found in the issue updates.
| **Status** | **Test** | **Failure type** | **Notes** |
|----------------|-------------|---------------------|----------------|
| ⚫ | Wazuh dashboard package specs | Functional |
| ⚫ | Dashboard package size | Functional |
| ⚫ | Dashboard package metadata (description) | Usability |
| ⚫ | Dashboard package digital signature | Usability |
| ⚫ | Installed files location, size and permissions | Functional |
| ⚫ | Installation footprint | Functional |
| ⚫ | Wazuh Dashboard logs when installed | Functional |
| ⚫ | Wazuh Dashboard configuration | Functional |
| ⚫ | Wazuh Dashboard (included the Wazuh WUI) communication with Wazuh manager API and Wazuh indexer | Functional |
| ⚫ | Register Wazuh Agents | Functional |
| ⚫ | Basic browsing through the WUI | Usability |
| ⚫ | Basic experience with WUI performance | Usability |
## Auditors validation
The definition of done for this one is the validation of the conclusions and the test results from all auditors.
All checks from below must be accepted in order to close this issue.
- [ ]
| non_reli | release release candidate ux tests wazuh dashboard the following issue aims to run the specified test for the current release candidate report the results and open new issues for any encountered errors modules tests information main release candidate issue main ux test issue version release candidate tag previous modules tests issue installation procedure wazuh indexer wazuh server wazuh dashboard wazuh agent wazuh wui one liner deploy ip group created beforehand test description best efford to test wazuh dashboard package think critically and at least review test anything else that could have been overlooked when creating the new package test report procedure all test results must have one of the following statuses green circle all checks passed red circle there is at least one failed result yellow circle there is at least one expected failure or skipped test and no failures any failing test must be properly addressed with a new issue detailing the error and the possible cause an extended report of the test results can be attached as a zip or txt file please attach any documents screenshots or tables to the issue update with the results this report can be used by the auditors to dig deeper into any possible failures and details conclusions all tests have been executed and the results can be found in the issue updates status test failure type notes ⚫ wazuh dashboard package specs functional ⚫ dashboard package size functional ⚫ dashboard package metadata description usability ⚫ dashboard package digital signature usability ⚫ installed files location size and permissions functional ⚫ installation footprint functional ⚫ wazuh dashboard logs when installed functional ⚫ wazuh dashboard configuration functional ⚫ wazuh dashboard included the wazuh wui communication with wazuh manager api and wazuh indexer functional ⚫ register wazuh agents functional ⚫ basic browsing through the wui usability ⚫ basic experience with wui performance usability auditors validation the definition of done for this one is the validation of the conclusions and the test results from all auditors all checks from below must be accepted in order to close this issue | 0 |
260,630 | 19,679,330,167 | IssuesEvent | 2022-01-11 15:22:10 | elastic/ecs | https://api.github.com/repos/elastic/ecs | closed | Canvas representation of ECS fields | documentation | @MikePaquette created a Canvas representation of our ECS fields and offered that it might be helpful to link to it from an issue in this repo in order to make it accessible, later. I'm creating this issue so that we don't lose sight of this resource. | 1.0 | Canvas representation of ECS fields - @MikePaquette created a Canvas representation of our ECS fields and offered that it might be helpful to link to it from an issue in this repo in order to make it accessible, later. I'm creating this issue so that we don't lose sight of this resource. | non_reli | canvas representation of ecs fields mikepaquette created a canvas representation of our ecs fields and offered that it might be helpful to link to it from an issue in this repo in order to make it accessible later i m creating this issue so that we don t lose sight of this resource | 0 |
1,333 | 15,053,953,450 | IssuesEvent | 2021-02-03 16:51:35 | microsoft/VFSForGit | https://api.github.com/repos/microsoft/VFSForGit | closed | `gvfs clone` failing to find `GVFS.Hooks.exe` | affects: mount-reliability type: bug | Some users are struggling with `gvfs clone` with a warning that `GVFS.Hooks.exe` is missing. | True | `gvfs clone` failing to find `GVFS.Hooks.exe` - Some users are struggling with `gvfs clone` with a warning that `GVFS.Hooks.exe` is missing. | reli | gvfs clone failing to find gvfs hooks exe some users are struggling with gvfs clone with a warning that gvfs hooks exe is missing | 1 |
214,432 | 24,077,699,103 | IssuesEvent | 2022-09-19 01:01:30 | AkshayMukkavilli/Analyzing-the-Significance-of-Structure-in-Amazon-Review-Data-Using-Machine-Learning-Approaches | https://api.github.com/repos/AkshayMukkavilli/Analyzing-the-Significance-of-Structure-in-Amazon-Review-Data-Using-Machine-Learning-Approaches | opened | CVE-2022-36015 (Medium) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl | security vulnerability | ## CVE-2022-36015 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /FinalProject/requirements.txt</p>
<p>Path to vulnerable library: /teSource-ArchiveExtractor_8b9e071c-3b11-4aa9-ba60-cdeb60d053b7/20190525011350_65403/20190525011256_depth_0/9/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. When `RangeSize` receives values that do not fit into an `int64_t`, it crashes. We have patched the issue in GitHub commit 37e64539cd29fcfb814c4451152a60f5d107b0f0. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-36015>CVE-2022-36015</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-rh87-q4vg-m45j">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-rh87-q4vg-m45j</a></p>
<p>Release Date: 2022-09-16</p>
<p>Fix Resolution: tensorflow - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-cpu - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-gpu - 2.7.2,2.8.1,2.9.1,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-36015 (Medium) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2022-36015 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /FinalProject/requirements.txt</p>
<p>Path to vulnerable library: /teSource-ArchiveExtractor_8b9e071c-3b11-4aa9-ba60-cdeb60d053b7/20190525011350_65403/20190525011256_depth_0/9/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. When `RangeSize` receives values that do not fit into an `int64_t`, it crashes. We have patched the issue in GitHub commit 37e64539cd29fcfb814c4451152a60f5d107b0f0. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-36015>CVE-2022-36015</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-rh87-q4vg-m45j">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-rh87-q4vg-m45j</a></p>
<p>Release Date: 2022-09-16</p>
<p>Fix Resolution: tensorflow - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-cpu - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-gpu - 2.7.2,2.8.1,2.9.1,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_reli | cve medium detected in tensorflow whl cve medium severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file finalproject requirements txt path to vulnerable library tesource archiveextractor depth tensorflow tensorflow data purelib tensorflow dependency hierarchy x tensorflow whl vulnerable library vulnerability details tensorflow is an open source platform for machine learning when rangesize receives values that do not fit into an t it crashes we have patched the issue in github commit the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range there are no known workarounds for this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with mend | 0 |
358,089 | 25,177,072,422 | IssuesEvent | 2022-11-11 10:13:09 | vives-dust/framework | https://api.github.com/repos/vives-dust/framework | closed | Join documents based on id's | documentation enhancement | https://github.com/feathersjs-ecosystem/feathers-mongoose#populate
Apperently you need to whitelist this. Already did this for necessary models.
Here is some test code for devicesensors intersection (works):
```ts
import { Service, MongooseServiceOptions } from 'feathers-mongoose';
import { Application } from '../../declarations';
import { Id, NullableId, Params } from '@feathersjs/feathers';
export class Devicesensors<T = any> extends Service {
//eslint-disable-next-line @typescript-eslint/no-unused-vars
constructor(options: Partial<MongooseServiceOptions>, app: Application) {
super(options);
}
async get(id: Id, params: Params): Promise<T> {
return super.get(id, {
// Join documents based on id's
query: { $populate: ['sensortype_id', 'devicetype_id'] }
})
}
}
``` | 1.0 | Join documents based on id's - https://github.com/feathersjs-ecosystem/feathers-mongoose#populate
Apperently you need to whitelist this. Already did this for necessary models.
Here is some test code for devicesensors intersection (works):
```ts
import { Service, MongooseServiceOptions } from 'feathers-mongoose';
import { Application } from '../../declarations';
import { Id, NullableId, Params } from '@feathersjs/feathers';
export class Devicesensors<T = any> extends Service {
//eslint-disable-next-line @typescript-eslint/no-unused-vars
constructor(options: Partial<MongooseServiceOptions>, app: Application) {
super(options);
}
async get(id: Id, params: Params): Promise<T> {
return super.get(id, {
// Join documents based on id's
query: { $populate: ['sensortype_id', 'devicetype_id'] }
})
}
}
``` | non_reli | join documents based on id s apperently you need to whitelist this already did this for necessary models here is some test code for devicesensors intersection works ts import service mongooseserviceoptions from feathers mongoose import application from declarations import id nullableid params from feathersjs feathers export class devicesensors extends service eslint disable next line typescript eslint no unused vars constructor options partial app application super options async get id id params params promise return super get id join documents based on id s query populate | 0 |
64,619 | 6,912,899,567 | IssuesEvent | 2017-11-28 13:40:28 | ckeditor/ckeditor-dev | https://api.github.com/repos/ckeditor/ckeditor-dev | opened | Failing PFW nested_list test on built version | type:failingtest | ## Are you reporting a feature request or a bug?
Failing test
## Provide detailed reproduction steps (if any)
After building CKEditor test `/plugins/pastefromword/generated/nested_list` is failing. It seems like full preset allows too many plugins and some unwanted styling is present in markup.
| 1.0 | Failing PFW nested_list test on built version - ## Are you reporting a feature request or a bug?
Failing test
## Provide detailed reproduction steps (if any)
After building CKEditor test `/plugins/pastefromword/generated/nested_list` is failing. It seems like full preset allows too many plugins and some unwanted styling is present in markup.
| non_reli | failing pfw nested list test on built version are you reporting a feature request or a bug failing test provide detailed reproduction steps if any after building ckeditor test plugins pastefromword generated nested list is failing it seems like full preset allows too many plugins and some unwanted styling is present in markup | 0 |
177 | 5,027,162,964 | IssuesEvent | 2016-12-15 14:50:24 | LeastAuthority/leastauthority.com | https://api.github.com/repos/LeastAuthority/leastauthority.com | closed | Signup process can complete without usable grid due to S3 bucket creation failure | bug reliability signup | If the attempt to create the grid's S3 bucket fails (for example, because `<S3Error object with Error code: SlowDown>`), the signup process continues. The end result is a grid that is configured to use an S3 bucket that doesn't actually exist. This results in an error when trying to upload any shares to the grid. | True | Signup process can complete without usable grid due to S3 bucket creation failure - If the attempt to create the grid's S3 bucket fails (for example, because `<S3Error object with Error code: SlowDown>`), the signup process continues. The end result is a grid that is configured to use an S3 bucket that doesn't actually exist. This results in an error when trying to upload any shares to the grid. | reli | signup process can complete without usable grid due to bucket creation failure if the attempt to create the grid s bucket fails for example because the signup process continues the end result is a grid that is configured to use an bucket that doesn t actually exist this results in an error when trying to upload any shares to the grid | 1 |
2,394 | 25,128,021,917 | IssuesEvent | 2022-11-09 13:14:56 | Azure/PSRule.Rules.Azure | https://api.github.com/repos/Azure/PSRule.Rules.Azure | closed | Azure Database for MySQL should have backup configured | rule: mysql pillar: reliability | # Rule request
## Suggested rule change
Azure Database for MySQL should have backups of the data files and the transaction log.
<!-- A clear and concise description of the what the rule should check and why. -->
## Applies to the following
The rule applies to the following:
- Resource type: **[Microsoft.DBforMySQL/servers]**
## Additional context
Lets use the `Reliability` pillar for this one.
- [Backup and restore in Azure Database for MySQL](https://learn.microsoft.com/azure/mysql/single-server/concepts-backup)
- [Azure template reference](https://learn.microsoft.com/azure/templates/microsoft.dbformysql/servers) | True | Azure Database for MySQL should have backup configured - # Rule request
## Suggested rule change
Azure Database for MySQL should have backups of the data files and the transaction log.
<!-- A clear and concise description of the what the rule should check and why. -->
## Applies to the following
The rule applies to the following:
- Resource type: **[Microsoft.DBforMySQL/servers]**
## Additional context
Lets use the `Reliability` pillar for this one.
- [Backup and restore in Azure Database for MySQL](https://learn.microsoft.com/azure/mysql/single-server/concepts-backup)
- [Azure template reference](https://learn.microsoft.com/azure/templates/microsoft.dbformysql/servers) | reli | azure database for mysql should have backup configured rule request suggested rule change azure database for mysql should have backups of the data files and the transaction log applies to the following the rule applies to the following resource type additional context lets use the reliability pillar for this one | 1 |
912 | 11,582,167,044 | IssuesEvent | 2020-02-22 01:36:41 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | C#: Typing goto case (ValueTuple<,>) in switch statement crashes Visual Studio | Area-Compilers Bug Tenet-Reliability | **Version Used**:
VS2019 16.4.2
.NET Core 3.1
**Steps to Reproduce**:
1. Use the following code
```c#
static void Example(object a, object b)
{
switch ((a, b))
{
case (string str, int[] arr) _:
break;
case (string str, decimal[] arr) _:
break;
}
}
```
2. Copy/paste ``goto case (string str, decimal[] arr)`` before the first ``break;``
**Expected Behavior**:
The goto case statement might not be valid, but Visual Studio should keep running.
**Actual Behavior**:
VS2019 16.4.2 crashes.
**Exception info from Windows event log**:
```
Application: devenv.exe
Framework Version: v4.0.30319
Description: The application requested process termination through System.Environment.FailFast(string message).
Message: System.InvalidOperationException: This program location is thought to be unreachable.
at Microsoft.CodeAnalysis.CSharp.Binder.BindDeconstructionVariable(TypeWithAnnotations declTypeWithAnnotations, SingleVariableDesignationSyntax designation, CSharpSyntaxNode syntax, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.Binder.BindDeclarationVariablesForErrorRecovery(TypeWithAnnotations declTypeWithAnnotations, VariableDesignationSyntax node, CSharpSyntaxNode syntax, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.Binder.BindDeclarationExpressionAsError(DeclarationExpressionSyntax node, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpressionInternal(ExpressionSyntax node, DiagnosticBag diagnostics, Boolean invoked, Boolean indexed)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpression(ExpressionSyntax node, DiagnosticBag diagnostics, Boolean invoked, Boolean indexed)
at Microsoft.CodeAnalysis.CSharp.Binder.BindValue(ExpressionSyntax node, DiagnosticBag diagnostics, BindValueKind valueKind)
at Microsoft.CodeAnalysis.CSharp.Binder.BindTupleExpression(TupleExpressionSyntax node, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpressionInternal(ExpressionSyntax node, DiagnosticBag diagnostics, Boolean invoked, Boolean indexed)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpression(ExpressionSyntax node, DiagnosticBag diagnostics, Boolean invoked, Boolean indexed)
at Microsoft.CodeAnalysis.CSharp.Binder.BindNamespaceOrTypeOrExpression(ExpressionSyntax node, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.CSharpSemanticModel.Bind(Binder binder, CSharpSyntaxNode node, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.MethodBodySemanticModel.Bind(Binder binder, CSharpSyntaxNode node, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetBoundNodes(CSharpSyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetLowerBoundNode(CSharpSyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetTypeOfTupleLiteral(TupleExpressionSyntax declaratorSyntax)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetDeclaredSymbol(TupleExpressionSyntax declaratorSyntax, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.CSharp.SyntaxTreeSemanticModel.GetDeclaredSymbol(TupleExpressionSyntax declaratorSyntax, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.CSharp.CSharpSemanticModel.GetDeclaredSymbolCore(SyntaxNode node, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.CSharp.CSharpSemanticModel.GetDeclaredSymbolsCore(SyntaxNode declaration, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.Diagnostics.SuppressMessageAttributeState.IsDiagnosticSuppressed(String id, Location location, Func`3 getSemanticModel, SuppressMessageInfo& info)
at Microsoft.CodeAnalysis.Diagnostics.SuppressMessageAttributeState.ApplySourceSuppressions(Diagnostic diagnostic, Func`3 getSemanticModel, ISymbol symbolOpt)
at Microsoft.CodeAnalysis.Diagnostics.AnalyzerDriver.FilterDiagnosticsSuppressedInSource(ImmutableArray`1 diagnostics, Compilation compilation, SuppressMessageAttributeState suppressMessageState, Func`3 getSemanticModel)
at Microsoft.CodeAnalysis.Diagnostics.AnalyzerDriver.FilterDiagnosticsSuppressedInSourceOrByAnalyzers(ImmutableArray`1 diagnostics, Compilation compilation)
at Microsoft.CodeAnalysis.Diagnostics.AnalysisResultBuilder.ApplySuppressionsAndStoreAnalysisResult(AnalysisScope analysisScope, AnalyzerDriver driver, Compilation compilation, Func`2 getAnalyzerActionCounts, Boolean fullAnalysisResultForAnalyzersInScope)
at Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers.<ComputeAnalyzerDiagnosticsCoreAsync>d__64.MoveNext()
Stack:
at System.Environment.FailFast(System.String, System.Exception)
at Microsoft.CodeAnalysis.FailFast.OnFatalException(System.Exception)
at Microsoft.CodeAnalysis.FatalError.Report(System.Exception, System.Action`1<System.Exception>)
at Microsoft.CodeAnalysis.FatalError.ReportUnlessCanceled(System.Exception)
at Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers+<ComputeAnalyzerDiagnosticsCoreAsync>d__64.MoveNext()
at Microsoft.CodeAnalysis.CSharp.Binder.BindDeconstructionVariable(Microsoft.CodeAnalysis.CSharp.Symbols.TypeWithAnnotations, Microsoft.CodeAnalysis.CSharp.Syntax.SingleVariableDesignationSyntax, Microsoft.CodeAnalysis.CSharp.CSharpSyntaxNode, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.Binder.BindDeclarationVariablesForErrorRecovery(Microsoft.CodeAnalysis.CSharp.Symbols.TypeWithAnnotations, Microsoft.CodeAnalysis.CSharp.Syntax.VariableDesignationSyntax, Microsoft.CodeAnalysis.CSharp.CSharpSyntaxNode, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.Binder.BindDeclarationExpressionAsError(Microsoft.CodeAnalysis.CSharp.Syntax.DeclarationExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpressionInternal(Microsoft.CodeAnalysis.CSharp.Syntax.ExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag, Boolean, Boolean)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpression(Microsoft.CodeAnalysis.CSharp.Syntax.ExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag, Boolean, Boolean)
at Microsoft.CodeAnalysis.CSharp.Binder.BindValue(Microsoft.CodeAnalysis.CSharp.Syntax.ExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag, BindValueKind)
at Microsoft.CodeAnalysis.CSharp.Binder.BindTupleExpression(Microsoft.CodeAnalysis.CSharp.Syntax.TupleExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpressionInternal(Microsoft.CodeAnalysis.CSharp.Syntax.ExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag, Boolean, Boolean)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpression(Microsoft.CodeAnalysis.CSharp.Syntax.ExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag, Boolean, Boolean)
at Microsoft.CodeAnalysis.CSharp.Binder.BindNamespaceOrTypeOrExpression(Microsoft.CodeAnalysis.CSharp.Syntax.ExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.CSharpSemanticModel.Bind(Microsoft.CodeAnalysis.CSharp.Binder, Microsoft.CodeAnalysis.CSharp.CSharpSyntaxNode, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.MethodBodySemanticModel.Bind(Microsoft.CodeAnalysis.CSharp.Binder, Microsoft.CodeAnalysis.CSharp.CSharpSyntaxNode, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetBoundNodes(Microsoft.CodeAnalysis.CSharp.CSharpSyntaxNode)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetLowerBoundNode(Microsoft.CodeAnalysis.CSharp.CSharpSyntaxNode)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetTypeOfTupleLiteral(Microsoft.CodeAnalysis.CSharp.Syntax.TupleExpressionSyntax)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetDeclaredSymbol(Microsoft.CodeAnalysis.CSharp.Syntax.TupleExpressionSyntax, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.CSharp.SyntaxTreeSemanticModel.GetDeclaredSymbol(Microsoft.CodeAnalysis.CSharp.Syntax.TupleExpressionSyntax, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.CSharp.CSharpSemanticModel.GetDeclaredSymbolCore(Microsoft.CodeAnalysis.SyntaxNode, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.CSharp.CSharpSemanticModel.GetDeclaredSymbolsCore(Microsoft.CodeAnalysis.SyntaxNode, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.Diagnostics.SuppressMessageAttributeState.IsDiagnosticSuppressed(System.String, Microsoft.CodeAnalysis.Location, System.Func`3<Microsoft.CodeAnalysis.Compilation,Microsoft.CodeAnalysis.SyntaxTree,Microsoft.CodeAnalysis.SemanticModel>, Microsoft.CodeAnalysis.Diagnostics.SuppressMessageInfo ByRef)
at Microsoft.CodeAnalysis.Diagnostics.SuppressMessageAttributeState.ApplySourceSuppressions(Microsoft.CodeAnalysis.Diagnostic, System.Func`3<Microsoft.CodeAnalysis.Compilation,Microsoft.CodeAnalysis.SyntaxTree,Microsoft.CodeAnalysis.SemanticModel>, Microsoft.CodeAnalysis.ISymbol)
at Microsoft.CodeAnalysis.Diagnostics.AnalyzerDriver.FilterDiagnosticsSuppressedInSource(System.Collections.Immutable.ImmutableArray`1<Microsoft.CodeAnalysis.Diagnostic>, Microsoft.CodeAnalysis.Compilation, Microsoft.CodeAnalysis.Diagnostics.SuppressMessageAttributeState, System.Func`3<Microsoft.CodeAnalysis.Compilation,Microsoft.CodeAnalysis.SyntaxTree,Microsoft.CodeAnalysis.SemanticModel>)
at Microsoft.CodeAnalysis.Diagnostics.AnalyzerDriver.FilterDiagnosticsSuppressedInSourceOrByAnalyzers(System.Collections.Immutable.ImmutableArray`1<Microsoft.CodeAnalysis.Diagnostic>, Microsoft.CodeAnalysis.Compilation)
at Microsoft.CodeAnalysis.Diagnostics.AnalysisResultBuilder.ApplySuppressionsAndStoreAnalysisResult(Microsoft.CodeAnalysis.Diagnostics.AnalysisScope, Microsoft.CodeAnalysis.Diagnostics.AnalyzerDriver, Microsoft.CodeAnalysis.Compilation, System.Func`2<Microsoft.CodeAnalysis.Diagnostics.DiagnosticAnalyzer,Microsoft.CodeAnalysis.Diagnostics.Telemetry.AnalyzerActionCounts>, Boolean)
at Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers+<ComputeAnalyzerDiagnosticsCoreAsync>d__64.MoveNext()
at System.Runtime.CompilerServices.AsyncTaskMethodBuilder.Start[[Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers+<ComputeAnalyzerDiagnosticsCoreAsync>d__64, Microsoft.CodeAnalysis, Version=3.4.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]](<ComputeAnalyzerDiagnosticsCoreAsync>d__64 ByRef)
at Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers.ComputeAnalyzerDiagnosticsCoreAsync(Microsoft.CodeAnalysis.Diagnostics.AnalyzerDriver, Microsoft.CodeAnalysis.Diagnostics.AsyncQueue`1<Microsoft.CodeAnalysis.Diagnostics.CompilationEvent>, Microsoft.CodeAnalysis.Diagnostics.AnalysisScope, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers+<>c__DisplayClass57_1+<<ComputeAnalyzerDiagnosticsAsync>b__1>d.MoveNext()
at System.Runtime.CompilerServices.AsyncTaskMethodBuilder.Start[[Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers+<>c__DisplayClass57_1+<<ComputeAnalyzerDiagnosticsAsync>b__1>d, Microsoft.CodeAnalysis, Version=3.4.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]](<<ComputeAnalyzerDiagnosticsAsync>b__1>d ByRef)
at Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers+<>c__DisplayClass57_1.<ComputeAnalyzerDiagnosticsAsync>b__1()
at System.Threading.Tasks.Task`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].InnerInvoke()
at System.Threading.Tasks.Task.Execute()
at System.Threading.Tasks.Task.ExecutionContextCallback(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.Task.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()
``` | True | C#: Typing goto case (ValueTuple<,>) in switch statement crashes Visual Studio - **Version Used**:
VS2019 16.4.2
.NET Core 3.1
**Steps to Reproduce**:
1. Use the following code
```c#
static void Example(object a, object b)
{
switch ((a, b))
{
case (string str, int[] arr) _:
break;
case (string str, decimal[] arr) _:
break;
}
}
```
2. Copy/paste ``goto case (string str, decimal[] arr)`` before the first ``break;``
**Expected Behavior**:
The goto case statement might not be valid, but Visual Studio should keep running.
**Actual Behavior**:
VS2019 16.4.2 crashes.
**Exception info from Windows event log**:
```
Application: devenv.exe
Framework Version: v4.0.30319
Description: The application requested process termination through System.Environment.FailFast(string message).
Message: System.InvalidOperationException: This program location is thought to be unreachable.
at Microsoft.CodeAnalysis.CSharp.Binder.BindDeconstructionVariable(TypeWithAnnotations declTypeWithAnnotations, SingleVariableDesignationSyntax designation, CSharpSyntaxNode syntax, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.Binder.BindDeclarationVariablesForErrorRecovery(TypeWithAnnotations declTypeWithAnnotations, VariableDesignationSyntax node, CSharpSyntaxNode syntax, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.Binder.BindDeclarationExpressionAsError(DeclarationExpressionSyntax node, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpressionInternal(ExpressionSyntax node, DiagnosticBag diagnostics, Boolean invoked, Boolean indexed)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpression(ExpressionSyntax node, DiagnosticBag diagnostics, Boolean invoked, Boolean indexed)
at Microsoft.CodeAnalysis.CSharp.Binder.BindValue(ExpressionSyntax node, DiagnosticBag diagnostics, BindValueKind valueKind)
at Microsoft.CodeAnalysis.CSharp.Binder.BindTupleExpression(TupleExpressionSyntax node, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpressionInternal(ExpressionSyntax node, DiagnosticBag diagnostics, Boolean invoked, Boolean indexed)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpression(ExpressionSyntax node, DiagnosticBag diagnostics, Boolean invoked, Boolean indexed)
at Microsoft.CodeAnalysis.CSharp.Binder.BindNamespaceOrTypeOrExpression(ExpressionSyntax node, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.CSharpSemanticModel.Bind(Binder binder, CSharpSyntaxNode node, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.MethodBodySemanticModel.Bind(Binder binder, CSharpSyntaxNode node, DiagnosticBag diagnostics)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetBoundNodes(CSharpSyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetLowerBoundNode(CSharpSyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetTypeOfTupleLiteral(TupleExpressionSyntax declaratorSyntax)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetDeclaredSymbol(TupleExpressionSyntax declaratorSyntax, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.CSharp.SyntaxTreeSemanticModel.GetDeclaredSymbol(TupleExpressionSyntax declaratorSyntax, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.CSharp.CSharpSemanticModel.GetDeclaredSymbolCore(SyntaxNode node, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.CSharp.CSharpSemanticModel.GetDeclaredSymbolsCore(SyntaxNode declaration, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.Diagnostics.SuppressMessageAttributeState.IsDiagnosticSuppressed(String id, Location location, Func`3 getSemanticModel, SuppressMessageInfo& info)
at Microsoft.CodeAnalysis.Diagnostics.SuppressMessageAttributeState.ApplySourceSuppressions(Diagnostic diagnostic, Func`3 getSemanticModel, ISymbol symbolOpt)
at Microsoft.CodeAnalysis.Diagnostics.AnalyzerDriver.FilterDiagnosticsSuppressedInSource(ImmutableArray`1 diagnostics, Compilation compilation, SuppressMessageAttributeState suppressMessageState, Func`3 getSemanticModel)
at Microsoft.CodeAnalysis.Diagnostics.AnalyzerDriver.FilterDiagnosticsSuppressedInSourceOrByAnalyzers(ImmutableArray`1 diagnostics, Compilation compilation)
at Microsoft.CodeAnalysis.Diagnostics.AnalysisResultBuilder.ApplySuppressionsAndStoreAnalysisResult(AnalysisScope analysisScope, AnalyzerDriver driver, Compilation compilation, Func`2 getAnalyzerActionCounts, Boolean fullAnalysisResultForAnalyzersInScope)
at Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers.<ComputeAnalyzerDiagnosticsCoreAsync>d__64.MoveNext()
Stack:
at System.Environment.FailFast(System.String, System.Exception)
at Microsoft.CodeAnalysis.FailFast.OnFatalException(System.Exception)
at Microsoft.CodeAnalysis.FatalError.Report(System.Exception, System.Action`1<System.Exception>)
at Microsoft.CodeAnalysis.FatalError.ReportUnlessCanceled(System.Exception)
at Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers+<ComputeAnalyzerDiagnosticsCoreAsync>d__64.MoveNext()
at Microsoft.CodeAnalysis.CSharp.Binder.BindDeconstructionVariable(Microsoft.CodeAnalysis.CSharp.Symbols.TypeWithAnnotations, Microsoft.CodeAnalysis.CSharp.Syntax.SingleVariableDesignationSyntax, Microsoft.CodeAnalysis.CSharp.CSharpSyntaxNode, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.Binder.BindDeclarationVariablesForErrorRecovery(Microsoft.CodeAnalysis.CSharp.Symbols.TypeWithAnnotations, Microsoft.CodeAnalysis.CSharp.Syntax.VariableDesignationSyntax, Microsoft.CodeAnalysis.CSharp.CSharpSyntaxNode, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.Binder.BindDeclarationExpressionAsError(Microsoft.CodeAnalysis.CSharp.Syntax.DeclarationExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpressionInternal(Microsoft.CodeAnalysis.CSharp.Syntax.ExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag, Boolean, Boolean)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpression(Microsoft.CodeAnalysis.CSharp.Syntax.ExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag, Boolean, Boolean)
at Microsoft.CodeAnalysis.CSharp.Binder.BindValue(Microsoft.CodeAnalysis.CSharp.Syntax.ExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag, BindValueKind)
at Microsoft.CodeAnalysis.CSharp.Binder.BindTupleExpression(Microsoft.CodeAnalysis.CSharp.Syntax.TupleExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpressionInternal(Microsoft.CodeAnalysis.CSharp.Syntax.ExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag, Boolean, Boolean)
at Microsoft.CodeAnalysis.CSharp.Binder.BindExpression(Microsoft.CodeAnalysis.CSharp.Syntax.ExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag, Boolean, Boolean)
at Microsoft.CodeAnalysis.CSharp.Binder.BindNamespaceOrTypeOrExpression(Microsoft.CodeAnalysis.CSharp.Syntax.ExpressionSyntax, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.CSharpSemanticModel.Bind(Microsoft.CodeAnalysis.CSharp.Binder, Microsoft.CodeAnalysis.CSharp.CSharpSyntaxNode, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.MethodBodySemanticModel.Bind(Microsoft.CodeAnalysis.CSharp.Binder, Microsoft.CodeAnalysis.CSharp.CSharpSyntaxNode, Microsoft.CodeAnalysis.DiagnosticBag)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetBoundNodes(Microsoft.CodeAnalysis.CSharp.CSharpSyntaxNode)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetLowerBoundNode(Microsoft.CodeAnalysis.CSharp.CSharpSyntaxNode)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetTypeOfTupleLiteral(Microsoft.CodeAnalysis.CSharp.Syntax.TupleExpressionSyntax)
at Microsoft.CodeAnalysis.CSharp.MemberSemanticModel.GetDeclaredSymbol(Microsoft.CodeAnalysis.CSharp.Syntax.TupleExpressionSyntax, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.CSharp.SyntaxTreeSemanticModel.GetDeclaredSymbol(Microsoft.CodeAnalysis.CSharp.Syntax.TupleExpressionSyntax, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.CSharp.CSharpSemanticModel.GetDeclaredSymbolCore(Microsoft.CodeAnalysis.SyntaxNode, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.CSharp.CSharpSemanticModel.GetDeclaredSymbolsCore(Microsoft.CodeAnalysis.SyntaxNode, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.Diagnostics.SuppressMessageAttributeState.IsDiagnosticSuppressed(System.String, Microsoft.CodeAnalysis.Location, System.Func`3<Microsoft.CodeAnalysis.Compilation,Microsoft.CodeAnalysis.SyntaxTree,Microsoft.CodeAnalysis.SemanticModel>, Microsoft.CodeAnalysis.Diagnostics.SuppressMessageInfo ByRef)
at Microsoft.CodeAnalysis.Diagnostics.SuppressMessageAttributeState.ApplySourceSuppressions(Microsoft.CodeAnalysis.Diagnostic, System.Func`3<Microsoft.CodeAnalysis.Compilation,Microsoft.CodeAnalysis.SyntaxTree,Microsoft.CodeAnalysis.SemanticModel>, Microsoft.CodeAnalysis.ISymbol)
at Microsoft.CodeAnalysis.Diagnostics.AnalyzerDriver.FilterDiagnosticsSuppressedInSource(System.Collections.Immutable.ImmutableArray`1<Microsoft.CodeAnalysis.Diagnostic>, Microsoft.CodeAnalysis.Compilation, Microsoft.CodeAnalysis.Diagnostics.SuppressMessageAttributeState, System.Func`3<Microsoft.CodeAnalysis.Compilation,Microsoft.CodeAnalysis.SyntaxTree,Microsoft.CodeAnalysis.SemanticModel>)
at Microsoft.CodeAnalysis.Diagnostics.AnalyzerDriver.FilterDiagnosticsSuppressedInSourceOrByAnalyzers(System.Collections.Immutable.ImmutableArray`1<Microsoft.CodeAnalysis.Diagnostic>, Microsoft.CodeAnalysis.Compilation)
at Microsoft.CodeAnalysis.Diagnostics.AnalysisResultBuilder.ApplySuppressionsAndStoreAnalysisResult(Microsoft.CodeAnalysis.Diagnostics.AnalysisScope, Microsoft.CodeAnalysis.Diagnostics.AnalyzerDriver, Microsoft.CodeAnalysis.Compilation, System.Func`2<Microsoft.CodeAnalysis.Diagnostics.DiagnosticAnalyzer,Microsoft.CodeAnalysis.Diagnostics.Telemetry.AnalyzerActionCounts>, Boolean)
at Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers+<ComputeAnalyzerDiagnosticsCoreAsync>d__64.MoveNext()
at System.Runtime.CompilerServices.AsyncTaskMethodBuilder.Start[[Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers+<ComputeAnalyzerDiagnosticsCoreAsync>d__64, Microsoft.CodeAnalysis, Version=3.4.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]](<ComputeAnalyzerDiagnosticsCoreAsync>d__64 ByRef)
at Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers.ComputeAnalyzerDiagnosticsCoreAsync(Microsoft.CodeAnalysis.Diagnostics.AnalyzerDriver, Microsoft.CodeAnalysis.Diagnostics.AsyncQueue`1<Microsoft.CodeAnalysis.Diagnostics.CompilationEvent>, Microsoft.CodeAnalysis.Diagnostics.AnalysisScope, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers+<>c__DisplayClass57_1+<<ComputeAnalyzerDiagnosticsAsync>b__1>d.MoveNext()
at System.Runtime.CompilerServices.AsyncTaskMethodBuilder.Start[[Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers+<>c__DisplayClass57_1+<<ComputeAnalyzerDiagnosticsAsync>b__1>d, Microsoft.CodeAnalysis, Version=3.4.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]](<<ComputeAnalyzerDiagnosticsAsync>b__1>d ByRef)
at Microsoft.CodeAnalysis.Diagnostics.CompilationWithAnalyzers+<>c__DisplayClass57_1.<ComputeAnalyzerDiagnosticsAsync>b__1()
at System.Threading.Tasks.Task`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].InnerInvoke()
at System.Threading.Tasks.Task.Execute()
at System.Threading.Tasks.Task.ExecutionContextCallback(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.Task.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()
``` | reli | c typing goto case valuetuple in switch statement crashes visual studio version used net core steps to reproduce use the following code c static void example object a object b switch a b case string str int arr break case string str decimal arr break copy paste goto case string str decimal arr before the first break expected behavior the goto case statement might not be valid but visual studio should keep running actual behavior crashes exception info from windows event log application devenv exe framework version description the application requested process termination through system environment failfast string message message system invalidoperationexception this program location is thought to be unreachable at microsoft codeanalysis csharp binder binddeconstructionvariable typewithannotations decltypewithannotations singlevariabledesignationsyntax designation csharpsyntaxnode syntax diagnosticbag diagnostics at microsoft codeanalysis csharp binder binddeclarationvariablesforerrorrecovery typewithannotations decltypewithannotations variabledesignationsyntax node csharpsyntaxnode syntax diagnosticbag diagnostics at microsoft codeanalysis csharp binder binddeclarationexpressionaserror declarationexpressionsyntax node diagnosticbag diagnostics at microsoft codeanalysis csharp binder bindexpressioninternal expressionsyntax node diagnosticbag diagnostics boolean invoked boolean indexed at microsoft codeanalysis csharp binder bindexpression expressionsyntax node diagnosticbag diagnostics boolean invoked boolean indexed at microsoft codeanalysis csharp binder bindvalue expressionsyntax node diagnosticbag diagnostics bindvaluekind valuekind at microsoft codeanalysis csharp binder bindtupleexpression tupleexpressionsyntax node diagnosticbag diagnostics at microsoft codeanalysis csharp binder bindexpressioninternal expressionsyntax node diagnosticbag diagnostics boolean invoked boolean indexed at microsoft codeanalysis csharp binder bindexpression 
expressionsyntax node diagnosticbag diagnostics boolean invoked boolean indexed at microsoft codeanalysis csharp binder bindnamespaceortypeorexpression expressionsyntax node diagnosticbag diagnostics at microsoft codeanalysis csharp csharpsemanticmodel bind binder binder csharpsyntaxnode node diagnosticbag diagnostics at microsoft codeanalysis csharp methodbodysemanticmodel bind binder binder csharpsyntaxnode node diagnosticbag diagnostics at microsoft codeanalysis csharp membersemanticmodel getboundnodes csharpsyntaxnode node at microsoft codeanalysis csharp membersemanticmodel getlowerboundnode csharpsyntaxnode node at microsoft codeanalysis csharp membersemanticmodel gettypeoftupleliteral tupleexpressionsyntax declaratorsyntax at microsoft codeanalysis csharp membersemanticmodel getdeclaredsymbol tupleexpressionsyntax declaratorsyntax cancellationtoken cancellationtoken at microsoft codeanalysis csharp syntaxtreesemanticmodel getdeclaredsymbol tupleexpressionsyntax declaratorsyntax cancellationtoken cancellationtoken at microsoft codeanalysis csharp csharpsemanticmodel getdeclaredsymbolcore syntaxnode node cancellationtoken cancellationtoken at microsoft codeanalysis csharp csharpsemanticmodel getdeclaredsymbolscore syntaxnode declaration cancellationtoken cancellationtoken at microsoft codeanalysis diagnostics suppressmessageattributestate isdiagnosticsuppressed string id location location func getsemanticmodel suppressmessageinfo info at microsoft codeanalysis diagnostics suppressmessageattributestate applysourcesuppressions diagnostic diagnostic func getsemanticmodel isymbol symbolopt at microsoft codeanalysis diagnostics analyzerdriver filterdiagnosticssuppressedinsource immutablearray diagnostics compilation compilation suppressmessageattributestate suppressmessagestate func getsemanticmodel at microsoft codeanalysis diagnostics analyzerdriver filterdiagnosticssuppressedinsourceorbyanalyzers immutablearray diagnostics compilation compilation at microsoft 
codeanalysis diagnostics analysisresultbuilder applysuppressionsandstoreanalysisresult analysisscope analysisscope analyzerdriver driver compilation compilation func getanalyzeractioncounts boolean fullanalysisresultforanalyzersinscope at microsoft codeanalysis diagnostics compilationwithanalyzers d movenext stack at system environment failfast system string system exception at microsoft codeanalysis failfast onfatalexception system exception at microsoft codeanalysis fatalerror report system exception system action at microsoft codeanalysis fatalerror reportunlesscanceled system exception at microsoft codeanalysis diagnostics compilationwithanalyzers d movenext at microsoft codeanalysis csharp binder binddeconstructionvariable microsoft codeanalysis csharp symbols typewithannotations microsoft codeanalysis csharp syntax singlevariabledesignationsyntax microsoft codeanalysis csharp csharpsyntaxnode microsoft codeanalysis diagnosticbag at microsoft codeanalysis csharp binder binddeclarationvariablesforerrorrecovery microsoft codeanalysis csharp symbols typewithannotations microsoft codeanalysis csharp syntax variabledesignationsyntax microsoft codeanalysis csharp csharpsyntaxnode microsoft codeanalysis diagnosticbag at microsoft codeanalysis csharp binder binddeclarationexpressionaserror microsoft codeanalysis csharp syntax declarationexpressionsyntax microsoft codeanalysis diagnosticbag at microsoft codeanalysis csharp binder bindexpressioninternal microsoft codeanalysis csharp syntax expressionsyntax microsoft codeanalysis diagnosticbag boolean boolean at microsoft codeanalysis csharp binder bindexpression microsoft codeanalysis csharp syntax expressionsyntax microsoft codeanalysis diagnosticbag boolean boolean at microsoft codeanalysis csharp binder bindvalue microsoft codeanalysis csharp syntax expressionsyntax microsoft codeanalysis diagnosticbag bindvaluekind at microsoft codeanalysis csharp binder bindtupleexpression microsoft codeanalysis csharp syntax 
tupleexpressionsyntax microsoft codeanalysis diagnosticbag at microsoft codeanalysis csharp binder bindexpressioninternal microsoft codeanalysis csharp syntax expressionsyntax microsoft codeanalysis diagnosticbag boolean boolean at microsoft codeanalysis csharp binder bindexpression microsoft codeanalysis csharp syntax expressionsyntax microsoft codeanalysis diagnosticbag boolean boolean at microsoft codeanalysis csharp binder bindnamespaceortypeorexpression microsoft codeanalysis csharp syntax expressionsyntax microsoft codeanalysis diagnosticbag at microsoft codeanalysis csharp csharpsemanticmodel bind microsoft codeanalysis csharp binder microsoft codeanalysis csharp csharpsyntaxnode microsoft codeanalysis diagnosticbag at microsoft codeanalysis csharp methodbodysemanticmodel bind microsoft codeanalysis csharp binder microsoft codeanalysis csharp csharpsyntaxnode microsoft codeanalysis diagnosticbag at microsoft codeanalysis csharp membersemanticmodel getboundnodes microsoft codeanalysis csharp csharpsyntaxnode at microsoft codeanalysis csharp membersemanticmodel getlowerboundnode microsoft codeanalysis csharp csharpsyntaxnode at microsoft codeanalysis csharp membersemanticmodel gettypeoftupleliteral microsoft codeanalysis csharp syntax tupleexpressionsyntax at microsoft codeanalysis csharp membersemanticmodel getdeclaredsymbol microsoft codeanalysis csharp syntax tupleexpressionsyntax system threading cancellationtoken at microsoft codeanalysis csharp syntaxtreesemanticmodel getdeclaredsymbol microsoft codeanalysis csharp syntax tupleexpressionsyntax system threading cancellationtoken at microsoft codeanalysis csharp csharpsemanticmodel getdeclaredsymbolcore microsoft codeanalysis syntaxnode system threading cancellationtoken at microsoft codeanalysis csharp csharpsemanticmodel getdeclaredsymbolscore microsoft codeanalysis syntaxnode system threading cancellationtoken at microsoft codeanalysis diagnostics suppressmessageattributestate isdiagnosticsuppressed 
system string microsoft codeanalysis location system func microsoft codeanalysis diagnostics suppressmessageinfo byref at microsoft codeanalysis diagnostics suppressmessageattributestate applysourcesuppressions microsoft codeanalysis diagnostic system func microsoft codeanalysis isymbol at microsoft codeanalysis diagnostics analyzerdriver filterdiagnosticssuppressedinsource system collections immutable immutablearray microsoft codeanalysis compilation microsoft codeanalysis diagnostics suppressmessageattributestate system func at microsoft codeanalysis diagnostics analyzerdriver filterdiagnosticssuppressedinsourceorbyanalyzers system collections immutable immutablearray microsoft codeanalysis compilation at microsoft codeanalysis diagnostics analysisresultbuilder applysuppressionsandstoreanalysisresult microsoft codeanalysis diagnostics analysisscope microsoft codeanalysis diagnostics analyzerdriver microsoft codeanalysis compilation system func boolean at microsoft codeanalysis diagnostics compilationwithanalyzers d movenext at system runtime compilerservices asynctaskmethodbuilder start d byref at microsoft codeanalysis diagnostics compilationwithanalyzers computeanalyzerdiagnosticscoreasync microsoft codeanalysis diagnostics analyzerdriver microsoft codeanalysis diagnostics asyncqueue microsoft codeanalysis diagnostics analysisscope system threading cancellationtoken at microsoft codeanalysis diagnostics compilationwithanalyzers c b d movenext at system runtime compilerservices asynctaskmethodbuilder start b d byref at microsoft codeanalysis diagnostics compilationwithanalyzers c b at system threading tasks task innerinvoke at system threading tasks task execute at system threading tasks task executioncontextcallback system object at system threading executioncontext runinternal system threading executioncontext system threading contextcallback system object boolean at system threading executioncontext run system threading executioncontext system threading 
contextcallback system object boolean at system threading tasks task executewiththreadlocal system threading tasks task byref at system threading tasks task executeentry boolean at system threading tasks task system threading ithreadpoolworkitem executeworkitem at system threading threadpoolworkqueue dispatch at system threading threadpoolwaitcallback performwaitcallback | 1 |
201 | 5,325,715,786 | IssuesEvent | 2017-02-15 00:45:51 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | opened | Processes may be leaked when docker is killed repeatedly in a short time frame | area/docker area/reliability sig/node | Forked from #37580
If docker gets killed repeatedly in a short time frame (while kubelet is running and trying to create containers), some container processes may get reparented to PID 1 and continue running, but are no longer visible from the docker daemon.
This can be reproduced by running the `Network should recover from ip leaks` test:
- test creates 100 pods with the pause image.
- test restarts docker (`systemctl restart docker`) 6 times, with 20s interval in between.
- test completes successfully.
- Run `ps -C "pause" -f` and see multiple processes running the `pause` command still alive.
- Run `docker ps` and see no running container.
Running the test a few times (< 3) should reproduce the issue.
/cc @kubernetes/sig-node-bugs | True | Processes may be leaked when docker is killed repeatedly in a short time frame - Forked from #37580
If docker gets killed repeatedly in a short time frame (while kubelet is running and trying to create containers), some container processes may get reparented to PID 1 and continue running, but are no longer visible from the docker daemon.
This can be reproduced by running the `Network should recover from ip leaks` test:
- test creates 100 pods with the pause image.
- test restarts docker (`systemctl restart docker`) 6 times, with 20s interval in between.
- test completes successfully.
- Run `ps -C "pause" -f` and see multiple processes running the `pause` command still alive.
- Run `docker ps` and see no running container.
Running the test a few times (< 3) should reproduce the issue.
/cc @kubernetes/sig-node-bugs | reli | processes may be leaked when docker are killed repeatedly in a short time frame forked from if docker gets killed repeated in a short time frame while kubelet is running and trying to create containers some container processes may get reparented to pid and continue running but no longer visible from the docker daemon this can be produced by running the network should recover from ip leaks test creates pods with the pause image test restarts docker systemctl restart docker times with interval in between test completes successfully run ps c pause f and see multiple processes running the pause command still alive run docker ps and see no running container running the test a few times should reproduce the issue cc kubernetes sig node bugs | 1 |
1,656 | 18,069,568,800 | IssuesEvent | 2021-09-21 00:09:12 | Azure/azure-sdk-for-java | https://api.github.com/repos/Azure/azure-sdk-for-java | closed | [BUG] sending batch every minute but still gets error intermittently "The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'" | question Event Hubs Client customer-reported pillar-reliability needs-author-feedback | **Describe the bug**
I am sending a message batch to Event Hubs via EventHubProducerClient every minute, but the link tracker still intermittently closes the link, reporting that the connection was inactive (no send or receive) for more than the allowed 300000 milliseconds:
The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'
***Exception or Stack Trace***
```
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SessionHandler - onSessionRemoteClose connectionId[ehub_dev], entityName[MM_9999990000007], condition[Error{condition=null, description='null', info=null}]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SessionHandler - onSessionRemoteClose closing a local session for connectionId[MM_9999990000007], entityName[ehub_dev], condition[null], description[null]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.ReactorConnection - connectionId[MM_9999990000007] sessionName[ehub_dev]: Error occurred. Removing and disposing session.
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.ReactorSession - connectionId[MM_9999990000007], sessionId[ehub_dev], errorCondition[n/a]: Disposing of session.
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SendLinkHandler - onLinkRemoteClose connectionId[MM_9999990000007], linkName[cbs:sender], errorCondition[amqp:connection:forced], errorDescription[The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:9999b13b12345f12a12d, SystemTracker:gateway5, Timestamp:2021-01-13T12:55:26]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SendLinkHandler - processOnClose connectionId[MM_9999990000007], linkName[cbs:sender], errorCondition[amqp:connection:forced], errorDescription[The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:9999b13b12345f12a12d, SystemTracker:gateway5, Timestamp:2021-01-13T12:55:26]
2021-01-13 18:25:26 [single-1 ] WARN c.a.c.a.i.RequestResponseChannel - Retry #1. Transient error occurred. Retrying after 60000 ms.
The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:9999b13b12345f12a12d, SystemTracker:gateway5, Timestamp:2021-01-13T12:55:26, errorContext[NAMESPACE: ehub_nspace.servicebus.windows.net, PATH: $cbs, REFERENCE_ID: cbs:sender, LINK_CREDIT: 98]
2021-01-13 18:25:26 [single-1 ] ERROR c.a.c.a.i.RequestResponseChannel - cbs - Exception in RequestResponse links. Disposing and clearing unconfirmed sends.
The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:9999b13b12345f12a12d, SystemTracker:gateway5, Timestamp:2021-01-13T12:55:26, errorContext[NAMESPACE: ehub_nspace.servicebus.windows.net, PATH: $cbs, REFERENCE_ID: cbs:sender, LINK_CREDIT: 98]
[BUG] sending batch every minute but still gets error intermittently "The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'" - **Describe the bug**
I am sending a message batch to Event Hub via EventHubProducerClient every minute. But intermittently the link tracker still closes the link, reporting no activity (send or receive) for more than the allowed 5 minutes (300000 ms):
The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'
***Exception or Stack Trace***
```
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SessionHandler - onSessionRemoteClose connectionId[ehub_dev], entityName[MM_9999990000007], condition[Error{condition=null, description='null', info=null}]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SessionHandler - onSessionRemoteClose closing a local session for connectionId[MM_9999990000007], entityName[ehub_dev], condition[null], description[null]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.ReactorConnection - connectionId[MM_9999990000007] sessionName[ehub_dev]: Error occurred. Removing and disposing session.
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.ReactorSession - connectionId[MM_9999990000007], sessionId[ehub_dev], errorCondition[n/a]: Disposing of session.
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SendLinkHandler - onLinkRemoteClose connectionId[MM_9999990000007], linkName[cbs:sender], errorCondition[amqp:connection:forced], errorDescription[The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:9999b13b12345f12a12d, SystemTracker:gateway5, Timestamp:2021-01-13T12:55:26]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SendLinkHandler - processOnClose connectionId[MM_9999990000007], linkName[cbs:sender], errorCondition[amqp:connection:forced], errorDescription[The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:9999b13b12345f12a12d, SystemTracker:gateway5, Timestamp:2021-01-13T12:55:26]
2021-01-13 18:25:26 [single-1 ] WARN c.a.c.a.i.RequestResponseChannel - Retry #1. Transient error occurred. Retrying after 60000 ms.
The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:9999b13b12345f12a12d, SystemTracker:gateway5, Timestamp:2021-01-13T12:55:26, errorContext[NAMESPACE: ehub_nspace.servicebus.windows.net, PATH: $cbs, REFERENCE_ID: cbs:sender, LINK_CREDIT: 98]
2021-01-13 18:25:26 [single-1 ] ERROR c.a.c.a.i.RequestResponseChannel - cbs - Exception in RequestResponse links. Disposing and clearing unconfirmed sends.
The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:9999b13b12345f12a12d, SystemTracker:gateway5, Timestamp:2021-01-13T12:55:26, errorContext[NAMESPACE: ehub_nspace.servicebus.windows.net, PATH: $cbs, REFERENCE_ID: cbs:sender, LINK_CREDIT: 98]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.ReceiveLinkHandler - onLinkRemoteClose connectionId[MM_9999990000007], linkName[cbs:receiver], errorCondition[amqp:connection:forced], errorDescription[The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:9999b13b12345f12a12d, SystemTracker:gateway5, Timestamp:2021-01-13T12:55:26]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.ReceiveLinkHandler - processOnClose connectionId[MM_9999990000007], linkName[cbs:receiver], errorCondition[amqp:connection:forced], errorDescription[The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:9999b13b12345f12a12d, SystemTracker:gateway5, Timestamp:2021-01-13T12:55:26]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SessionHandler - onSessionRemoteClose connectionId[cbs-session], entityName[MM_9999990000007], condition[Error{condition=null, description='null', info=null}]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SessionHandler - onSessionRemoteClose closing a local session for connectionId[MM_9999990000007], entityName[cbs-session], condition[null], description[null]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.ReactorConnection - connectionId[MM_9999990000007] sessionName[cbs-session]: Error occurred. Removing and disposing session.
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.ReactorSession - connectionId[MM_9999990000007], sessionId[cbs-session], errorCondition[n/a]: Disposing of session.
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.ConnectionHandler - onConnectionRemoteClose hostname[dummy.servicebus.windows.net:443], connectionId[MM_9999990000007], errorCondition[amqp:connection:forced], errorDescription[The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:9999b13b12345f12a12d, SystemTracker:gateway5, Timestamp:2021-01-13T12:55:26]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SendLinkHandler - onLinkLocalClose connectionId[MM_9999990000007], linkName[cbs:sender], errorCondition[amqp:connection:forced], errorDescription[The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:9999b13b12345f12a12d, SystemTracker:gateway5, Timestamp:2021-01-13T12:55:26]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.ReceiveLinkHandler - onLinkLocalClose connectionId[MM_9999990000007], linkName[cbs:receiver], errorCondition[null], errorDescription[null]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.ConnectionHandler - onConnectionLocalClose hostname[dummy.servicebus.windows.net:443], connectionId[MM_9999990000007], errorCondition[null], errorDescription[null]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.ConnectionHandler - onConnectionUnbound hostname[dummy.servicebus.windows.net:443], connectionId[MM_9999990000007], state[CLOSED], remoteState[CLOSED]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SendLinkHandler - onLinkFinal connectionId[MM_9999990000007], linkName[ehub_dev]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SessionHandler - onSessionFinal connectionId[MM_9999990000007], entityName[ehub_dev], condition[null], description[null]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SendLinkHandler - onLinkFinal connectionId[MM_9999990000007], linkName[cbs:sender]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.ReceiveLinkHandler - onLinkFinal connectionId[MM_9999990000007], linkName[cbs:receiver]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.SessionHandler - onSessionFinal connectionId[MM_9999990000007], entityName[cbs-session], condition[null], description[null]
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.handler.ConnectionHandler - onConnectionFinal hostname[dummy.servicebus.windows.net:443], connectionId[MM_9999990000007], errorCondition[null], errorDescription[null]
2021-01-13 18:25:26 [single-1 ] INFO c.a.m.e.i.EventHubConnectionProcessor - namespace[ehub_nspace.servicebus.windows.net] entityPath[ehub_dev]: Channel is closed.
2021-01-13 18:25:26 [single-1 ] INFO c.a.m.e.i.EventHubReactorAmqpConnection - connectionId[MM_9999990000007]: Disposing of connection.
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.ReactorConnection - connectionId[MM_9999990000007], errorCondition[n/a]: Disposing of ReactorConnection.
2021-01-13 18:25:26 [single-1 ] INFO c.a.c.a.i.AmqpExceptionHandler - Shutdown received: ReactorExecutor.close() was called., isTransient[false], initiatedByClient[true]
```
**Repro Steps**
Run code that keeps sending a batch with a dummy message to Event Hub every minute.
**Expected behavior**
The service should not close the connection by itself while the process keeps sending a batch continuously, i.e. every minute.
**Setup (please complete the following information):**
- OS: MAC
- IDE : IntelliJ
- Version of the Library used : azure-messaging-eventhub 5.3.1 and azure-eventhub 3.2.2
**Additional context**
Add any other context about the problem here.
**Information Checklist**
Kindly make sure that you have added all of the information above and checked off the required fields; otherwise we will treat the issue as an incomplete report
- [X] Bug Description Added
- [X] Repro Steps Added
- [X] Setup information Added
281,302 | 30,888,594,313 | IssuesEvent | 2023-08-04 01:33:05 | Nivaskumark/kernel_v4.1.15 | https://api.github.com/repos/Nivaskumark/kernel_v4.1.15 | reopened | WS-2021-0545 (Medium) detected in multiple libraries | Mend: dependency security vulnerability | ## WS-2021-0545 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.6</b>, <b>linuxlinux-4.6</b>, <b>linuxlinux-4.6</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
perf report: Fix memory leaks around perf_tip()
This is an automated ID intended to aid in discovery of potential security vulnerabilities. The actual impact and attack plausibility have not yet been proven.
This ID is fixed in Linux Kernel version v5.15.7 by commit 71e284dcebecb9fd204ff11097469cc547723ad1. For more details please see the references link.
<p>Publish Date: 2021-12-19
<p>URL: <a href=https://github.com/gregkh/linux/commit/71e284dcebecb9fd204ff11097469cc547723ad1>WS-2021-0545</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GSD-2021-1002560">https://osv.dev/vulnerability/GSD-2021-1002560</a></p>
<p>Release Date: 2021-12-19</p>
<p>Fix Resolution: v5.15.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
636,331 | 20,597,584,276 | IssuesEvent | 2022-03-05 19:00:55 | grage03/prello | https://api.github.com/repos/grage03/prello | closed | Router | frontend low priority | It is necessary to add or correct the following points:
- [x] Going to another page should use the name, not the address
- [x] UILink
2,038 | 22,798,371,313 | IssuesEvent | 2022-07-11 01:32:45 | StormSurgeLive/asgs | https://api.github.com/repos/StormSurgeLive/asgs | closed | Reduce chattiness and checking remote permissions in opendap_post2.sh | bug enhancement important non-critical opendap reliability | When posting to most remote servers (not fortytwo), the opendap_post2.sh script must check to see if it is creating directories, and if so, will most likely need to change permissions on those directories so that other Operators can also post there. However, it is often the case that it is not the entire directory hierarchy that can be changed (some directories are not owned by the Operator doing the posting). This causes the attempt to change permissions in those directories to fail and then retry (up to 10 times each time). This results in slow posting and huge log files that are very hard to parse or post.
On remote servers where permissions are an issue, the attempt to change permissions should be limited to cases where it is actually needed and will succeed.
244,109 | 20,610,380,628 | IssuesEvent | 2022-03-07 07:57:31 | hoppscotch/hoppscotch | https://api.github.com/repos/hoppscotch/hoppscotch | opened | [bug]: can not start via docker | bug need testing | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behavior
ERR_PNPM_RECURSIVE_RUN_NO_SCRIPT None of the packages has a "do-prod-start" script
### Steps to reproduce
docker run --rm --name hoppscotch -p 3000:3000 hoppscotch/hoppscotch:latest
> hoppscotch-app@2.2.1 start /app
> pnpm -r do-prod-start
Scope: all 4 workspace projects
ERR_PNPM_RECURSIVE_RUN_NO_SCRIPT None of the packages has a "do-prod-start" script
### Environment
Release
### Version
Self-hosted
128,078 | 5,048,334,738 | IssuesEvent | 2016-12-20 12:33:38 | TASVideos/BizHawk | https://api.github.com/repos/TASVideos/BizHawk | closed | snes BSX support | Assigned-zeromus auto-migrated Core-BSNES Priority-Low Type-Enhancement | ```
i'm not sure BSX roms can be detected. if so, detect it.
we may need to put them in the gamedb.
otherwise, we need a special rom load option.
bsnes 0.87 requires specially selected load options.
```
Original issue reported on code.google.com by `zero...@zeromus.org` on 20 Apr 2014 at 11:15
1,739 | 19,320,089,174 | IssuesEvent | 2021-12-14 03:50:11 | livepeer/go-livepeer | https://api.github.com/repos/livepeer/go-livepeer | closed | improve contextual logging | site reliability | It's very difficult these days to use go-livepeer logs to ascertain what's going wrong with a particular broadcast, as most log lines carry no identifying context.
I propose the following for our logs:
1. Any log that occurs within the context of a stream should have a `manifestId` listed. Ditto with `sessionId` for logging that occurs within the context.
2. Make sure all logging uses consistent formatting, e.g. `manifestId=ABC123` everywhere.
I'm not sure the best way to make this happen. Maybe we'd start using a context object in more places, so it can get passed down? It'd be good if it could make it all the way down to the LPMS code as well. | True | improve contextual logging - It's very difficult these days to use go-livepeer logs to ascertain what's going wrong with a particular broadcast, as most logs.
I propose the following for our logs:
1. Any log that occurs within the context of a stream should have a `manifestId` listed. Ditto with `sessionId` for logging that occurs within the context.
2. Make sure all logging uses consistent formatting, e.g. `manifestId=ABC123` everywhere.
I'm not sure the best way to make this happen. Maybe we'd start using a context object in more places, so it can get passed down? It'd be good if it could make it all the way down to the LPMS code as well. | reli | improve contextual logging it s very difficult these days to use go livepeer logs to ascertain what s going wrong with a particular broadcast as most logs i propose the following for our logs any log that occurs within the context of a stream should have a manifestid listed ditto with sessionid for logging that occurs within the context make sure all logging uses consistent formatting e g manifestid everywhere i m not sure the best way to make this happen maybe we d start using a context object in more places so it can get passed down it d be good if it could make it all the way down to the lpms code as well | 1 |
97,832 | 20,425,162,811 | IssuesEvent | 2022-02-24 02:29:03 | haproxy/haproxy | https://api.github.com/repos/haproxy/haproxy | opened | 16 new coverity findings | type: code-report | ### Tool Name and Version
coverity
### Code Report
```plain
** CID 1475448: Null pointer dereferences (FORWARD_NULL)
/src/hlua.c: 7203 in hlua_httpclient_snd_yield()
________________________________________________________________________________________________________
*** CID 1475448: Null pointer dereferences (FORWARD_NULL)
/src/hlua.c: 7203 in hlua_httpclient_snd_yield()
7197 /* we return a "res" object */
7198 lua_newtable(L);
7199
7200 lua_pushstring(L, "body");
7201 luaL_buffinit(L, &hlua_hc->b);
7202
>>> CID 1475448: Null pointer dereferences (FORWARD_NULL)
>>> Dereferencing null pointer "hlua".
7203 task_wakeup(hlua->task, TASK_WOKEN_MSG);
7204 MAY_LJMP(hlua_yieldk(L, 0, 0, hlua_httpclient_rcv_yield, TICK_ETERNITY, 0));
7205
7206 return 1;
7207 }
7208
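/* Editor's note (illustration only, not haproxy code): Coverity's
 * FORWARD_NULL checker fires when a pointer that some path can leave
 * NULL -- here the 'hlua' context -- is later dereferenced without a
 * guard. A minimal self-contained sketch of the pattern and the usual
 * fix; the names lookup_task()/wake_task() are hypothetical:
 */
#include <stdio.h>
#include <stddef.h>

struct task { int id; };

/* A lookup that can fail, like a context fetch returning NULL. */
static struct task *lookup_task(int ok)
{
	static struct task t = { 42 };
	return ok ? &t : NULL;
}

static int wake_task(int ok)
{
	struct task *t = lookup_task(ok);

	if (!t)              /* the guard whose absence FORWARD_NULL reports */
		return -1;
	return t->id;        /* safe: t is provably non-NULL here */
}

int main(void)
{
	printf("%d %d\n", wake_task(1), wake_task(0));
	return 0;
}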
** CID 1475447: Null pointer dereferences (FORWARD_NULL)
________________________________________________________________________________________________________
*** CID 1475447: Null pointer dereferences (FORWARD_NULL)
/src/hlua.c: 6633 in hlua_http_msg_insert_data()
6627 if (offset < output || offset > output + input) {
6628 lua_pushfstring(L, "offset out of range.");
6629 WILL_LJMP(lua_error(L));
6630 }
6631 }
6632
>>> CID 1475447: Null pointer dereferences (FORWARD_NULL)
>>> Passing null pointer "filter" to "_hlua_http_msg_insert", which dereferences it.
6633 ret = _hlua_http_msg_insert(msg, filter, ist2(str, sz), offset);
6634 lua_pushinteger(L, ret);
6635 return 1;
6636 }
6637
6638 /* Removes a given amount of data from the HTTP message at a given offset. By
** CID 1475446: Null pointer dereferences (FORWARD_NULL)
________________________________________________________________________________________________________
*** CID 1475446: Null pointer dereferences (FORWARD_NULL)
/src/hlua.c: 6557 in hlua_http_msg_append()
6551
6552 str = MAY_LJMP(luaL_checklstring(L, 2, &sz));
6553 filter = hlua_http_msg_filter(L, 1, msg, &offset, &len);
6554 if (!filter || !hlua_filter_from_payload(filter))
6555 WILL_LJMP(lua_error(L));
6556
>>> CID 1475446: Null pointer dereferences (FORWARD_NULL)
>>> Passing null pointer "filter" to "_hlua_http_msg_insert", which dereferences it.
6557 ret = _hlua_http_msg_insert(msg, filter, ist2(str, sz), offset+len);
6558 lua_pushinteger(L, ret);
6559 return 1;
6560 }
6561
6562 /* Prepends a string to the HTTP message, before all existing DATA blocks. It
** CID 1475445: Code maintainability issues (UNUSED_VALUE)
/src/connection.c: 1658 in list_mux_proto()
________________________________________________________________________________________________________
*** CID 1475445: Code maintainability issues (UNUSED_VALUE)
/src/connection.c: 1658 in list_mux_proto()
1652 done |= fprintf(out, "%sCLEAN_ABRT", done ? "|" : "");
1653
1654 if (item->mux->flags & MX_FL_HOL_RISK)
1655 done |= fprintf(out, "%sHOL_RISK", done ? "|" : "");
1656
1657 if (item->mux->flags & MX_FL_NO_UPG)
>>> CID 1475445: Code maintainability issues (UNUSED_VALUE)
>>> Assigning value from "done | fprintf(out, "%sNO_UPG", (done ? "|" : ""))" to "done" here, but that stored value is overwritten before it can be used.
1658 done |= fprintf(out, "%sNO_UPG", done ? "|" : "");
1659
1660 fprintf(out, "\n");
1661 }
1662 }
1663
** CID 1475444: Null pointer dereferences (FORWARD_NULL)
________________________________________________________________________________________________________
*** CID 1475444: Null pointer dereferences (FORWARD_NULL)
/src/hlua.c: 6755 in hlua_http_msg_set_data()
6749 set:
6750 /* Be sure we can copied the string once input data will be removed. */
6751 htx = htx_from_buf(&msg->chn->buf);
6752 if (sz > htx_free_data_space(htx) + len)
6753 lua_pushinteger(L, -1);
6754 else {
>>> CID 1475444: Null pointer dereferences (FORWARD_NULL)
>>> Passing null pointer "filter" to "_hlua_http_msg_delete", which dereferences it.
6755 _hlua_http_msg_delete(msg, filter, offset, len);
6756 ret = _hlua_http_msg_insert(msg, filter, ist2(str, sz), offset);
6757 lua_pushinteger(L, ret);
6758 }
6759 return 1;
6760 }
** CID 1475443: Null pointer dereferences (FORWARD_NULL)
________________________________________________________________________________________________________
*** CID 1475443: Null pointer dereferences (FORWARD_NULL)
/src/htx.c: 747 in htx_xfer_blks()
741 if (unlikely(dstref)) {
742 /* Headers or trailers part was partially xferred, so rollback the copy
743 * by removing all block between <dstref> and <dstblk>, both included.
744 */
745 while (dstref && dstref != dstblk)
746 dstref = htx_remove_blk(dst, dstref);
>>> CID 1475443: Null pointer dereferences (FORWARD_NULL)
>>> Passing null pointer "dstblk" to "htx_remove_blk", which dereferences it.
747 htx_remove_blk(dst, dstblk);
748
749 /* <dst> HTX message is empty, it means the headers or trailers
750 * part is too big to be copied at once.
751 */
752 if (htx_is_empty(dst))
** CID 1475442: Null pointer dereferences (FORWARD_NULL)
________________________________________________________________________________________________________
*** CID 1475442: Null pointer dereferences (FORWARD_NULL)
/src/hlua.c: 7203 in hlua_httpclient_snd_yield()
7197 /* we return a "res" object */
7198 lua_newtable(L);
7199
7200 lua_pushstring(L, "body");
7201 luaL_buffinit(L, &hlua_hc->b);
7202
>>> CID 1475442: Null pointer dereferences (FORWARD_NULL)
>>> Passing null pointer "hlua->task" to "_task_wakeup", which dereferences it.
7203 task_wakeup(hlua->task, TASK_WOKEN_MSG);
7204 MAY_LJMP(hlua_yieldk(L, 0, 0, hlua_httpclient_rcv_yield, TICK_ETERNITY, 0));
7205
7206 return 1;
7207 }
7208
** CID 1475441: Null pointer dereferences (FORWARD_NULL)
________________________________________________________________________________________________________
*** CID 1475441: Null pointer dereferences (FORWARD_NULL)
/src/hlua.c: 6800 in hlua_http_msg_send()
6794 htx = htx_from_buf(&msg->chn->buf);
6795 if (sz > htx_free_data_space(htx)) {
6796 lua_pushinteger(L, -1);
6797 return 1;
6798 }
6799
>>> CID 1475441: Null pointer dereferences (FORWARD_NULL)
>>> Passing null pointer "filter" to "_hlua_http_msg_insert", which dereferences it.
6800 ret = _hlua_http_msg_insert(msg, filter, ist2(str, sz), offset);
6801 if (ret > 0) {
6802 struct hlua_flt_ctx *flt_ctx = filter->ctx;
6803
6804 FLT_OFF(filter, msg->chn) += ret;
6805 flt_ctx->cur_len[CHN_IDX(msg->chn)] -= ret;
** CID 1475440: Control flow issues (DEADCODE)
/src/connection.c: 1649 in list_mux_proto()
________________________________________________________________________________________________________
*** CID 1475440: Control flow issues (DEADCODE)
/src/connection.c: 1649 in list_mux_proto()
1643 done = 0;
1644
1645 /* note: the block below could be simplied using macros but for only
1646 * 4 flags it's not worth it.
1647 */
1648 if (item->mux->flags & MX_FL_HTX)
>>> CID 1475440: Control flow issues (DEADCODE)
>>> Execution cannot reach the expression ""|"" inside this statement: "done |= fprintf(out, "%sHTX...".
1649 done |= fprintf(out, "%sHTX", done ? "|" : "");
1650
1651 if (item->mux->flags & MX_FL_CLEAN_ABRT)
1652 done |= fprintf(out, "%sCLEAN_ABRT", done ? "|" : "");
1653
1654 if (item->mux->flags & MX_FL_HOL_RISK)
** CID 1475439: Null pointer dereferences (FORWARD_NULL)
________________________________________________________________________________________________________
*** CID 1475439: Null pointer dereferences (FORWARD_NULL)
/src/hlua.c: 6587 in hlua_http_msg_prepend()
6581
6582 str = MAY_LJMP(luaL_checklstring(L, 2, &sz));
6583 filter = hlua_http_msg_filter(L, 1, msg, &offset, &len);
6584 if (!filter || !hlua_filter_from_payload(filter))
6585 WILL_LJMP(lua_error(L));
6586
>>> CID 1475439: Null pointer dereferences (FORWARD_NULL)
>>> Passing null pointer "filter" to "_hlua_http_msg_insert", which dereferences it.
6587 ret = _hlua_http_msg_insert(msg, filter, ist2(str, sz), offset);
6588 lua_pushinteger(L, ret);
6589 return 1;
6590 }
6591
6592 /* Inserts a string to the HTTP message at a given offset. By default the string
** CID 1475438: Null pointer dereferences (FORWARD_NULL)
/src/hlua.c: 6853 in hlua_http_msg_forward()
________________________________________________________________________________________________________
*** CID 1475438: Null pointer dereferences (FORWARD_NULL)
/src/hlua.c: 6853 in hlua_http_msg_forward()
6847
6848 ret = fwd;
6849 if (ret > len)
6850 ret = len;
6851
6852 if (ret) {
>>> CID 1475438: Null pointer dereferences (FORWARD_NULL)
>>> Dereferencing null pointer "filter".
6853 struct hlua_flt_ctx *flt_ctx = filter->ctx;
6854
6855 FLT_OFF(filter, msg->chn) += ret;
6856 flt_ctx->cur_off[CHN_IDX(msg->chn)] += ret;
6857 flt_ctx->cur_len[CHN_IDX(msg->chn)] -= ret;
6858 }
** CID 1475437: Null pointer dereferences (FORWARD_NULL)
________________________________________________________________________________________________________
*** CID 1475437: Null pointer dereferences (FORWARD_NULL)
/src/hlua.c: 6688 in hlua_http_msg_del_data()
6682 if (len < 0 || offset + len > output + input) {
6683 lua_pushfstring(L, "length out of range.");
6684 WILL_LJMP(lua_error(L));
6685 }
6686 }
6687
>>> CID 1475437: Null pointer dereferences (FORWARD_NULL)
>>> Passing null pointer "filter" to "_hlua_http_msg_delete", which dereferences it.
6688 _hlua_http_msg_delete(msg, filter, offset, len);
6689
6690 end:
6691 lua_pushinteger(L, len);
6692 return 1;
6693 }
** CID 1446549: Null pointer dereferences (FORWARD_NULL)
/src/backend.c: 1638 in connect_server()
________________________________________________________________________________________________________
*** CID 1446549: Null pointer dereferences (FORWARD_NULL)
/src/backend.c: 1638 in connect_server()
1632
1633 /* Currently there seems to be no known cases of xprt ready
1634 * without the mux installed here.
1635 */
1636 BUG_ON(!srv_conn->mux);
1637
>>> CID 1446549: Null pointer dereferences (FORWARD_NULL)
>>> Dereferencing null pointer "srv_conn->mux".
1638 if (!(srv_conn->mux->ctl(srv_conn, MUX_STATUS, NULL) & MUX_STATUS_READY))
1639 s->flags |= SF_SRV_REUSED_ANTICIPATED;
1640 }
1641
1642 /* flag for logging source ip/port */
1643 if (strm_fe(s)->options2 & PR_O2_SRC_ADDR)
** CID 1445800: (FORWARD_NULL)
/src/mux_h2.c: 5364 in h2s_bck_make_req_headers()
/src/mux_h2.c: 5467 in h2s_bck_make_req_headers()
________________________________________________________________________________________________________
*** CID 1445800: (FORWARD_NULL)
/src/mux_h2.c: 5364 in h2s_bck_make_req_headers()
5358 /* Skip header if same name is used to add the server name */
5359 if ((h2c->flags & H2_CF_IS_BACK) && h2c->proxy->server_id_hdr_name &&
5360 isteq(list[hdr].n, ist2(h2c->proxy->server_id_hdr_name, h2c->proxy->server_id_hdr_len)))
5361 continue;
5362
5363 /* Convert connection: upgrade to Extended connect from rfc 8441 */
>>> CID 1445800: (FORWARD_NULL)
>>> Dereferencing null pointer "sl".
5364 if ((sl->flags & HTX_SL_F_CONN_UPG) && isteqi(list[hdr].n, ist("connection"))) {
5365 /* rfc 7230 #6.1 Connection = list of tokens */
5366 struct ist connection_ist = list[hdr].v;
5367 do {
5368 if (isteqi(iststop(connection_ist, ','),
5369 ist("upgrade"))) {
/src/mux_h2.c: 5467 in h2s_bck_make_req_headers()
5461 /* len: 0x000000 (fill later), type: 1(HEADERS), flags: ENDH=4 */
5462 memcpy(outbuf.area, "\x00\x00\x00\x01\x04", 5);
5463 write_n32(outbuf.area + 5, h2s->id); // 4 bytes
5464 outbuf.data = 9;
5465
5466 /* encode the method, which necessarily is the first one */
>>> CID 1445800: (FORWARD_NULL)
>>> Dereferencing null pointer "sl".
5467 if (!hpack_encode_method(&outbuf, sl->info.req.meth, meth)) {
5468 if (b_space_wraps(mbuf))
5469 goto realign_again;
5470 goto full;
5471 }
5472
** CID 1437640: (OVERRUN)
/src/hlua.c: 10191 in hlua_register_cli()
/src/hlua.c: 10195 in hlua_register_cli()
________________________________________________________________________________________________________
*** CID 1437640: (OVERRUN)
/src/hlua.c: 10191 in hlua_register_cli()
10185 memset(kw, 0, sizeof(kw));
10186 while (lua_next(L, 1) != 0) {
10187 if (index >= CLI_PREFIX_KW_NB)
10188 WILL_LJMP(luaL_argerror(L, 1, "1st argument must be a table with a maximum of 5 entries"));
10189 if (lua_type(L, -1) != LUA_TSTRING)
10190 WILL_LJMP(luaL_argerror(L, 1, "1st argument must be a table filled with strings"));
>>> CID 1437640: (OVERRUN)
>>> Overrunning array "kw" of 5 8-byte elements at element index 5 (byte offset 47) using index "index" (which evaluates to 5).
10191 kw[index] = lua_tostring(L, -1);
10192 if (index == 0)
10193 chunk_printf(trash, "%s", kw[index]);
10194 else
10195 chunk_appendf(trash, " %s", kw[index]);
10196 index++;
/src/hlua.c: 10195 in hlua_register_cli()
10189 if (lua_type(L, -1) != LUA_TSTRING)
10190 WILL_LJMP(luaL_argerror(L, 1, "1st argument must be a table filled with strings"));
10191 kw[index] = lua_tostring(L, -1);
10192 if (index == 0)
10193 chunk_printf(trash, "%s", kw[index]);
10194 else
>>> CID 1437640: (OVERRUN)
>>> Overrunning array "kw" of 5 8-byte elements at element index 5 (byte offset 47) using index "index" (which evaluates to 5).
10195 chunk_appendf(trash, " %s", kw[index]);
10196 index++;
10197 lua_pop(L, 1);
10198 }
10199 cli_kw = cli_find_kw_exact((char **)kw);
10200 if (cli_kw != NULL) {
** CID 1299655: Insecure data handling (TAINTED_STRING)
________________________________________________________________________________________________________
*** CID 1299655: Insecure data handling (TAINTED_STRING)
/src/haproxy.c: 2954 in main()
2948 RUN_INITCALLS(STG_REGISTER);
2949
2950 /* now's time to initialize early boot variables */
2951 init_early(argc, argv);
2952
2953 /* handles argument parsing */
>>> CID 1299655: Insecure data handling (TAINTED_STRING)
>>> Passing tainted string "**argv" to "init_args", which cannot accept tainted data.
2954 init_args(argc, argv);
2955
2956 RUN_INITCALLS(STG_ALLOC);
2957 RUN_INITCALLS(STG_POOL);
2958 RUN_INITCALLS(STG_INIT);
2959
```
### Additional Information
_No response_
### Output of `haproxy -vv`
```plain
no
```
array kw of byte elements at element index byte offset using index index which evaluates to chunk appendf trash s kw index lua pop l cli kw cli find kw exact char kw if cli kw null cid insecure data handling tainted string cid insecure data handling tainted string src haproxy c in main run initcalls stg register now s time to initialize early boot variables init early argc argv handles argument parsing cid insecure data handling tainted string passing tainted string argv to init args which cannot accept tainted data init args argc argv run initcalls stg alloc run initcalls stg pool run initcalls stg init additional information no response output of haproxy vv plain no | 0 |
19,320 | 3,439,527,352 | IssuesEvent | 2015-12-14 10:02:53 | jgirald/ES2015C | https://api.github.com/repos/jgirald/ES2015C | closed | Grabació escena civilización Persa | Animation Character Design Documentation High Priority Persian Team A Video | ### Description
Record the video takes needed to compose a presentation video for the Persian civilization
### Acceptance criteria
Video takes prepared for composition
### Estimated effort
4h
 | 1.0 | Grabació escena civilización Persa - ### Description
Record the video takes needed to compose a presentation video for the Persian civilization
### Acceptance criteria
Video takes prepared for composition
### Estimated effort
4h
| non_reli | grabació escena civilización persa descripción grabar las tomas de video necesarias para componer un video de presentación de la civilización persa acceptance criteria tomas de video preparadas para su composición esfuerzo estimado | 0 |
73,226 | 24,512,264,062 | IssuesEvent | 2022-10-10 23:14:33 | scipy/scipy | https://api.github.com/repos/scipy/scipy | opened | BUG: stats: `_axis_nan_policy_factory` adds unnecessary markup to the function's docstring | defect scipy.stats | ### Describe your issue.
When viewing the docstring for `scipy.stats.gmean` in ipython (e.g. `In [27]: gmean?`), the `See Also` section shows up as:
```
See Also
--------
:func:`numpy.mean`
Arithmetic average
:func:`numpy.average`
Weighted average
:func:`hmean`
Harmonic mean
```
In the source code, it is
```
See Also
--------
numpy.mean : Arithmetic average
numpy.average : Weighted average
hmean : Harmonic mean
```
and that is what I expect to see in ipython.
It looks like this happens for any function decorated with `_axis_nan_policy_factory` that has a `See Also` section, e.g. `hmean`, `kruskal`, `kstat`, `moment`, etc.
The decorator shouldn't be adding the `:func:` markup and the extra blank line to the docstring.
### Reproducing Code Example
```python
n/a
```
### Error message
```shell
n/a
```
### SciPy/NumPy/Python version information
1.10.0.dev0+2029.9b03471 1.24.0.dev0+901.g384c13e3f sys.version_info(major=3, minor=10, micro=1, releaselevel='final', serial=0) | 1.0 | BUG: stats: `_axis_nan_policy_factory` adds unnecessary markup to the function's docstring - ### Describe your issue.
When viewing the docstring for `scipy.stats.gmean` in ipython (e.g. `In [27]: gmean?`), the `See Also` section shows up as:
```
See Also
--------
:func:`numpy.mean`
Arithmetic average
:func:`numpy.average`
Weighted average
:func:`hmean`
Harmonic mean
```
In the source code, it is
```
See Also
--------
numpy.mean : Arithmetic average
numpy.average : Weighted average
hmean : Harmonic mean
```
and that is what I expect to see in ipython.
It looks like this happens for any function decorated with `_axis_nan_policy_factory` that has a `See Also` section, e.g. `hmean`, `kruskal`, `kstat`, `moment`, etc.
The decorator shouldn't be adding the `:func:` markup and the extra blank line to the docstring.
### Reproducing Code Example
```python
n/a
```
### Error message
```shell
n/a
```
### SciPy/NumPy/Python version information
1.10.0.dev0+2029.9b03471 1.24.0.dev0+901.g384c13e3f sys.version_info(major=3, minor=10, micro=1, releaselevel='final', serial=0) | non_reli | bug stats axis nan policy factory adds unnecessary markup to the function s docstring describe your issue when viewing the docstring for scipy stats gmean in ipython e g in gmean the see also section shows up as see also func numpy mean arithmetic average func numpy average weighted average func hmean harmonic mean in the source code it is see also numpy mean arithmetic average numpy average weighted average hmean harmonic mean and that is what i expect to see in ipython it looks like this happens for any function decorated with axis nan policy factory that has a see also section e g hmean kruskal kstat moment etc the decorator shouldn t be adding the func markup and the extra blank line to the docstring reproducing code example python n a error message shell n a scipy numpy python version information sys version info major minor micro releaselevel final serial | 0 |
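The docstring mangling reported above—a decorator turning `name : description` entries into `:func:` markup with an extra line—can be reproduced with a small, purely illustrative Python sketch. This is not SciPy's actual `_axis_nan_policy_factory`; the decorator names and docstring are made up to show the buggy versus expected behavior:

```python
import functools

def markup_wrap(func):
    # Hypothetical buggy decorator (NOT SciPy's actual code): it rewrites
    # "name : description" entries in the docstring into Sphinx ":func:"
    # markup, producing the mangled "See Also" section reported above.
    lines = []
    for line in func.__doc__.splitlines():
        stripped = line.strip()
        if " : " in stripped:
            name, _, desc = stripped.partition(" : ")
            lines.append(f":func:`{name}`\n    {desc}")
        else:
            lines.append(line)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)

    wrapper.__doc__ = "\n".join(lines)
    return wrapper

def plain_wrap(func):
    # Expected behavior: functools.wraps copies __doc__ verbatim, so the
    # "See Also" section renders in IPython exactly as written in the source.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)

    return wrapper

def gmean(a):
    """Compute the geometric mean.

    See Also
    --------
    numpy.mean : Arithmetic average
    """
    return a
```

With `plain_wrap`, `help(gmean)` shows the `See Also` entries exactly as authored, which is the behavior the issue asks the real decorator to preserve.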
13,919 | 9,106,782,974 | IssuesEvent | 2019-02-21 01:21:07 | coreos/ignition | https://api.github.com/repos/coreos/ignition | closed | Ignition Logs Configuration to journalctl | area/security kind/friction | # Bug #
Ignition logs all parsed and fetched configuration to journalctl. This is a security risk for organizations which send all journalctl output to a central log storage. At the very least, using ignition_file for secure configurations (keys/secrets) must be warned against in the documentation.
## Operating System Version ##
CoreOS-stable-1967.5.0-hvm
## Ignition Version ##
0.28.0
## Environment ##
AWS/ap-south-1/c5.large ec2 instance
## Expected Behavior ##
Ignition should not log complete configuration to journalctl.
## Actual Behavior ##
Ignition logs complete configuration to journalctl.
The simple `journalctl --identifier=ignition --all` command mentioned in the documentation gives the following 2 traces:
https://github.com/coreos/ignition/blob/3c7dbd3888646ba49f318188b7bf41b532252144/internal/providers/util/config.go#L25
https://github.com/coreos/ignition/blob/aad24ad59393d49d1e7cdf6c4504a94615d9f0c3/internal/exec/engine.go#L264
They show up as the following:
```
Feb 13 09:08:55 localhost ignition[422]: parsing config: {
Feb 13 09:08:55 localhost ignition[422]: parsing config: {"ignition":{"config":{"replace":{"source":"s3://eco-example-config/config.json","verification":{"hash":"sha512-9ff7f8f0bc00d37f32e013c792c3411b18db3dc9333881003ecc0f307150301a188b8fc9b6bc1016e9498db2be57f679eaaab86080ce814a8ac336981dc2a76c"}}},"timeouts":{},"version":"2.1.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}
Feb 13 09:09:49 localhost ignition[472]: parsing config: {
Feb 13 05:09:49 localhost ignition[417]: fetched referenced config: {"ignition":{"config":{"append":[{"source":"data:text/plain;charset=utf-8;base64,eyJpZ25pd>
Feb 13 05:09:49 localhost ignition[417]: fetched referenced config: {"ignition":{"config":{},"timeouts":{},"version":"2.1.0"},"networkd":{},"passwd":{},"stora>
Feb 13 05:09:49 localhost ignition[417]: disks: op(1): [started] waiting for udev to settle
```
While both are marked as Debug, the default configuration on latest CoreOS (CoreOS-stable-1967.5.0-hvm (ami-09642e32f99945765)) seems to be logging this. | True | Ignition Logs Configuration to journalctl - # Bug #
Ignition logs all parsed and fetched configuration to journalctl. This is a security risk for organizations which send all journalctl output to a central log storage. At the very least, using ignition_file for secure configurations (keys/secrets) must be warned against in the documentation.
## Operating System Version ##
CoreOS-stable-1967.5.0-hvm
## Ignition Version ##
0.28.0
## Environment ##
AWS/ap-south-1/c5.large ec2 instance
## Expected Behavior ##
Ignition should not log complete configuration to journalctl.
## Actual Behavior ##
Ignition logs complete configuration to journalctl.
The simple `journalctl --identifier=ignition --all` command mentioned in the documentation gives the following 2 traces:
https://github.com/coreos/ignition/blob/3c7dbd3888646ba49f318188b7bf41b532252144/internal/providers/util/config.go#L25
https://github.com/coreos/ignition/blob/aad24ad59393d49d1e7cdf6c4504a94615d9f0c3/internal/exec/engine.go#L264
They show up as the following:
```
Feb 13 09:08:55 localhost ignition[422]: parsing config: {
Feb 13 09:08:55 localhost ignition[422]: parsing config: {"ignition":{"config":{"replace":{"source":"s3://eco-example-config/config.json","verification":{"hash":"sha512-9ff7f8f0bc00d37f32e013c792c3411b18db3dc9333881003ecc0f307150301a188b8fc9b6bc1016e9498db2be57f679eaaab86080ce814a8ac336981dc2a76c"}}},"timeouts":{},"version":"2.1.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}
Feb 13 09:09:49 localhost ignition[472]: parsing config: {
Feb 13 05:09:49 localhost ignition[417]: fetched referenced config: {"ignition":{"config":{"append":[{"source":"data:text/plain;charset=utf-8;base64,eyJpZ25pd>
Feb 13 05:09:49 localhost ignition[417]: fetched referenced config: {"ignition":{"config":{},"timeouts":{},"version":"2.1.0"},"networkd":{},"passwd":{},"stora>
Feb 13 05:09:49 localhost ignition[417]: disks: op(1): [started] waiting for udev to settle
```
While both are marked as Debug, the default configuration on latest CoreOS (CoreOS-stable-1967.5.0-hvm (ami-09642e32f99945765)) seems to be logging this. | non_reli | ignition logs configuration to journalctl bug ignition logs all parsed and fetched configuration to journalctl this is a security risk for organizations which send all journalctl output to a central log storage at the very least using ignition file for secure configurations keys secrets must be warned against in the documentation operating system version coreos stable hvm ignition version environment aws ap south large instance expected behavior ignition should not log complete configuration to journalctl actual behavior ignition logs complete configuration to journalctl the simple journalctl identifier ignition all command mentioned in the documentation gives the following traces they show up as the following feb localhost ignition parsing config feb localhost ignition parsing config ignition config replace source eco example config config json verification hash timeouts version networkd passwd storage systemd feb localhost ignition parsing config feb localhost ignition fetched referenced config ignition config append source data text plain charset utf feb localhost ignition fetched referenced config ignition config timeouts version networkd passwd stora feb localhost ignition disks op waiting for udev to settle while both are marked as debug the default configuration on latest coreos coreos stable hvm ami seems to be logging this | 0 |
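Ignition itself is written in Go; as a language-neutral illustration of the mitigation the report implies—never sending raw fetched configs to a central log store—here is a Python sketch that redacts values under secret-bearing keys before a config is logged. The key names and config structure are assumptions for the example, not Ignition's real schema:

```python
import json

# Illustrative key names only; Ignition's actual schema differs.
SENSITIVE_KEYS = {"source", "contents", "sshauthorizedkeys", "passwordhash"}

def redact(node):
    # Walk the parsed config and replace any value stored under a
    # sensitive key with a placeholder before the config is logged.
    if isinstance(node, dict):
        return {k: "<redacted>" if k.lower() in SENSITIVE_KEYS else redact(v)
                for k, v in node.items()}
    if isinstance(node, list):
        return [redact(v) for v in node]
    return node

config = json.loads(
    '{"storage": {"files": [{"path": "/etc/secret",'
    ' "contents": {"source": "data:,hunter2"}}]}}'
)
safe_log_line = "parsing config: " + json.dumps(redact(config))
```

A log pipeline that only ever sees `safe_log_line` retains the config's shape for debugging while keeping key material out of central storage.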
538 | 8,432,632,329 | IssuesEvent | 2018-10-17 03:06:54 | dotnet/project-system | https://api.github.com/repos/dotnet/project-system | closed | NRE in TaskDelayScheduler | Bug Project-System-CPS Tenet-Reliability | Observed debugging 031b0db6d.

`cancel` is `true`, so line 131 must have run, and `PendingUpdateTokenSource` would not have been `null`.
To have `cts` equal `null` on line 135, `PendingUpdateTokenSource` must have been cleared between 131 and 133. Presumably a callback registered on cancellation ran synchronously on the calling thread and cleared it, which would have entered the lock.
A simple fix for the NRE would be to copy `PendingUpdateTokenSource` to a local earlier and reuse it for reads. However, there's possibly a deeper issue here.
```
System.NullReferenceException
HResult=0x80004003
Message=Object reference not set to an instance of an object.
Source=Microsoft.VisualStudio.ProjectSystem.Managed
StackTrace:
at Microsoft.VisualStudio.Threading.Tasks.TaskDelayScheduler.ClearPendingUpdates(Boolean cancel) in D:\repos\project-system\src\Microsoft.VisualStudio.ProjectSystem.Managed\Threading\Tasks\TaskDelayScheduler.cs:line 135
at Microsoft.VisualStudio.Threading.Tasks.TaskDelayScheduler.ScheduleAsyncTask(Func`2 asyncFunctionToCall) in D:\repos\project-system\src\Microsoft.VisualStudio.ProjectSystem.Managed\Threading\Tasks\TaskDelayScheduler.cs:line 63
at Microsoft.VisualStudio.ProjectSystem.VS.NuGet.ProjectAssetFileWatcher.FilesChanged(UInt32 cChanges, String[] rgpszFile, UInt32[] rggrfChange) in D:\repos\project-system\src\Microsoft.VisualStudio.ProjectSystem.Managed.VS\ProjectSystem\VS\NuGet\ProjectAssetFileWatcher.cs:line 306
at Microsoft.VisualStudio.Services.FileChangeSubscription.<>c__DisplayClass33_0.<NotifyCore>b__0()
``` | True | NRE in TaskDelayScheduler - Observed debugging 031b0db6d.

`cancel` is `true`, so line 131 must have run, and `PendingUpdateTokenSource` would not have been `null`.
To have `cts` equal `null` on line 135, `PendingUpdateTokenSource` must have been cleared between 131 and 133. Presumably a callback registered on cancellation ran synchronously on the calling thread and cleared it, which would have entered the lock.
A simple fix for the NRE would be to copy `PendingUpdateTokenSource` to a local earlier and reuse it for reads. However, there's possibly a deeper issue here.
```
System.NullReferenceException
HResult=0x80004003
Message=Object reference not set to an instance of an object.
Source=Microsoft.VisualStudio.ProjectSystem.Managed
StackTrace:
at Microsoft.VisualStudio.Threading.Tasks.TaskDelayScheduler.ClearPendingUpdates(Boolean cancel) in D:\repos\project-system\src\Microsoft.VisualStudio.ProjectSystem.Managed\Threading\Tasks\TaskDelayScheduler.cs:line 135
at Microsoft.VisualStudio.Threading.Tasks.TaskDelayScheduler.ScheduleAsyncTask(Func`2 asyncFunctionToCall) in D:\repos\project-system\src\Microsoft.VisualStudio.ProjectSystem.Managed\Threading\Tasks\TaskDelayScheduler.cs:line 63
at Microsoft.VisualStudio.ProjectSystem.VS.NuGet.ProjectAssetFileWatcher.FilesChanged(UInt32 cChanges, String[] rgpszFile, UInt32[] rggrfChange) in D:\repos\project-system\src\Microsoft.VisualStudio.ProjectSystem.Managed.VS\ProjectSystem\VS\NuGet\ProjectAssetFileWatcher.cs:line 306
at Microsoft.VisualStudio.Services.FileChangeSubscription.<>c__DisplayClass33_0.<NotifyCore>b__0()
``` | reli | nre in taskdelayscheduler observed debugging cancel is true so line must have run and pendingupdatetokensource would not have been null to have cts equal null on line pendingupdatetokensource must have been cleared between and presumably a callback on cancellation did on the calling thread which would enter the lock a simple fix for the nre would be to copy pendingupdatetokensource to a local earlier and re use it for reads however there s possibly a deeper issue here system nullreferenceexception hresult message object reference not set to an instance of an object source microsoft visualstudio projectsystem managed stacktrace at microsoft visualstudio threading tasks taskdelayscheduler clearpendingupdates boolean cancel in d repos project system src microsoft visualstudio projectsystem managed threading tasks taskdelayscheduler cs line at microsoft visualstudio threading tasks taskdelayscheduler scheduleasynctask func asyncfunctiontocall in d repos project system src microsoft visualstudio projectsystem managed threading tasks taskdelayscheduler cs line at microsoft visualstudio projectsystem vs nuget projectassetfilewatcher fileschanged cchanges string rgpszfile rggrfchange in d repos project system src microsoft visualstudio projectsystem managed vs projectsystem vs nuget projectassetfilewatcher cs line at microsoft visualstudio services filechangesubscription c b | 1 |
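The local-copy fix suggested in the TaskDelayScheduler report above can be sketched language-neutrally. The real code is C#; in this Python sketch, `threading.Event` merely stands in for a `CancellationTokenSource`, and the class name is invented for illustration:

```python
import threading

class DelayScheduler:
    def __init__(self):
        self._lock = threading.Lock()
        # Stand-in for the C# PendingUpdateTokenSource field.
        self._pending = threading.Event()

    def clear_pending_updates(self, cancel):
        # Snapshot the shared field into a local under the lock; afterwards
        # only the local is dereferenced, so a concurrent thread clearing
        # the field cannot cause a null dereference between check and use.
        with self._lock:
            cts = self._pending
            self._pending = None
        if cancel and cts is not None:
            cts.set()  # analogue of CancellationTokenSource.Cancel()
        return cts

scheduler = DelayScheduler()
token = scheduler.clear_pending_updates(cancel=True)
```

Because the field is read exactly once, a cancellation callback that clears it concurrently can no longer race the second read the stack trace points at.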
2,200 | 24,121,809,908 | IssuesEvent | 2022-09-20 19:26:37 | NVIDIA/spark-rapids | https://api.github.com/repos/NVIDIA/spark-rapids | closed | [BUG] Segfault when partitioning empty batch | bug reliability | **Describe the bug**
While trying one of the potential fixes for #3244, I encountered a surprising segfault in `GpuPartitioning` and cudf `Table.partition`. Here are the relevant details from the hs_err file:
```
siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000000
[...]
Stack: [0x00007fc47817c000,0x00007fc47827d000], sp=0x00007fc47827a200, free space=1016k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [cudfjni2336658180527413000.so+0x1910fd] Java_ai_rapids_cudf_Table_partition+0x17d
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j ai.rapids.cudf.Table.partition(JJI[I)[J+0
j ai.rapids.cudf.Table.partition(Lai/rapids/cudf/ColumnView;I)Lai/rapids/cudf/PartitionedTable;+23
j com.nvidia.spark.rapids.GpuHashPartitioning.$anonfun$partitionInternalAndClose$6(Lcom/nvidia/spark/rapids/GpuHashPartitioning;Lai/rapids/cudf/ColumnVector;Lai/rapids/cudf/Table;)Lai/rapids/cudf/PartitionedTable;+6
j com.nvidia.spark.rapids.GpuHashPartitioning$$Lambda$3671.apply(Ljava/lang/Object;)Ljava/lang/Object;+12
j com.nvidia.spark.rapids.Arm.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+2
j com.nvidia.spark.rapids.Arm.withResource$(Lcom/nvidia/spark/rapids/Arm;Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.$anonfun$partitionInternalAndClose$5(Lcom/nvidia/spark/rapids/GpuHashPartitioning;Lorg/apache/spark/sql/vectorized/ColumnarBatch;Lai/rapids/cudf/ColumnVector;)Lai/rapids/cudf/PartitionedTable;+12
j com.nvidia.spark.rapids.GpuHashPartitioning$$Lambda$3670.apply(Ljava/lang/Object;)Ljava/lang/Object;+12
j com.nvidia.spark.rapids.Arm.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+2
j com.nvidia.spark.rapids.Arm.withResource$(Lcom/nvidia/spark/rapids/Arm;Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.$anonfun$partitionInternalAndClose$1(Lcom/nvidia/spark/rapids/GpuHashPartitioning;Lorg/apache/spark/sql/vectorized/ColumnarBatch;)Lai/rapids/cudf/PartitionedTable;+37
j com.nvidia.spark.rapids.GpuHashPartitioning$$Lambda$3663.apply(Ljava/lang/Object;)Ljava/lang/Object;+8
j com.nvidia.spark.rapids.Arm.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+2
j com.nvidia.spark.rapids.Arm.withResource$(Lcom/nvidia/spark/rapids/Arm;Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.partitionInternalAndClose(Lorg/apache/spark/sql/vectorized/ColumnarBatch;)Lscala/Tuple2;+13
j com.nvidia.spark.rapids.GpuHashPartitioning.$anonfun$columnarEval$2(Lcom/nvidia/spark/rapids/GpuHashPartitioning;Lorg/apache/spark/sql/vectorized/ColumnarBatch;Lai/rapids/cudf/NvtxRange;)Lscala/Tuple2;+2
j com.nvidia.spark.rapids.GpuHashPartitioning$$Lambda$3662.apply(Ljava/lang/Object;)Ljava/lang/Object;+12
j com.nvidia.spark.rapids.Arm.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+2
j com.nvidia.spark.rapids.Arm.withResource$(Lcom/nvidia/spark/rapids/Arm;Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.$anonfun$columnarEval$1(Lcom/nvidia/spark/rapids/GpuHashPartitioning;Lorg/apache/spark/sql/vectorized/ColumnarBatch;Lai/rapids/cudf/NvtxRange;)[Lscala/Tuple2;+27
j com.nvidia.spark.rapids.GpuHashPartitioning$$Lambda$3661.apply(Ljava/lang/Object;)Ljava/lang/Object;+12
j com.nvidia.spark.rapids.Arm.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+2
j com.nvidia.spark.rapids.Arm.withResource$(Lcom/nvidia/spark/rapids/Arm;Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.columnarEval(Lorg/apache/spark/sql/vectorized/ColumnarBatch;)Ljava/lang/Object;+21
j org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExec$.$anonfun$prepareBatchShuffleDependency$3(Lcom/nvidia/spark/rapids/GpuExpression;Lorg/apache/spark/sql/vectorized/ColumnarBatch;)Ljava/lang/Object;+2
j org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExec$$$Lambda$3609.apply(Ljava/lang/Object;)Ljava/lang/Object;+8
j org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExec$$anon$1.partNextBatch()V+154
j org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExec$$anon$1.hasNext()Z+21
j org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(Lscala/collection/Iterator;)V+44
j org.apache.spark.shuffle.ShuffleWriteProcessor.write(Lorg/apache/spark/rdd/RDD;Lorg/apache/spark/ShuffleDependency;JLorg/apache/spark/TaskContext;Lorg/apache/spark/Partition;)Lorg/apache/spark/scheduler/MapStatus;+46
j org.apache.spark.scheduler.ShuffleMapTask.runTask(Lorg/apache/spark/TaskContext;)Lorg/apache/spark/scheduler/MapStatus;+189
j org.apache.spark.scheduler.ShuffleMapTask.runTask(Lorg/apache/spark/TaskContext;)Ljava/lang/Object;+2
j org.apache.spark.scheduler.Task.run(JILorg/apache/spark/metrics/MetricsSystem;Lscala/collection/immutable/Map;)Ljava/lang/Object;+215
j org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Lorg/apache/spark/executor/Executor$TaskRunner;Lscala/runtime/BooleanRef;)Ljava/lang/Object;+32
j org.apache.spark.executor.Executor$TaskRunner$$Lambda$2449.apply()Ljava/lang/Object;+8
j org.apache.spark.util.Utils$.tryWithSafeFinally(Lscala/Function0;Lscala/Function0;)Ljava/lang/Object;+4
j org.apache.spark.executor.Executor$TaskRunner.run()V+421
j java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
j java.util.concurrent.ThreadPoolExecutor$Worker.run()V+5
j java.lang.Thread.run()V+11
```
**Steps/Code to reproduce bug**
Apply the following patch:
```
diff --git a/sql-plugin/src/main/scala/org/apache/spark/sql/rapids/execution/GpuBroadcastNestedLoopJoinExec.scala b/sql-plugin/src/main/scala/org/apache/spark/sql/rapids/execution/GpuBroadcastNestedLoopJoinExec.scala
index 191750b4..7c00beec 100644
--- a/sql-plugin/src/main/scala/org/apache/spark/sql/rapids/execution/GpuBroadcastNestedLoopJoinExec.scala
+++ b/sql-plugin/src/main/scala/org/apache/spark/sql/rapids/execution/GpuBroadcastNestedLoopJoinExec.scala
@@ -489,8 +489,11 @@ abstract class GpuBroadcastNestedLoopJoinExecBase(
}
case LeftAnti =>
// degenerate case, no rows are returned.
- val childRDD = left.executeColumnar()
- new GpuCoalesceExec.EmptyRDDWithPartitions(sparkContext, childRDD.getNumPartitions)
+ import scala.collection.JavaConverters._
+ val batchAttrs = output.asJava
+ left.executeColumnar().mapPartitions { _ =>
+ Iterator.single(GpuColumnVector.emptyBatch(batchAttrs))
+ }
case _ =>
// Everything else is treated like an unconditional cross join
val buildSide = getGpuBuildSide
```
Then try to perform a `distinct` on a left anti join with no condition, e.g.:
```
scala> val df = spark.read.parquet("/tmp/df.parquet")
df: org.apache.spark.sql.DataFrame = [id: bigint, id2: bigint]
scala> df.show
+---+---+
| id|id2|
+---+---+
| 1| 2|
| 2| 4|
| 3| 6|
| 4| 8|
| 5| 10|
| 6| 12|
| 7| 14|
| 8| 16|
| 9| 18|
| 10| 20|
+---+---+
scala> val df2 = spark.read.parquet("/tmp/df2.parquet")
df2: org.apache.spark.sql.DataFrame = [id: bigint, id2: bigint]
scala> df2.show
+---+---+
| id|id2|
+---+---+
| 8| 10|
| 9| 11|
| 10| 12|
+---+---+
scala> df.join(df2, Seq(), "leftanti").distinct.collect
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f77d106d0fd, pid=1877478, tid=0x00007f77d09d8700
#
# JRE version: OpenJDK Runtime Environment (8.0_292-b10) (build 1.8.0_292-8u292-b10-0ubuntu1~20.04-b10)
# Java VM: OpenJDK 64-Bit Server VM (25.292-b10 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [cudfjni6692880887576357286.so+0x1910fd] Java_ai_rapids_cudf_Table_partition+0x17d
```
**Expected behavior**
The RAPIDS Accelerator should never segfault on a query.
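The plugin and cuDF are Scala/C++; as a generic illustration of the defensive shape this report motivates—refusing to hand a degenerate, zero-row table to native partitioning code—here is a Python sketch. The hash-modulo scheme and function name are illustrative, not cuDF's actual implementation:

```python
def partition_rows(rows, key, num_parts):
    # Defensive guard: partitioning an empty batch returns empty partitions
    # immediately instead of invoking the (native, in the real system)
    # partitioning kernel on a degenerate input.
    parts = [[] for _ in range(num_parts)]
    if not rows:
        return parts
    for row in rows:
        parts[hash(key(row)) % num_parts].append(row)
    return parts

empty_parts = partition_rows([], key=lambda r: r[0], num_parts=4)
two_parts = partition_rows([(1, 2), (2, 4)], key=lambda r: r[0], num_parts=2)
```

An equivalent early-return in `GpuPartitioning` (or a fix in the native `Table.partition` path) would keep an empty batch from ever reaching the code that crashes above.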
| True | [BUG] Segfault when partitioning empty batch - **Describe the bug**
While trying one of the potential fixes for #3244, I encountered a surprising segfault in `GpuPartitioning` and cudf `Table.partition`. Here are the relevant details from the hs_err file:
```
siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000000
[...]
Stack: [0x00007fc47817c000,0x00007fc47827d000], sp=0x00007fc47827a200, free space=1016k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [cudfjni2336658180527413000.so+0x1910fd] Java_ai_rapids_cudf_Table_partition+0x17d
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j ai.rapids.cudf.Table.partition(JJI[I)[J+0
j ai.rapids.cudf.Table.partition(Lai/rapids/cudf/ColumnView;I)Lai/rapids/cudf/PartitionedTable;+23
j com.nvidia.spark.rapids.GpuHashPartitioning.$anonfun$partitionInternalAndClose$6(Lcom/nvidia/spark/rapids/GpuHashPartitioning;Lai/rapids/cudf/ColumnVector;Lai/rapids/cudf/Table;)Lai/rapids/cudf/PartitionedTable;+6
j com.nvidia.spark.rapids.GpuHashPartitioning$$Lambda$3671.apply(Ljava/lang/Object;)Ljava/lang/Object;+12
j com.nvidia.spark.rapids.Arm.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+2
j com.nvidia.spark.rapids.Arm.withResource$(Lcom/nvidia/spark/rapids/Arm;Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.$anonfun$partitionInternalAndClose$5(Lcom/nvidia/spark/rapids/GpuHashPartitioning;Lorg/apache/spark/sql/vectorized/ColumnarBatch;Lai/rapids/cudf/ColumnVector;)Lai/rapids/cudf/PartitionedTable;+12
j com.nvidia.spark.rapids.GpuHashPartitioning$$Lambda$3670.apply(Ljava/lang/Object;)Ljava/lang/Object;+12
j com.nvidia.spark.rapids.Arm.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+2
j com.nvidia.spark.rapids.Arm.withResource$(Lcom/nvidia/spark/rapids/Arm;Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.$anonfun$partitionInternalAndClose$1(Lcom/nvidia/spark/rapids/GpuHashPartitioning;Lorg/apache/spark/sql/vectorized/ColumnarBatch;)Lai/rapids/cudf/PartitionedTable;+37
j com.nvidia.spark.rapids.GpuHashPartitioning$$Lambda$3663.apply(Ljava/lang/Object;)Ljava/lang/Object;+8
j com.nvidia.spark.rapids.Arm.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+2
j com.nvidia.spark.rapids.Arm.withResource$(Lcom/nvidia/spark/rapids/Arm;Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.partitionInternalAndClose(Lorg/apache/spark/sql/vectorized/ColumnarBatch;)Lscala/Tuple2;+13
j com.nvidia.spark.rapids.GpuHashPartitioning.$anonfun$columnarEval$2(Lcom/nvidia/spark/rapids/GpuHashPartitioning;Lorg/apache/spark/sql/vectorized/ColumnarBatch;Lai/rapids/cudf/NvtxRange;)Lscala/Tuple2;+2
j com.nvidia.spark.rapids.GpuHashPartitioning$$Lambda$3662.apply(Ljava/lang/Object;)Ljava/lang/Object;+12
j com.nvidia.spark.rapids.Arm.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+2
j com.nvidia.spark.rapids.Arm.withResource$(Lcom/nvidia/spark/rapids/Arm;Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.$anonfun$columnarEval$1(Lcom/nvidia/spark/rapids/GpuHashPartitioning;Lorg/apache/spark/sql/vectorized/ColumnarBatch;Lai/rapids/cudf/NvtxRange;)[Lscala/Tuple2;+27
j com.nvidia.spark.rapids.GpuHashPartitioning$$Lambda$3661.apply(Ljava/lang/Object;)Ljava/lang/Object;+12
j com.nvidia.spark.rapids.Arm.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+2
j com.nvidia.spark.rapids.Arm.withResource$(Lcom/nvidia/spark/rapids/Arm;Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.withResource(Ljava/lang/AutoCloseable;Lscala/Function1;)Ljava/lang/Object;+3
j com.nvidia.spark.rapids.GpuHashPartitioning.columnarEval(Lorg/apache/spark/sql/vectorized/ColumnarBatch;)Ljava/lang/Object;+21
j org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExec$.$anonfun$prepareBatchShuffleDependency$3(Lcom/nvidia/spark/rapids/GpuExpression;Lorg/apache/spark/sql/vectorized/ColumnarBatch;)Ljava/lang/Object;+2
j org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExec$$$Lambda$3609.apply(Ljava/lang/Object;)Ljava/lang/Object;+8
j org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExec$$anon$1.partNextBatch()V+154
j org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExec$$anon$1.hasNext()Z+21
j org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(Lscala/collection/Iterator;)V+44
j org.apache.spark.shuffle.ShuffleWriteProcessor.write(Lorg/apache/spark/rdd/RDD;Lorg/apache/spark/ShuffleDependency;JLorg/apache/spark/TaskContext;Lorg/apache/spark/Partition;)Lorg/apache/spark/scheduler/MapStatus;+46
j org.apache.spark.scheduler.ShuffleMapTask.runTask(Lorg/apache/spark/TaskContext;)Lorg/apache/spark/scheduler/MapStatus;+189
j org.apache.spark.scheduler.ShuffleMapTask.runTask(Lorg/apache/spark/TaskContext;)Ljava/lang/Object;+2
j org.apache.spark.scheduler.Task.run(JILorg/apache/spark/metrics/MetricsSystem;Lscala/collection/immutable/Map;)Ljava/lang/Object;+215
j org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Lorg/apache/spark/executor/Executor$TaskRunner;Lscala/runtime/BooleanRef;)Ljava/lang/Object;+32
j org.apache.spark.executor.Executor$TaskRunner$$Lambda$2449.apply()Ljava/lang/Object;+8
j org.apache.spark.util.Utils$.tryWithSafeFinally(Lscala/Function0;Lscala/Function0;)Ljava/lang/Object;+4
j org.apache.spark.executor.Executor$TaskRunner.run()V+421
j java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
j java.util.concurrent.ThreadPoolExecutor$Worker.run()V+5
j java.lang.Thread.run()V+11
```
**Steps/Code to reproduce bug**
Apply the following patch:
```
diff --git a/sql-plugin/src/main/scala/org/apache/spark/sql/rapids/execution/GpuBroadcastNestedLoopJoinExec.scala b/sql-plugin/src/main/scala/org/apache/spark/sql/rapids/execution/GpuBroadcastNestedLoopJoinExec.scala
index 191750b4..7c00beec 100644
--- a/sql-plugin/src/main/scala/org/apache/spark/sql/rapids/execution/GpuBroadcastNestedLoopJoinExec.scala
+++ b/sql-plugin/src/main/scala/org/apache/spark/sql/rapids/execution/GpuBroadcastNestedLoopJoinExec.scala
@@ -489,8 +489,11 @@ abstract class GpuBroadcastNestedLoopJoinExecBase(
}
case LeftAnti =>
// degenerate case, no rows are returned.
- val childRDD = left.executeColumnar()
- new GpuCoalesceExec.EmptyRDDWithPartitions(sparkContext, childRDD.getNumPartitions)
+ import scala.collection.JavaConverters._
+ val batchAttrs = output.asJava
+ left.executeColumnar().mapPartitions { _ =>
+ Iterator.single(GpuColumnVector.emptyBatch(batchAttrs))
+ }
case _ =>
// Everything else is treated like an unconditional cross join
val buildSide = getGpuBuildSide
```
Then try to perform a `distinct` on a left anti join with no condition, e.g.:
```
scala> val df = spark.read.parquet("/tmp/df.parquet")
df: org.apache.spark.sql.DataFrame = [id: bigint, id2: bigint]
scala> df.show
+---+---+
| id|id2|
+---+---+
| 1| 2|
| 2| 4|
| 3| 6|
| 4| 8|
| 5| 10|
| 6| 12|
| 7| 14|
| 8| 16|
| 9| 18|
| 10| 20|
+---+---+
scala> val df2 = spark.read.parquet("/tmp/df2.parquet")
df2: org.apache.spark.sql.DataFrame = [id: bigint, id2: bigint]
scala> df2.show
+---+---+
| id|id2|
+---+---+
| 8| 10|
| 9| 11|
| 10| 12|
+---+---+
scala> df.join(df2, Seq(), "leftanti").distinct.collect
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f77d106d0fd, pid=1877478, tid=0x00007f77d09d8700
#
# JRE version: OpenJDK Runtime Environment (8.0_292-b10) (build 1.8.0_292-8u292-b10-0ubuntu1~20.04-b10)
# Java VM: OpenJDK 64-Bit Server VM (25.292-b10 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [cudfjni6692880887576357286.so+0x1910fd] Java_ai_rapids_cudf_Table_partition+0x17d
```
**Expected behavior**
The RAPIDS Accelerator should never segfault on a query.
| reli | 1 |
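The `LeftAnti` patch above routes a deliberately empty batch (`GpuColumnVector.emptyBatch`) into hash partitioning, and the native `Table.partition` call segfaults on it. A minimal Python sketch of the kind of guard that keeps such a batch away from native code — the `Batch` class and `native_partition` callback are hypothetical stand-ins, not the plugin's real API, which goes through cudf's JNI `Table.partition`:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Batch:
    """Hypothetical stand-in for a columnar batch."""
    num_rows: int
    num_columns: int
    rows: list


def safe_partition(batch: Batch, num_parts: int,
                   native_partition: Callable[[Batch, int], List[list]]) -> List[list]:
    # Never hand an empty or column-less batch to native code: a native
    # partitioner may dereference column buffers that do not exist.
    if batch.num_rows == 0 or batch.num_columns == 0:
        return [[] for _ in range(num_parts)]
    return native_partition(batch, num_parts)
```

The fix that actually landed may differ; the point is only that the degenerate empty-batch case needs an explicit early-out before the JNI boundary.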
1,094 | 13,063,362,096 | IssuesEvent | 2020-07-30 16:25:22 | FoundationDB/fdb-kubernetes-operator | https://api.github.com/repos/FoundationDB/fdb-kubernetes-operator | closed | Cannot control fdbcli timeouts | reliability | In testing we managed to exhaust storage disk space in a cluster; this caused status json to end up taking 12 seconds, longer than the hardcoded 10 second timeout for admin commands.
I can understand not wanting a longer default timeout, but couldn't see any way to set user-controlled timeouts: I think it would be nice if the schema for the DB permitted setting this; then we could at least override it when a given DB is giving grief. Most of the machinery to do so seems to be in place already. | True | reli | 1 |
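The request in this issue is essentially to make the hardcoded 10-second fdbcli timeout user-overridable. A hedged Python sketch of that shape — a default that callers can override per invocation (`run_cli` and the default value are illustrative; the real operator is not written in Python):

```python
import subprocess

DEFAULT_TIMEOUT_SECONDS = 10.0  # mirrors the hardcoded value the issue complains about


def run_cli(args, timeout=None):
    """Run a CLI command, letting callers override the admin-command timeout."""
    effective = DEFAULT_TIMEOUT_SECONDS if timeout is None else timeout
    # raises subprocess.TimeoutExpired if the command outlives the deadline
    return subprocess.run(args, capture_output=True, text=True, timeout=effective)
```

Surfacing `timeout` in the cluster schema, as the issue suggests, would let a slow `status json` (12 seconds here) be tolerated for one misbehaving database without raising the default everywhere.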
1,173 | 13,534,304,268 | IssuesEvent | 2020-09-16 05:22:48 | Azure/azure-sdk-for-java | https://api.github.com/repos/Azure/azure-sdk-for-java | closed | [BUG] Downloading of hdinsight-mgmt 1.3.5 via maven + ivy yields invalid zip | HDInsight Mgmt customer-reported tenet-reliability | **Describe the bug**
Hello! I am trying to use the Azure SDK for Java to use the hdinsight management library.
I specifically am looking to use version `1.3.5`, as it is the only version, as far as I can tell, with `com.microsoft.azure.management.hdinsight.v2018_06_01_preview.DiskEncryptionProperties`.
I download the sdk via maven + ivy, with the dependency specified as:
```xml
<ivy-module version="2.0" xmlns:m="http://ant.apache.org/ivy/maven"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd">
<info organisation="com.myorganization" module="application"/>
<dependencies defaultconf="default->default">
<!-- ... Other dependencies ....-->
<dependency org="com.microsoft.azure.hdinsight.v2018_06_01_preview" name="azure-mgmt-hdinsight" rev="1.3.5"/>
</dependencies>
</ivy-module>
```
When compiling code that references this dependency, I run into the following:
```shell
compile-base:
[echo] *** Compiling application ***
[javac] Compiling 178 source files to /path/to/application/build/classes
[javac] error: error reading /path/to/application/lib/azure-mgmt-hdinsight-1.3.5.jar; zip END header not found
BUILD FAILED
/path/to/application/build.xml:115: Compile failed; see the compiler error output for details.
```
Additionally, running `unzip` produces the following:
```shell
% unzip -t lib/azure-mgmt-hdinsight-1.3.5.jar
Archive: lib/azure-mgmt-hdinsight-1.3.5.jar
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of lib/azure-mgmt-hdinsight-1.3.5.jar or
lib/azure-mgmt-hdinsight-1.3.5.jar.zip, and cannot find lib/azure-mgmt-hdinsight-1.3.5.jar.ZIP, period.
%
```
I do not experience similar issues with 1.3.4, but it does not have the feature that I am looking for.
***Exception or Stack Trace***
Copied from description:
```
compile-base:
[echo] *** Compiling application ***
[javac] Compiling 178 source files to /path/to/application/build/classes
[javac] error: error reading /path/to/application/lib/azure-mgmt-hdinsight-1.3.5.jar; zip END header not found
BUILD FAILED
/path/to/application/build.xml:115: Compile failed; see the compiler error output for details.
```
**To Reproduce**
A minimal reproduction (see above for full context)
1. download the 1.3.5 JAR from https://mvnrepository.com/artifact/com.microsoft.azure.hdinsight.v2018_06_01_preview/azure-mgmt-hdinsight/1.3.5
1. run `unzip -t` on the download
1. see error from `unzip`
***Code Snippet***
ivy.xml file copied from above:
```xml
<ivy-module version="2.0" xmlns:m="http://ant.apache.org/ivy/maven"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd">
<info organisation="com.myorganization" module="application"/>
<dependencies defaultconf="default->default">
<!-- ... Other dependencies ....-->
<dependency org="com.microsoft.azure.hdinsight.v2018_06_01_preview" name="azure-mgmt-hdinsight" rev="1.3.5"/>
</dependencies>
</ivy-module>
```
**Expected behavior**
I expect a valid zip from the repository.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Setup (please complete the following information):**
- OS: [e.g. iOS] MacOSX Catalina (10.15.6)
- IDE : [e.g. IntelliJ] IntelliJ
- Version of the Library used: 1.3.5
**Additional context**
Add any other context about the problem here.
**Information Checklist**
Kindly make sure that you have added all of the following information above and checked off the required fields, otherwise we will treat the issue as an incomplete report
- [x] Bug Description Added
- [x] Repro Steps Added
- [x] Setup information Added
| True | reli | 1 |
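The `unzip -t` diagnosis above (missing end-of-central-directory record) can be automated before a build ever runs `javac`. A minimal Python sketch, assuming the jar sits on disk at a known path:

```python
import zipfile


def check_jar(path: str) -> bool:
    """True when the file has a readable ZIP central directory and intact entries.

    A truncated download -- the "zip END header not found" case above --
    fails the is_zipfile check, because the end-of-central-directory
    record is missing from the tail of the file.
    """
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return zf.testzip() is None  # None means every entry's CRC checks out
```

A build script could run this over everything Ivy resolved into `lib/` and fail fast with the offending filename instead of an opaque compiler error.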
36,955 | 9,933,339,449 | IssuesEvent | 2019-07-02 12:06:42 | jupyterlab/jupyterlab | https://api.github.com/repos/jupyterlab/jupyterlab | closed | Enable Publish from Windows | tag:Build System tag:DevOps type:Maintenance | We should support publishing from a Windows machine. We ran into https://bugs.python.org/issue31226 when trying to publish 1.0.0.
For dev_mode, we should have a `clean:node` command that removes all node_modules. This should be called before trying to create the `sdist` if on Windows: https://github.com/jupyterlab/jupyterlab/blob/e2fd4c8841a7393f9131fb6b6e8252864a8bd351/buildutils/src/publish.ts#L53.
We can refactor the logic [here](https://github.com/jupyterlab/jupyterlab/blob/master/clean.py#L8) used for `clean:slate` into a node module that also calls `rmdir` in a child_process.
| 1.0 | non_reli | 0
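The `clean:node` idea above — remove every `node_modules` tree before creating the `sdist` on Windows — can be sketched in a few lines of Python (the paths and the exact hook into the publish script are assumptions):

```python
import os
import shutil


def clean_node_modules(root: str) -> int:
    """Delete every node_modules directory under root; return how many were removed."""
    removed = 0
    for dirpath, dirnames, _filenames in os.walk(root, topdown=True):
        if "node_modules" in dirnames:
            shutil.rmtree(os.path.join(dirpath, "node_modules"))
            dirnames.remove("node_modules")  # do not descend into the deleted tree
            removed += 1
    return removed
```

The issue proposes calling something like this for dev_mode before packaging, so the deeply nested dependency trees that trip Windows path handling are gone by the time the sdist is built.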
51,149 | 10,590,848,877 | IssuesEvent | 2019-10-09 09:36:23 | codemastermick/FrameTracker | https://api.github.com/repos/codemastermick/FrameTracker | closed | Fix "similar-code" issue in src/app/components/melee-summary/melee-summary.component.ts | codeclimate | Similar blocks of code found in 2 locations. Consider refactoring.
https://codeclimate.com/github/codemastermick/FrameTracker/src/app/components/melee-summary/melee-summary.component.ts#issue_5d99117006d4c10001000120 | 1.0 | non_reli | 0
72,954 | 3,393,412,126 | IssuesEvent | 2015-12-01 00:06:52 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | ETCD Authentication model in Kubernetes | priority/P2 team/control-plane | Does Kubernetes support the authentication model of etcd for connecting the components?
ETCD authentication module
https://github.com/coreos/etcd/blob/master/Documentation/authentication.md
| 1.0 | non_reli | 0
8,446 | 10,459,359,712 | IssuesEvent | 2019-09-20 10:44:26 | jiangdashao/Matrix-Issues | https://api.github.com/repos/jiangdashao/Matrix-Issues | opened | [INCOMPATIBILITY] ProtocolSupport | Incompatibility | ## Troubleshooting Information
`Change - [ ] to - [X] to check the checkboxes below.`
- [X] The incompatible plugin is up-to-date
- [X] Matrix and ProtocolLib are up-to-date
- [X] Matrix is running on a 1.8, 1.12, 1.13, or 1.14 server
- [X] The issue happens on default config.yml and checks.yml
- [X] I've tested if the issue happens on default config
## Issue Information
**Server version**: 1.12.2
**Incompatible plugin**: ProtocolSupport
**Verbose messages (or) console errors**: https://pastebin.com/QQ88GexR
**How/when does this happen**: when player hits NPC
**Video of incompatibility**:
**Other information**: you use the outdated animation packet ID 1 in your plugin. In 1.11+, the animation packet with ID 1 was removed, so you should use the Entity Status packet (Living status 2) for the hurt animation.
https://wiki.vg/Protocol#Entity_Status
https://wiki.vg/Entity_statuses#Living
## Configuration Files
**Link to checks.yml file**: default
**Link to config.yml file**: default
| True | non_reli | 0
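The versioning rule spelled out in "Other information" — the legacy Animation packet with ID 1 before 1.11, the Entity Status packet with status 2 ("living entity hurt") from 1.11 on — reduces to a version check. A sketch in Python (protocol number 315 for 1.11 is my assumption and worth verifying against wiki.vg; the dict shape is illustrative, not ProtocolLib's API):

```python
PROTOCOL_1_11 = 315  # assumed protocol number for Minecraft 1.11


def hurt_animation_packet(protocol_version: int) -> dict:
    """Pick the right packet for a hurt animation by client protocol version."""
    if protocol_version >= PROTOCOL_1_11:
        # 1.11+: the Animation packet with id 1 no longer exists;
        # Entity Status with status 2 means "living entity hurt"
        return {"packet": "EntityStatus", "status": 2}
    return {"packet": "Animation", "animation_id": 1}
```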
190,487 | 6,818,953,725 | IssuesEvent | 2017-11-07 08:26:03 | joshleeb/pylon | https://api.github.com/repos/joshleeb/pylon | closed | Compatibility Issues Across Platforms | HIGH PRIORITY investigation web | **Goal**
Improve the usability of PylonWeb across all platforms.
**Approach**
Explore compatibility issues that PylonWeb may be having across other platforms, especially on Mobile.
**Notes** | 1.0 | non_reli | 0
345,503 | 30,818,348,578 | IssuesEvent | 2023-08-01 14:46:07 | red-hat-storage/ocs-ci | https://api.github.com/repos/red-hat-storage/ocs-ci | closed | Test test_change_reclaim_policy_of_pv failed | TestCase failing Squad/Green | Failed test cases:
tests/manage/pv_services/test_change_reclaim_policy_of_pv.py::TestChangeReclaimPolicyOfPv::test_change_reclaim_policy_of_pv[CephBlockPool-Delete]
tests/manage/pv_services/test_change_reclaim_policy_of_pv.py::TestChangeReclaimPolicyOfPv::test_change_reclaim_policy_of_pv[CephBlockPool-Retain]
Error:
```
Message: AssertionError: Volume associated with PVC pvc-test-124ba446267f401ebf2a77876362a97 still exists in backend
assert False
where False = verify_volume_deleted_in_backend(interface='CephBlockPool', image_uuid='4e35f980-ad4d-45e9-af36-c4bc18b3a6a5', pool_name='ocs-storagecluster-cephblockpool')
Type: None
```
RUN ID: 1684752447 | 1.0 | non_reli | 0
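Single-shot assertions like the `verify_volume_deleted_in_backend` check above are prone to exactly this failure mode when backend deletion is asynchronous. One common hardening — not necessarily what ocs-ci ended up doing — is to poll with a deadline; a sketch with injectable clock/sleep so it can be tested without real waiting:

```python
import time


def wait_for(predicate, timeout=60.0, interval=1.0,
             _sleep=time.sleep, _clock=time.monotonic):
    """Poll predicate() until it returns True or the deadline passes."""
    deadline = _clock() + timeout
    while True:
        if predicate():
            return True
        if _clock() >= deadline:
            return False
        _sleep(interval)
```

Something like `wait_for(lambda: volume_gone(), timeout=300)` would tolerate slow backend cleanup instead of failing on the first check.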
2,339 | 24,796,126,225 | IssuesEvent | 2022-10-24 17:25:23 | ppy/osu | https://api.github.com/repos/ppy/osu | closed | Client crashed after editing qualifier result | area:tournament type:reliability | ### Type
Crash to desktop
### Bug description
The game will crash if you open an "incomplete" team's "edit seeding results" screen and quit it fast enough.
The game version is a special version given on #20781, but I guess the latest stable lazer version has the same error.
Steps to reproduce:
1. Edit `bracket.json` under the tournament's file folder and add an **incomplete** object to the `Teams` array, containing only the necessary attributes.
The following object is an example; as you can see, it is missing `BeatmapInfo` and other non-essential fields:
```` json
{
"FullName": "Myon_",
"FlagName": "16626025",
"Acronym": "16626025",
"SeedingResults": [
{
"Beatmaps": [
{
"ID": 3831056,
"Score": 973381,
"Seed": 10
}
],
"Mod": "SV",
"Seed": 10
},
{
"Beatmaps": [
{
"ID": 3831047,
"Score": 958216,
"Seed": 60
},
{
"ID": 3831054,
"Score": 981045,
"Seed": 34
},
{
"ID": 3831046,
"Score": 952202,
"Seed": 60
}
],
"Mod": "RC",
"Seed": 48
},
{
"Beatmaps": [
{
"ID": 3831043,
"Score": 970222,
"Seed": 26
},
{
"ID": 3831045,
"Score": 971863,
"Seed": 21
}
],
"Mod": "LN",
"Seed": 23
},
{
"Beatmaps": [
{
"ID": 3831058,
"Score": 965141,
"Seed": 39
},
{
"ID": 3831057,
"Score": 942703,
"Seed": 48
}
],
"Mod": "HB",
"Seed": 45
}
],
"Seed": "28",
"LastYearPlacing": 1,
"AverageRank": 7264.0,
"Players": [
{
"id": 16626025,
"Username": "Myon_",
"country_code": "CN",
"Rank": 7264,
"CoverUrl": "https://assets.ppy.sh/user-profile-covers/16626025/f15da64b322b32fd5ca511c3291d9cabaa2ea256eadb850402040444fcb0fdae.jpeg"
}
]
},
````
2. Open the lazer tournament client, click "Team Editor", select any team with **incomplete** information, and click "edit seeding results".
3. Quickly click "back" **before** the information is downloaded and completed.
4. Error notifications rise from the bottom and the game crashes.
If you wait until the information is completely downloaded, or open a "finalized" team, the game won't crash. As the gif shows, clicking the team called "Riemann" doesn't cause a crash, but "Myon_" does. Another experiment shows that if you wait long enough, "Myon_" doesn't crash either.
### Screenshots or videos

### Version
Special Version (#20781 mentioned, next to 2022.1008.2)
### Logs
unnecessary logs will be omitted
````
...
2022-10-16 14:14:48 [error]: An unhandled error has occurred.
2022-10-16 14:14:48 [error]: System.ObjectDisposedException: Children cannot be cleared on a disposed drawable.
2022-10-16 14:14:48 [error]: Object name: 'Container'.
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.ClearInternal(Boolean disposeChildren)
2022-10-16 14:14:48 [error]: at osu.Game.Tournament.Screens.Editors.SeedingEditorScreen.SeedingResultRow.SeedingBeatmapEditor.SeedingBeatmapRow.updatePanel()
2022-10-16 14:14:48 [error]: at osu.Game.Tournament.Screens.Editors.SeedingEditorScreen.SeedingResultRow.SeedingBeatmapEditor.SeedingBeatmapRow.<load>b__12_2(APIBeatmap res)
2022-10-16 14:14:48 [error]: at osu.Game.Online.API.APIRequest`1.<.ctor>b__8_0() in /Users/dean/Projects/osu/osu.Game/Online/API/APIRequest.cs:line 38
2022-10-16 14:14:48 [error]: at osu.Game.Online.API.APIRequest.<TriggerSuccess>b__24_0() in /Users/dean/Projects/osu/osu.Game/Online/API/APIRequest.cs:line 164
2022-10-16 14:14:48 [error]: at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
2022-10-16 14:14:48 [error]: at osu.Framework.Threading.Scheduler.Update()
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Drawable.UpdateSubTree()
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:48 [error]: at osu.Framework.Platform.GameHost.UpdateFrame()
2022-10-16 14:14:48 [error]: at osu.Framework.Threading.GameThread.processFrame()
2022-10-16 14:14:48 [verbose]: Unhandled exception has been allowed with 0 more allowable exceptions .
2022-10-16 14:14:49 [error]: An unhandled error has occurred.
2022-10-16 14:14:49 [error]: System.ObjectDisposedException: Children cannot be cleared on a disposed drawable.
2022-10-16 14:14:49 [error]: Object name: 'Container'.
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.ClearInternal(Boolean disposeChildren)
2022-10-16 14:14:49 [error]: at osu.Game.Tournament.Screens.Editors.SeedingEditorScreen.SeedingResultRow.SeedingBeatmapEditor.SeedingBeatmapRow.updatePanel()
2022-10-16 14:14:49 [error]: at osu.Game.Tournament.Screens.Editors.SeedingEditorScreen.SeedingResultRow.SeedingBeatmapEditor.SeedingBeatmapRow.<load>b__12_2(APIBeatmap res)
2022-10-16 14:14:49 [error]: at osu.Game.Online.API.APIRequest`1.<.ctor>b__8_0() in /Users/dean/Projects/osu/osu.Game/Online/API/APIRequest.cs:line 38
2022-10-16 14:14:49 [error]: at osu.Game.Online.API.APIRequest.<TriggerSuccess>b__24_0() in /Users/dean/Projects/osu/osu.Game/Online/API/APIRequest.cs:line 164
2022-10-16 14:14:49 [error]: at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
2022-10-16 14:14:49 [error]: at osu.Framework.Threading.Scheduler.Update()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Drawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Platform.GameHost.UpdateFrame()
2022-10-16 14:14:49 [error]: at osu.Framework.Threading.GameThread.processFrame()
2022-10-16 14:14:49 [verbose]: Unhandled exception has been denied .
2022-10-16 14:14:49 [error]: An unhandled error has occurred.
2022-10-16 14:14:49 [error]: System.ObjectDisposedException: Children cannot be cleared on a disposed drawable.
2022-10-16 14:14:49 [error]: Object name: 'Container'.
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.ClearInternal(Boolean disposeChildren)
2022-10-16 14:14:49 [error]: at osu.Game.Tournament.Screens.Editors.SeedingEditorScreen.SeedingResultRow.SeedingBeatmapEditor.SeedingBeatmapRow.updatePanel()
2022-10-16 14:14:49 [error]: at osu.Game.Tournament.Screens.Editors.SeedingEditorScreen.SeedingResultRow.SeedingBeatmapEditor.SeedingBeatmapRow.<load>b__12_2(APIBeatmap res)
2022-10-16 14:14:49 [error]: at osu.Game.Online.API.APIRequest`1.<.ctor>b__8_0() in /Users/dean/Projects/osu/osu.Game/Online/API/APIRequest.cs:line 38
2022-10-16 14:14:49 [error]: at osu.Game.Online.API.APIRequest.<TriggerSuccess>b__24_0() in /Users/dean/Projects/osu/osu.Game/Online/API/APIRequest.cs:line 164
2022-10-16 14:14:49 [error]: at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
2022-10-16 14:14:49 [error]: at osu.Framework.Threading.Scheduler.Update()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Drawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Platform.GameHost.UpdateFrame()
2022-10-16 14:14:49 [error]: at osu.Framework.Threading.GameThread.processFrame()
2022-10-16 14:14:49 [verbose]: Unhandled exception has been denied .
2022-10-16 14:14:54 [verbose]: Host execution state changed to Stopping
2022-10-16 14:14:59 [verbose]: Host execution state changed to Stopped
```` | True | Client crashed after editing qualifier result - ### Type
Crash to desktop
### Bug description
The game will crash if you open an "incomplete" team's "edit seeding results" screen and quit it fast enough.
The game version is a special version given on #20781, but I guess the latest stable lazer version has the same error.
Steps to reproduce:
1. Edit `bracket.json` under the tournament's file folder and add an **incomplete** object to the `Teams` array, containing only the necessary attributes.
The following object is an example; as you can see, it is missing `BeatmapInfo` and other non-essential fields:
```` json
{
"FullName": "Myon_",
"FlagName": "16626025",
"Acronym": "16626025",
"SeedingResults": [
{
"Beatmaps": [
{
"ID": 3831056,
"Score": 973381,
"Seed": 10
}
],
"Mod": "SV",
"Seed": 10
},
{
"Beatmaps": [
{
"ID": 3831047,
"Score": 958216,
"Seed": 60
},
{
"ID": 3831054,
"Score": 981045,
"Seed": 34
},
{
"ID": 3831046,
"Score": 952202,
"Seed": 60
}
],
"Mod": "RC",
"Seed": 48
},
{
"Beatmaps": [
{
"ID": 3831043,
"Score": 970222,
"Seed": 26
},
{
"ID": 3831045,
"Score": 971863,
"Seed": 21
}
],
"Mod": "LN",
"Seed": 23
},
{
"Beatmaps": [
{
"ID": 3831058,
"Score": 965141,
"Seed": 39
},
{
"ID": 3831057,
"Score": 942703,
"Seed": 48
}
],
"Mod": "HB",
"Seed": 45
}
],
"Seed": "28",
"LastYearPlacing": 1,
"AverageRank": 7264.0,
"Players": [
{
"id": 16626025,
"Username": "Myon_",
"country_code": "CN",
"Rank": 7264,
"CoverUrl": "https://assets.ppy.sh/user-profile-covers/16626025/f15da64b322b32fd5ca511c3291d9cabaa2ea256eadb850402040444fcb0fdae.jpeg"
}
]
},
````
2. Open the lazer tournament client, click "Team Editor", select any team with **incomplete** information, and click "edit seeding results".
3. Quickly click "back" **before** the information is downloaded and completed.
4. Error notifications rise from the bottom and the game crashes.
If you wait until the information is completely downloaded, or open a "finalized" team, the game won't crash. As the gif shows, clicking the team called "Riemann" doesn't cause a crash, but "Myon_" does. Another experiment shows that if you wait long enough, "Myon_" doesn't crash either.
### Screenshots or videos

### Version
Special Version (#20781 mentioned, next to 2022.1008.2)
### Logs
unnecessary logs will be omitted
````
...
2022-10-16 14:14:48 [error]: An unhandled error has occurred.
2022-10-16 14:14:48 [error]: System.ObjectDisposedException: Children cannot be cleared on a disposed drawable.
2022-10-16 14:14:48 [error]: Object name: 'Container'.
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.ClearInternal(Boolean disposeChildren)
2022-10-16 14:14:48 [error]: at osu.Game.Tournament.Screens.Editors.SeedingEditorScreen.SeedingResultRow.SeedingBeatmapEditor.SeedingBeatmapRow.updatePanel()
2022-10-16 14:14:48 [error]: at osu.Game.Tournament.Screens.Editors.SeedingEditorScreen.SeedingResultRow.SeedingBeatmapEditor.SeedingBeatmapRow.<load>b__12_2(APIBeatmap res)
2022-10-16 14:14:48 [error]: at osu.Game.Online.API.APIRequest`1.<.ctor>b__8_0() in /Users/dean/Projects/osu/osu.Game/Online/API/APIRequest.cs:line 38
2022-10-16 14:14:48 [error]: at osu.Game.Online.API.APIRequest.<TriggerSuccess>b__24_0() in /Users/dean/Projects/osu/osu.Game/Online/API/APIRequest.cs:line 164
2022-10-16 14:14:48 [error]: at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
2022-10-16 14:14:48 [error]: at osu.Framework.Threading.Scheduler.Update()
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Drawable.UpdateSubTree()
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:48 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:48 [error]: at osu.Framework.Platform.GameHost.UpdateFrame()
2022-10-16 14:14:48 [error]: at osu.Framework.Threading.GameThread.processFrame()
2022-10-16 14:14:48 [verbose]: Unhandled exception has been allowed with 0 more allowable exceptions .
2022-10-16 14:14:49 [error]: An unhandled error has occurred.
2022-10-16 14:14:49 [error]: System.ObjectDisposedException: Children cannot be cleared on a disposed drawable.
2022-10-16 14:14:49 [error]: Object name: 'Container'.
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.ClearInternal(Boolean disposeChildren)
2022-10-16 14:14:49 [error]: at osu.Game.Tournament.Screens.Editors.SeedingEditorScreen.SeedingResultRow.SeedingBeatmapEditor.SeedingBeatmapRow.updatePanel()
2022-10-16 14:14:49 [error]: at osu.Game.Tournament.Screens.Editors.SeedingEditorScreen.SeedingResultRow.SeedingBeatmapEditor.SeedingBeatmapRow.<load>b__12_2(APIBeatmap res)
2022-10-16 14:14:49 [error]: at osu.Game.Online.API.APIRequest`1.<.ctor>b__8_0() in /Users/dean/Projects/osu/osu.Game/Online/API/APIRequest.cs:line 38
2022-10-16 14:14:49 [error]: at osu.Game.Online.API.APIRequest.<TriggerSuccess>b__24_0() in /Users/dean/Projects/osu/osu.Game/Online/API/APIRequest.cs:line 164
2022-10-16 14:14:49 [error]: at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
2022-10-16 14:14:49 [error]: at osu.Framework.Threading.Scheduler.Update()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Drawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Platform.GameHost.UpdateFrame()
2022-10-16 14:14:49 [error]: at osu.Framework.Threading.GameThread.processFrame()
2022-10-16 14:14:49 [verbose]: Unhandled exception has been denied .
2022-10-16 14:14:49 [error]: An unhandled error has occurred.
2022-10-16 14:14:49 [error]: System.ObjectDisposedException: Children cannot be cleared on a disposed drawable.
2022-10-16 14:14:49 [error]: Object name: 'Container'.
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.ClearInternal(Boolean disposeChildren)
2022-10-16 14:14:49 [error]: at osu.Game.Tournament.Screens.Editors.SeedingEditorScreen.SeedingResultRow.SeedingBeatmapEditor.SeedingBeatmapRow.updatePanel()
2022-10-16 14:14:49 [error]: at osu.Game.Tournament.Screens.Editors.SeedingEditorScreen.SeedingResultRow.SeedingBeatmapEditor.SeedingBeatmapRow.<load>b__12_2(APIBeatmap res)
2022-10-16 14:14:49 [error]: at osu.Game.Online.API.APIRequest`1.<.ctor>b__8_0() in /Users/dean/Projects/osu/osu.Game/Online/API/APIRequest.cs:line 38
2022-10-16 14:14:49 [error]: at osu.Game.Online.API.APIRequest.<TriggerSuccess>b__24_0() in /Users/dean/Projects/osu/osu.Game/Online/API/APIRequest.cs:line 164
2022-10-16 14:14:49 [error]: at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
2022-10-16 14:14:49 [error]: at osu.Framework.Threading.Scheduler.Update()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Drawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2022-10-16 14:14:49 [error]: at osu.Framework.Platform.GameHost.UpdateFrame()
2022-10-16 14:14:49 [error]: at osu.Framework.Threading.GameThread.processFrame()
2022-10-16 14:14:49 [verbose]: Unhandled exception has been denied .
2022-10-16 14:14:54 [verbose]: Host execution state changed to Stopping
2022-10-16 14:14:59 [verbose]: Host execution state changed to Stopped
```` | reli | client crashed after editing qualifier result type crash to desktop bug description the game will be crushed if you quit edit seeding result fast enough by opening an incomplete team the game version is a special version given on but i guess the latest stable lazer version have same error steps to reproduce edit braket json under the tournament s file folder and add an incomplete object to the teams array with only the necessary attributes contained the following object is an example as you can see missing beatmapinfo and other things not necessary json fullname myon flagname acronym seedingresults beatmaps id score seed mod sv seed beatmaps id score seed id score seed id score seed mod rc seed beatmaps id score seed id score seed mod ln seed beatmaps id score seed id score seed mod hb seed seed lastyearplacing averagerank players id username myon country code cn rank coverurl open lazer tournament client click team editor select any team with incomplete information and click edit seeding results fast click back before information is downloaded and completed error information rasing to the bottom and the game is crushed if you waited until the information was completely downloaded or opened a finalized team the game won t crush as the gif shows click teams called riemann won t be caused crushed but myon dose so another experiment shows if waited long enough myon is also not crushed screenshots or videos version special version mentioned next to logs unnecessary logs will be omitted an unhandled error has occurred system objectdisposedexception children cannot be cleared on a disposed drawable object name container at osu framework graphics containers compositedrawable clearinternal boolean disposechildren at osu game tournament screens editors seedingeditorscreen seedingresultrow seedingbeatmapeditor seedingbeatmaprow updatepanel at osu game tournament screens editors seedingeditorscreen seedingresultrow seedingbeatmapeditor seedingbeatmaprow b 
apibeatmap res at osu game online api apirequest b in users dean projects osu osu game online api apirequest cs line at osu game online api apirequest b in users dean projects osu osu game online api apirequest cs line at osu framework threading scheduleddelegate runtaskinternal at osu framework threading scheduler update at osu framework graphics drawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework platform gamehost updateframe at osu framework threading gamethread processframe unhandled exception has been allowed with more allowable exceptions an unhandled error has occurred system objectdisposedexception children cannot be cleared on a disposed drawable object name container at osu framework graphics containers compositedrawable clearinternal boolean disposechildren at osu game tournament screens editors seedingeditorscreen seedingresultrow seedingbeatmapeditor seedingbeatmaprow updatepanel at osu game tournament screens editors seedingeditorscreen seedingresultrow seedingbeatmapeditor seedingbeatmaprow b apibeatmap res at osu game online api apirequest b in users dean projects osu osu game online api apirequest cs line at osu game online api apirequest b in users dean projects osu osu game online api apirequest cs line at osu framework threading scheduleddelegate runtaskinternal at osu framework threading scheduler update at osu framework graphics drawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable 
updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework platform gamehost updateframe at osu framework threading gamethread processframe unhandled exception has been denied an unhandled error has occurred system objectdisposedexception children cannot be cleared on a disposed drawable object name container at osu framework graphics containers compositedrawable clearinternal boolean disposechildren at osu game tournament screens editors seedingeditorscreen seedingresultrow seedingbeatmapeditor seedingbeatmaprow updatepanel at osu game tournament screens editors seedingeditorscreen seedingresultrow seedingbeatmapeditor seedingbeatmaprow b apibeatmap res at osu game online api apirequest b in users dean projects osu osu game online api apirequest cs line at osu game online api apirequest b in users dean projects osu osu game online api apirequest cs line at osu framework threading scheduleddelegate runtaskinternal at osu framework threading scheduler update at osu framework graphics drawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework platform gamehost updateframe at osu framework threading gamethread processframe unhandled exception has been denied host execution state changed to stopping host execution state changed to stopped | 1 |
692,583 | 23,741,266,255 | IssuesEvent | 2022-08-31 12:39:00 | StackExchange/dnscontrol | https://api.github.com/repos/StackExchange/dnscontrol | closed | Providers should lazy-authenticate | Type: Enhancement Priority: p4 - Lowest | Some providers don't authenticate until needed. Others authenticate on initialization (newPROVIDER()).
Neither is better. Users appreciate early authentication because it validates the credentials early, giving feedback right away if they have invalid creds. However by doing it that way, many providers authenticate once for each domain, which is slow and wasteful.
What would be better is if providers authenticate as late as possible, but we add some kind of "authcheck" subcommand that verifies that authentication is working for all providers in use. | 1.0 | Providers should lazy-authenticate - Some providers don't authenticate until needed. Others authenticate on initialization (newPROVIDER()).
Neither is better. Users appreciate early authentication because it validates the credentials early, giving feedback right away if they have invalid creds. However by doing it that way, many providers authenticate once for each domain, which is slow and wasteful.
What would be better is if providers authenticate as late as possible, but we add some kind of "authcheck" subcommand that verifies that authentication is working for all providers in use. | non_reli | providers should lazy authenticate some providers don t authenticate until needed others authenticate on initialization newprovider neither is better users appreciate early authentication because it validates the credentials early giving feedback right away if they have invalid creds however by doing it that way many providers authenticate once for each domain which is slow and wasteful what would be better is if providers authenticate as late as possible but we add some kind of authcheck subcommand that verifies that authentication is working for all providers in use | 0 |
1,360 | 3,918,683,899 | IssuesEvent | 2016-04-21 13:26:30 | bazelbuild/bazel | https://api.github.com/repos/bazelbuild/bazel | closed | April release | category: release / binary P1 type: process | I'll try to create the candidate today, from the release candidate in Google:
mainline: 759bbfedbd8acd1324211d68b69e302478428e32
cherry-picks:
- 1250fdac4c7769cfa200af8b4f9b061024356fea
- ba8700ee63efe26c1a09d288129ced18a265ff89
- Rollback of https://bazel-review.googlesource.com/#/c/3220/ | 1.0 | April release - I'll try to create the candidate today, from the release candidate in Google:
mainline: 759bbfedbd8acd1324211d68b69e302478428e32
cherry-picks:
- 1250fdac4c7769cfa200af8b4f9b061024356fea
- ba8700ee63efe26c1a09d288129ced18a265ff89
- Rollback of https://bazel-review.googlesource.com/#/c/3220/ | non_reli | april release i ll try to create the candidate today from the release candidate in google mainline cherry picks rollback of | 0 |
69,615 | 13,300,087,903 | IssuesEvent | 2020-08-25 10:46:30 | jaa0124/iris_classifier | https://api.github.com/repos/jaa0124/iris_classifier | closed | Probar modelo VGG-16 | code | This model comes included in the Keras library and will be given the raw, unprocessed samples as input.
[VGG-16](https://keras.io/api/applications/vgg/#vgg16-function) | 1.0 | Probar modelo VGG-16 - This model comes included in the Keras library and will be given the raw, unprocessed samples as input.
[VGG-16](https://keras.io/api/applications/vgg/#vgg16-function) | non_reli | probar modelo vgg este modelo viene incluído en la librería de keras y se le proporcionará como input las muestras sin tratar | 0 |
2,212 | 24,188,478,980 | IssuesEvent | 2022-09-23 15:13:07 | ppy/osu | https://api.github.com/repos/ppy/osu | closed | Crash when creating multiplayer room | type:reliability | ### Type
Crash to desktop
### Bug description
When I clicked the Create button the game crashed to desktop leaving this in stdout:
```
$ ~/osu.AppImage
Unhandled exception. osu.Framework.Graphics.Drawable+InvalidThreadForMutationException: Cannot mutate the Transforms on a Loaded Drawable while not on the update thread. Consider using Schedule to schedule the mutation operation.
at osu.Framework.Graphics.Drawable.EnsureMutationAllowed(String action)
at osu.Framework.Graphics.Transforms.Transformable.AddTransform(Transform transform, Nullable`1 customTransformID)
at osu.Framework.Graphics.TransformableExtensions.TransformTo[TThis](TThis t, Transform transform)
at osu.Framework.Graphics.TransformableExtensions.FadeIn[T](T drawable, Double duration, Easing easing)
at osu.Game.Screens.OnlinePlay.Multiplayer.Match.MultiplayerMatchSettingsOverlay.MatchSettings.onError(String text) in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Screens/OnlinePlay/Multiplayer/Match/MultiplayerMatchSettingsOverlay.cs:line 454
at osu.Game.Screens.OnlinePlay.Components.RoomManager.<>c__DisplayClass15_0.<CreateRoom>b__1(Exception exception) in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Screens/OnlinePlay/Components/RoomManager.cs:line 65
at osu.Game.Online.API.APIRequest.TriggerFailure(Exception e) in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIRequest.cs:line 181
at osu.Game.Online.API.APIRequest.Fail(Exception e) in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIRequest.cs:line 216
at osu.Game.Online.API.APIAccess.flushQueue(Boolean failOldRequests) in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIAccess.cs:line 451
at osu.Game.Online.API.APIAccess.handleFailure() in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIAccess.cs:line 420
at osu.Game.Online.API.APIAccess.handleRequest(APIRequest req) in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIAccess.cs:line 377
at osu.Game.Online.API.APIAccess.processQueuedRequests() in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIAccess.cs:line 170
at osu.Game.Online.API.APIAccess.run() in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIAccess.cs:line 149
at System.Threading.Thread.StartHelper.Callback(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.Thread.StartCallback()
```
After restarting the game it didn't happen again.
### Screenshots or videos
_No response_
### Version
2022.911.0
### Logs
[database.log](https://github.com/ppy/osu/files/9631383/database.log)
[network.log](https://github.com/ppy/osu/files/9631386/network.log)
[performance.log](https://github.com/ppy/osu/files/9631387/performance.log)
[runtime.log](https://github.com/ppy/osu/files/9631388/runtime.log)
| True | Crash when creating multiplayer room - ### Type
Crash to desktop
### Bug description
When I clicked the Create button the game crashed to desktop leaving this in stdout:
```
$ ~/osu.AppImage
Unhandled exception. osu.Framework.Graphics.Drawable+InvalidThreadForMutationException: Cannot mutate the Transforms on a Loaded Drawable while not on the update thread. Consider using Schedule to schedule the mutation operation.
at osu.Framework.Graphics.Drawable.EnsureMutationAllowed(String action)
at osu.Framework.Graphics.Transforms.Transformable.AddTransform(Transform transform, Nullable`1 customTransformID)
at osu.Framework.Graphics.TransformableExtensions.TransformTo[TThis](TThis t, Transform transform)
at osu.Framework.Graphics.TransformableExtensions.FadeIn[T](T drawable, Double duration, Easing easing)
at osu.Game.Screens.OnlinePlay.Multiplayer.Match.MultiplayerMatchSettingsOverlay.MatchSettings.onError(String text) in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Screens/OnlinePlay/Multiplayer/Match/MultiplayerMatchSettingsOverlay.cs:line 454
at osu.Game.Screens.OnlinePlay.Components.RoomManager.<>c__DisplayClass15_0.<CreateRoom>b__1(Exception exception) in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Screens/OnlinePlay/Components/RoomManager.cs:line 65
at osu.Game.Online.API.APIRequest.TriggerFailure(Exception e) in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIRequest.cs:line 181
at osu.Game.Online.API.APIRequest.Fail(Exception e) in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIRequest.cs:line 216
at osu.Game.Online.API.APIAccess.flushQueue(Boolean failOldRequests) in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIAccess.cs:line 451
at osu.Game.Online.API.APIAccess.handleFailure() in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIAccess.cs:line 420
at osu.Game.Online.API.APIAccess.handleRequest(APIRequest req) in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIAccess.cs:line 377
at osu.Game.Online.API.APIAccess.processQueuedRequests() in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIAccess.cs:line 170
at osu.Game.Online.API.APIAccess.run() in /var/lib/buildkite-agent/builds/debian-gnu-linux-vm-1/ppy/osu/osu.Game/Online/API/APIAccess.cs:line 149
at System.Threading.Thread.StartHelper.Callback(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.Thread.StartCallback()
```
After restarting the game it didn't happen again.
### Screenshots or videos
_No response_
### Version
2022.911.0
### Logs
[database.log](https://github.com/ppy/osu/files/9631383/database.log)
[network.log](https://github.com/ppy/osu/files/9631386/network.log)
[performance.log](https://github.com/ppy/osu/files/9631387/performance.log)
[runtime.log](https://github.com/ppy/osu/files/9631388/runtime.log)
| reli | crash when creating multiplayer room type crash to desktop bug description when i clicked the create button the game crashed to desktop leaving this in stdout osu appimage unhandled exception osu framework graphics drawable invalidthreadformutationexception cannot mutate the transforms on a loaded drawable while not on the update thread consider using schedule to schedule the mutation operation at osu framework graphics drawable ensuremutationallowed string action at osu framework graphics transforms transformable addtransform transform transform nullable customtransformid at osu framework graphics transformableextensions transformto tthis t transform transform at osu framework graphics transformableextensions fadein t drawable double duration easing easing at osu game screens onlineplay multiplayer match multiplayermatchsettingsoverlay matchsettings onerror string text in var lib buildkite agent builds debian gnu linux vm ppy osu osu game screens onlineplay multiplayer match multiplayermatchsettingsoverlay cs line at osu game screens onlineplay components roommanager c b exception exception in var lib buildkite agent builds debian gnu linux vm ppy osu osu game screens onlineplay components roommanager cs line at osu game online api apirequest triggerfailure exception e in var lib buildkite agent builds debian gnu linux vm ppy osu osu game online api apirequest cs line at osu game online api apirequest fail exception e in var lib buildkite agent builds debian gnu linux vm ppy osu osu game online api apirequest cs line at osu game online api apiaccess flushqueue boolean failoldrequests in var lib buildkite agent builds debian gnu linux vm ppy osu osu game online api apiaccess cs line at osu game online api apiaccess handlefailure in var lib buildkite agent builds debian gnu linux vm ppy osu osu game online api apiaccess cs line at osu game online api apiaccess handlerequest apirequest req in var lib buildkite agent builds debian gnu linux vm ppy osu osu 
game online api apiaccess cs line at osu game online api apiaccess processqueuedrequests in var lib buildkite agent builds debian gnu linux vm ppy osu osu game online api apiaccess cs line at osu game online api apiaccess run in var lib buildkite agent builds debian gnu linux vm ppy osu osu game online api apiaccess cs line at system threading thread starthelper callback object state at system threading executioncontext runinternal executioncontext executioncontext contextcallback callback object state end of stack trace from previous location at system threading executioncontext runinternal executioncontext executioncontext contextcallback callback object state at system threading thread startcallback after restarting the game it didn t happen again screenshots or videos no response version logs | 1 |
249,833 | 18,858,241,421 | IssuesEvent | 2021-11-12 09:32:35 | nvbinh15/pe | https://api.github.com/repos/nvbinh15/pe | opened | [DG] Opt block UML error | type.DocumentationBug severity.Low | Page 8 DG, the return arrow and the activation bar of `EmployeeUI` should be inside the `opt` block. Same issues in the other part of the same diagram (`opt` and `alt` blocks)


<!--session: 1636704852860-870ae238-0500-474a-99ef-69cbd59079c3-->
<!--Version: Web v3.4.1--> | 1.0 | [DG] Opt block UML error - Page 8 DG, the return arrow and the activation bar of `EmployeeUI` should be inside the `opt` block. Same issues in the other part of the same diagram (`opt` and `alt` blocks)


<!--session: 1636704852860-870ae238-0500-474a-99ef-69cbd59079c3-->
<!--Version: Web v3.4.1--> | non_reli | opt block uml error page dg the return arrow and the activation bar of employeeui should be inside the opt block same issues in the other part of the same diagram opt and alt blocks | 0 |
979 | 11,943,547,717 | IssuesEvent | 2020-04-02 23:36:05 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Ternary operator bool?someVoid():default gives "Failed to emit module". | Area-Compilers Bug Tenet-Reliability | **Version Used**:
DotNetCore, any version. Not sure about other frameworks.
**Steps to Reproduce**:
1. write code:
```
class Program
{
static void Main()
{
var someBool = new System.Random().Next(0, 2) == 1;
_ = someBool ? SomeVoid() : default;
}
public static void SomeVoid()
{
return;
}
}
```
2. Observe that, at least in Visual Studio, the code looks okay. I think Visual Studio uses Roslyn to analyze, so that's why I'm posting this issue here.
3. Try to build.
**Expected Behavior**:
Visual Studio should show that the code is unacceptable, as a value of type 'void' may not be assigned.
**Actual Behavior**:
Building fails with message "Failed to emit module '...'". | True | Ternary operator bool?someVoid():default gives "Failed to emit module". - **Version Used**:
DotNetCore, any version. Not sure about other frameworks.
**Steps to Reproduce**:
1. write code:
```
class Program
{
static void Main()
{
var someBool = new System.Random().Next(0, 2) == 1;
_ = someBool ? SomeVoid() : default;
}
public static void SomeVoid()
{
return;
}
}
```
2. Observe that, at least in Visual Studio, the code looks okay. I think Visual Studio uses Roslyn to analyze, so that's why I'm posting this issue here.
3. Try to build.
**Expected Behavior**:
Visual Studio should show that the code is unacceptable, as a value of type 'void' may not be assigned.
**Actual Behavior**:
Building fails with message "Failed to emit module '...'". | reli | ternary operator bool somevoid default gives failed to emit module version used dotnetcore any version not sure about other frameworks steps to reproduce write code class program static void main var somebool new system random next somebool somevoid default public static void somevoid return observe that at least in visual studio the code looks okay i think visual studio uses roslyn to analyze so that s why i m posting this issue here try to build expected behavior visual studio should show that the code is unacceptable as a value of type void may not be assigned actual behavior building fails with message failed to emit module | 1 |
290 | 6,023,244,843 | IssuesEvent | 2017-06-07 23:23:25 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | opened | IDE stop responding completely while a big C# file was edited. | Area-IDE Tenet-Reliability | I edited a big C# file. The IDE got slower and slower to respond. Eventually it stopped responding altogether, and I had to kill the process. I captured a dump before killing the process; it is available upon request.
CC @Pilchie | True | IDE stop responding completely while a big C# file was edited. - I edited a big C# file. The IDE got slower and slower to respond. Eventually it stopped responding altogether, and I had to kill the process. I captured a dump before killing the process; it is available upon request.
CC @Pilchie | reli | ide stop responding completely while a big c file was edited i edited a big c file ide got slower and slower to respond eventually it stop responding at all i had to kill the process i captured a dump before killing the process it is available upon request cc pilchie | 1 |
2,765 | 27,578,378,999 | IssuesEvent | 2023-03-08 14:36:52 | cosmos/ibc-rs | https://api.github.com/repos/cosmos/ibc-rs | opened | [Transfer App] Missing token transfer APIs for `is_account_blocked` and `set_port` | A: bug A: breaking O: reliability | <!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Please also ensure that this is not a duplicate issue :)
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
## Problem Statement
Upon scanning `IBC-go`, I noticed that two essential APIs were missing from our token transfer app.
- `set_port`: We have `get_port`, but there isn't any interface to set it.
- Note: we already set the port for `MockContext` by implementing `add_port` method on it!
- `is_account_blocked` : If the receiver's account is blocked or inactive (which is possible in the Tendermint chain), then we're not validating 'TransferMsg' properly.
| True | [Transfer App] Missing token transfer APIs for `is_account_blocked` and `set_port` - <!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Please also ensure that this is not a duplicate issue :)
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
## Problem Statement
Upon scanning `IBC-go`, I noticed that two essential APIs were missing from our token transfer app.
- `set_port`: We have `get_port`, but there isn't any interface to set it.
- Note: we already set the port for `MockContext` by implementing `add_port` method on it!
- `is_account_blocked` : If the receiver's account is blocked or inactive (which is possible in the Tendermint chain), then we're not validating 'TransferMsg' properly.
| reli | missing token transfer apis for is account blocked and set port ☺ v ✰ thanks for opening an issue ✰ v before smashing the submit button please review the template v please also ensure that this is not a duplicate issue ☺ problem statment upon scanning ibc go i noticed that two essential apis were missing from in our token transfer app set port we have get port but there isn t any interface to set it note we already set the port for mockcontext by implementing add port method on it is account blocked if the receiver s account is blocked or inactive which is possible in the tendermint chain then we re not validating transfermsg properly | 1 |
884 | 11,424,905,668 | IssuesEvent | 2020-02-03 18:43:48 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Visual Studio crashes when adding a parameter to a function | Area-IDE Bug Developer Community IDE-IntelliSense Tenet-Reliability | _This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/content/problem/854342/visual-studio-crashes-when-adding-a-parameter-to-a.html)._
---
[regression] [worked-in:16.4]
I have a method with two overloads:
```
Private Function MonthToActualYearMonth(ByVal theMonth As MyEnumType) As Integer
Public Shared Function MonthToActualYearMonth(ByVal theYear As Integer, ByVal theMonth As MyEnumType) As Integer
```
And a line of code calling the wrong method:
`Dim aBla = MonthToActualYearMonth(myMonth)`
I get a squiggly red line underneath the method call with the following error (which is correct):
`Cannot refer to an instance member of a class from within a shared method or shared member initializer without an explicit instance of the class.`
I want to add the missing parameter to use the shared method instead of the private function. I add my cursor after the first parenthesis and start typing to add the missing parameter. When I finally type the "," Visual Studio crashes.
In the Event Viewer in Windows I find the following back:
```
Application: devenv.exe
Framework Version: v4.0.30319
Description: The application requested process termination through System.Environment.FailFast(string message).
Message: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
at System.ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument argument, ExceptionResource resource)
at System.Collections.Generic.List`1.get_Item(Int32 index)
at Microsoft.CodeAnalysis.SignatureHelp.AbstractSignatureHelpProvider.Filter(IList`1 items, IEnumerable`1 parameterNames, Nullable`1 selectedItem)
at Microsoft.CodeAnalysis.SignatureHelp.AbstractSignatureHelpProvider.CreateSignatureHelpItems(IList`1 items, TextSpan applicableSpan, SignatureHelpState state, Nullable`1 selectedItem)
at Microsoft.CodeAnalysis.VisualBasic.SignatureHelp.InvocationExpressionSignatureHelpProvider.VB$StateMachine_16_GetItemsWorkerAsync.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)
at Microsoft.CodeAnalysis.SignatureHelp.AbstractSignatureHelpProvider.<GetItemsAsync>d__16.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller.Session.<ComputeItemsAsync>d__9.MoveNext()
Stack:
at System.Environment.FailFast(System.String, System.Exception)
at Microsoft.CodeAnalysis.FailFast.OnFatalException(System.Exception)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception, System.Action`1<System.Exception>)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.ReportUnlessCanceled(System.Exception)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session+<ComputeItemsAsync>d__9.MoveNext()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(System.Threading.Tasks.Task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session+<ComputeItemsAsync>d__9.MoveNext()
at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[[System.ValueTuple`2[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]], mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].Start[[Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session+<ComputeItemsAsync>d__9, Microsoft.CodeAnalysis.EditorFeatures, Version=3.4.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]](<ComputeItemsAsync>d__9 ByRef)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session.ComputeItemsAsync(System.Collections.Immutable.ImmutableArray`1<Microsoft.CodeAnalysis.SignatureHelp.ISignatureHelpProvider>, Microsoft.VisualStudio.Text.SnapshotPoint, Microsoft.CodeAnalysis.SignatureHelp.SignatureHelpTriggerInfo, Microsoft.CodeAnalysis.Document, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session+<ComputeModelInBackgroundAsync>d__4.MoveNext()
at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].Start[[Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session+<ComputeModelInBackgroundAsync>d__4, Microsoft.CodeAnalysis.EditorFeatures, Version=3.4.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]](<ComputeModelInBackgroundAsync>d__4 ByRef)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session.ComputeModelInBackgroundAsync(Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Model, System.Collections.Immutable.ImmutableArray`1<Microsoft.CodeAnalysis.SignatureHelp.ISignatureHelpProvider>, Microsoft.VisualStudio.Text.SnapshotPoint, Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.DisconnectedBufferGraph, Microsoft.CodeAnalysis.SignatureHelp.SignatureHelpTriggerInfo, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session+<>c__DisplayClass3_0.<ComputeModel>b__0(Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Model, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.ModelComputation`1+<>c__DisplayClass17_0[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].<ChainTaskAndNotifyControllerWhenFinished>b__0(System.Threading.Tasks.Task`1<System.__Canon>)
at Roslyn.Utilities.TaskExtensions+<>c__DisplayClass15_0`2[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].<SafeContinueWithFromAsync>b__0(System.Threading.Tasks.Task)
at System.Threading.Tasks.ContinuationResultTaskFromTask`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].InnerInvoke()
at System.Threading.Tasks.Task.Execute()
at System.Threading.Tasks.Task.ExecutionContextCallback(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.Task.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()
```
---
### Original Comments
#### Visual Studio Feedback System on 12/13/2019, 02:00 AM:
<p>We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.</p>
---
### Original Solutions
(no solutions) | True | Visual Studio crashes when adding a parameter to a function - _This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/content/problem/854342/visual-studio-crashes-when-adding-a-parameter-to-a.html)._
---
[regression] [worked-in:16.4]
I have a method with two overloads:
```
Private Function MonthToActualYearMonth(ByVal theMonth As MyEnumType) As Integer
Public Shared Function MonthToActualYearMonth(ByVal theYear As Integer, ByVal theMonth As MyEnumType) As Integer
```
And a line of code calling the wrong method:
`Dim aBla = MonthToActualYearMonth(myMonth)`
I get a squiggly red line underneath the method call with the following error (which is correct):
`Cannot refer to an instance member of a class from within a shared method or shared member initializer without an explicit instance of the class.`
I want to add the missing parameter to use the shared method instead of the private function. I add my cursor after the first parenthesis and start typing to add the missing parameter. When I finally type the "," Visual Studio crashes.
In the Event Viewer in Windows I find the following back:
```
Application: devenv.exe
Framework Version: v4.0.30319
Description: The application requested process termination through System.Environment.FailFast(string message).
Message: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
at System.ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument argument, ExceptionResource resource)
at System.Collections.Generic.List`1.get_Item(Int32 index)
at Microsoft.CodeAnalysis.SignatureHelp.AbstractSignatureHelpProvider.Filter(IList`1 items, IEnumerable`1 parameterNames, Nullable`1 selectedItem)
at Microsoft.CodeAnalysis.SignatureHelp.AbstractSignatureHelpProvider.CreateSignatureHelpItems(IList`1 items, TextSpan applicableSpan, SignatureHelpState state, Nullable`1 selectedItem)
at Microsoft.CodeAnalysis.VisualBasic.SignatureHelp.InvocationExpressionSignatureHelpProvider.VB$StateMachine_16_GetItemsWorkerAsync.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)
at Microsoft.CodeAnalysis.SignatureHelp.AbstractSignatureHelpProvider.<GetItemsAsync>d__16.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller.Session.<ComputeItemsAsync>d__9.MoveNext()
Stack:
at System.Environment.FailFast(System.String, System.Exception)
at Microsoft.CodeAnalysis.FailFast.OnFatalException(System.Exception)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception, System.Action`1<System.Exception>)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.ReportUnlessCanceled(System.Exception)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session+<ComputeItemsAsync>d__9.MoveNext()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(System.Threading.Tasks.Task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session+<ComputeItemsAsync>d__9.MoveNext()
at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[[System.ValueTuple`2[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]], mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].Start[[Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session+<ComputeItemsAsync>d__9, Microsoft.CodeAnalysis.EditorFeatures, Version=3.4.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]](<ComputeItemsAsync>d__9 ByRef)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session.ComputeItemsAsync(System.Collections.Immutable.ImmutableArray`1<Microsoft.CodeAnalysis.SignatureHelp.ISignatureHelpProvider>, Microsoft.VisualStudio.Text.SnapshotPoint, Microsoft.CodeAnalysis.SignatureHelp.SignatureHelpTriggerInfo, Microsoft.CodeAnalysis.Document, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session+<ComputeModelInBackgroundAsync>d__4.MoveNext()
at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].Start[[Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session+<ComputeModelInBackgroundAsync>d__4, Microsoft.CodeAnalysis.EditorFeatures, Version=3.4.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]](<ComputeModelInBackgroundAsync>d__4 ByRef)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session.ComputeModelInBackgroundAsync(Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Model, System.Collections.Immutable.ImmutableArray`1<Microsoft.CodeAnalysis.SignatureHelp.ISignatureHelpProvider>, Microsoft.VisualStudio.Text.SnapshotPoint, Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.DisconnectedBufferGraph, Microsoft.CodeAnalysis.SignatureHelp.SignatureHelpTriggerInfo, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Controller+Session+<>c__DisplayClass3_0.<ComputeModel>b__0(Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.SignatureHelp.Model, System.Threading.CancellationToken)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.ModelComputation`1+<>c__DisplayClass17_0[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].<ChainTaskAndNotifyControllerWhenFinished>b__0(System.Threading.Tasks.Task`1<System.__Canon>)
at Roslyn.Utilities.TaskExtensions+<>c__DisplayClass15_0`2[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].<SafeContinueWithFromAsync>b__0(System.Threading.Tasks.Task)
at System.Threading.Tasks.ContinuationResultTaskFromTask`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].InnerInvoke()
at System.Threading.Tasks.Task.Execute()
at System.Threading.Tasks.Task.ExecutionContextCallback(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.Task.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()
```
---
### Original Comments
#### Visual Studio Feedback System on 12/13/2019, 02:00 AM:
<p>We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.</p>
---
### Original Solutions
(no solutions) | reli | visual studio crashes when adding a parameter to a function this issue has been moved from i have a method with two overloads private function monthtoactualyearmonth byval themonth as myenumtype as integer public shared function monthtoactualyearmonth byval theyear as integer byval themonth as myenumtype as integer and a line of code calling the wrong method dim abla monthtoactualyearmonth mymonth i get a squigly red line underneath the method call with the following error which is correct cannot refer to an instance member of a class from within a shared method or shared member initializer without an explicit instance of the class i want to add the missing parameter to use the shared method instead of the private function i add my cursor after the first parentheses and start typing to add the missing parameter when i finally type the visual studio crashes in the event viewer in windows i find the following back application devenv exe framework version description the application requested process termination through system environment failfast string message message system argumentoutofrangeexception index was out of range must be non negative and less than the size of the collection parameter name index at system throwhelper throwargumentoutofrangeexception exceptionargument argument exceptionresource resource at system collections generic list get item index at microsoft codeanalysis signaturehelp abstractsignaturehelpprovider filter ilist items ienumerable parameternames nullable selecteditem at microsoft codeanalysis signaturehelp abstractsignaturehelpprovider createsignaturehelpitems ilist items textspan applicablespan signaturehelpstate state nullable selecteditem at microsoft codeanalysis visualbasic signaturehelp invocationexpressionsignaturehelpprovider vb statemachine getitemsworkerasync movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess 
task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices taskawaiter validateend task task at microsoft codeanalysis signaturehelp abstractsignaturehelpprovider d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft codeanalysis editor implementation intellisense signaturehelp controller session d movenext stack at system environment failfast system string system exception at microsoft codeanalysis failfast onfatalexception system exception at microsoft codeanalysis errorreporting fatalerror report system exception system action at microsoft codeanalysis errorreporting fatalerror reportunlesscanceled system exception at microsoft codeanalysis editor implementation intellisense signaturehelp controller session d movenext at system runtime compilerservices taskawaiter throwfornonsuccess system threading tasks task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification system threading tasks task at microsoft codeanalysis editor implementation intellisense signaturehelp controller session d movenext at system runtime compilerservices asynctaskmethodbuilder mscorlib version culture neutral publickeytoken start d byref at microsoft codeanalysis editor implementation intellisense signaturehelp controller session computeitemsasync system collections immutable immutablearray microsoft visualstudio text snapshotpoint microsoft codeanalysis signaturehelp signaturehelptriggerinfo microsoft codeanalysis document system threading cancellationtoken at microsoft codeanalysis editor implementation intellisense signaturehelp controller session d movenext at system runtime compilerservices asynctaskmethodbuilder start d byref at microsoft codeanalysis editor 
implementation intellisense signaturehelp controller session computemodelinbackgroundasync microsoft codeanalysis editor implementation intellisense signaturehelp model system collections immutable immutablearray microsoft visualstudio text snapshotpoint microsoft codeanalysis editor implementation intellisense disconnectedbuffergraph microsoft codeanalysis signaturehelp signaturehelptriggerinfo system threading cancellationtoken at microsoft codeanalysis editor implementation intellisense signaturehelp controller session c b microsoft codeanalysis editor implementation intellisense signaturehelp model system threading cancellationtoken at microsoft codeanalysis editor implementation intellisense modelcomputation c b system threading tasks task at roslyn utilities taskextensions c b system threading tasks task at system threading tasks continuationresulttaskfromtask innerinvoke at system threading tasks task execute at system threading tasks task executioncontextcallback system object at system threading executioncontext runinternal system threading executioncontext system threading contextcallback system object boolean at system threading executioncontext run system threading executioncontext system threading contextcallback system object boolean at system threading tasks task executewiththreadlocal system threading tasks task byref at system threading tasks task executeentry boolean at system threading tasks task system threading ithreadpoolworkitem executeworkitem at system threading threadpoolworkqueue dispatch at system threading threadpoolwaitcallback performwaitcallback original comments visual studio feedback system on am we have directed your feedback to the appropriate engineering team for further evaluation the team will review the feedback and notify you about the next steps original solutions no solutions | 1 |
56,789 | 8,125,078,493 | IssuesEvent | 2018-08-16 19:42:21 | att/ast | https://api.github.com/repos/att/ast | closed | ksh man page again - questionable doubled description -- | documentation | **Description of problem:**
There are 2 `set -B` description lines. I do not use brace expansion so I do not know which, if either, is the correct one.
**Ksh version:**
Version A 93v-1328-g2e6acdf2
**How reproducible:**
Always.
**Steps to reproduce:**
1. man ksh
2. Skip to the description of the `set` command.
3.
**Actual results:**
` set`
` ...`
` -B description 1`
` -B description 2`
**Expected results:**
Only 1 `-B`
**Additional info:**
Ain't got none (:-}). | 1.0 | ksh man page again - questionable doubled description -- - **Description of problem:**
There are 2 `set -B` description lines. I do not use brace expansion so I do not know which, if either, is the correct one.
**Ksh version:**
Version A 93v-1328-g2e6acdf2
**How reproducible:**
Always.
**Steps to reproduce:**
1. man ksh
2. Skip to the description of the `set` command.
3.
**Actual results:**
` set`
` ...`
` -B description 1`
` -B description 2`
**Expected results:**
Only 1 `-B`
**Additional info:**
Ain't got none (:-}). | non_reli | ksh man page again questionable doubled description description of problem there are set b description lines i do not use brace expansion so i do not know which if either is the correct one ksh version version a how reproducible always steps to reproduce man ksh skip to the description of the set command actual results set b description b description expected results only b additional info ain t got none | 0 |
771 | 10,476,292,521 | IssuesEvent | 2019-09-23 18:15:49 | microsoft/BotFramework-DirectLineJS | https://api.github.com/repos/microsoft/BotFramework-DirectLineJS | opened | Unhappy path: resume interruption, due to network error | 0 Reliability 0 Streaming Extensions | 1. Start a conversation
1. Fake a network interruption
1. Resume the network
Make sure the bot and client and continue to communicate both ways without significant delays. | True | Unhappy path: resume interruption, due to network error - 1. Start a conversation
1. Fake a network interruption
1. Resume the network
Make sure the bot and client and continue to communicate both ways without significant delays. | reli | unhappy path resume interruption due to network error start a conversation fake a network interruption resume the network make sure the bot and client and continue to communicate both ways without significant delays | 1 |
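The resume-after-interruption path exercised by the test case above is, in most clients, a retry loop with exponential backoff and jitter. The sketch below is generic Python; every name in it (`retry_with_backoff`, `flaky_send`) is illustrative and is not part of DirectLineJS.

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry `operation` after transient network errors, doubling the delay
    each attempt and adding jitter so reconnecting clients do not stampede."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            sleep(delay)

# Simulate a network that fails twice, then recovers.
calls = {"n": 0}
def flaky_send():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("network interrupted")
    return "delivered"
```

The injectable `sleep` parameter is a common design choice so the backoff can be tested without real delays.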
5,753 | 3,985,389,819 | IssuesEvent | 2016-05-07 21:03:23 | tgstation/-tg-station | https://api.github.com/repos/tgstation/-tg-station | closed | Revheads shouldn't instalose if they disconnect | Feature Request Not a bug Usability | >inb4 feedback forum
here is why:
1. only 3 heads of staff, so the game scales it down to there only being 1 revhead
2. said revhead's internet decides to shit itself 40 seconds into the round
3. the round instantly ends
4. the revhead reconnects and is met with anger/a ban | True | Revheads shouldn't instalose if they disconnect - >inb4 feedback forum
here is why:
1. only 3 heads of staff, so the game scales it down to there only being 1 revhead
2. said revhead's internet decides to shit itself 40 seconds into the round
3. the round instantly ends
4. the revhead reconnects and is met with anger/a ban | non_reli | revheads shouldn t instalose if they disconnect feedback forum here is why only heads of staff so the game scales it down to there only being revhead said revhead s internet decides to shit itself seconds into the round the round instantly ends the revhead reconnects and is met with anger a ban | 0 |
751 | 10,347,988,198 | IssuesEvent | 2019-09-04 18:42:07 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | closed | Could not load file or assembly ... The object already exists. (0x80071392) | area-AssemblyLoader bug reliability | We're running .NET Core 3.0.0-preview7-27912-14 in SQL Server test infra and we're seeing a puzzling assembly load error. I've never seen this before and neither has the internet it looks like from some quick searches.
It appears to be some sort of race in the assembly loader. We've had 92 hits over the past 20 days. In the same period we ran this command ~3 million times, so a hit rate of 0.003 %.
This is the error:
```
Unhandled exception. System.IO.FileLoadException: Could not load file or assembly 'System.Threading.Tasks, Version=4.1.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. The object already exists. (0x80071392)
File name: 'System.Threading.Tasks, Version=4.1.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
at Vfs.Common.CommandLine.CommandParser.ExecuteAndHandle(String programName, String[] args, ICommand[] commands)
at Vfs.Driver.Program.Main(String[] args)
```
We're going to try updating to preview8 to see if that solves it and report back, but wanted to get this logged since I didn't see a fixed bug relating to this.
Given the low occurrence rate I don't think we can expect a repro, but if there are logging/tracing we can turn on to collect info, we can do so. | True | Could not load file or assembly ... The object already exists. (0x80071392) - We're running .NET Core 3.0.0-preview7-27912-14 in SQL Server test infra and we're seeing a puzzling assembly load error. I've never seen this before and neither has the internet it looks like from some quick searches.
It appears to be some sort of race in the assembly loader. We've had 92 hits over the past 20 days. In the same period we ran this command ~3 million times, so a hit rate of 0.003 %.
This is the error:
```
Unhandled exception. System.IO.FileLoadException: Could not load file or assembly 'System.Threading.Tasks, Version=4.1.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. The object already exists. (0x80071392)
File name: 'System.Threading.Tasks, Version=4.1.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
at Vfs.Common.CommandLine.CommandParser.ExecuteAndHandle(String programName, String[] args, ICommand[] commands)
at Vfs.Driver.Program.Main(String[] args)
```
We're going to try updating to preview8 to see if that solves it and report back, but wanted to get this logged since I didn't see a fixed bug relating to this.
Given the low occurrence rate I don't think we can expect a repro, but if there are logging/tracing we can turn on to collect info, we can do so. | reli | could not load file or assembly the object already exists we re running net core in sql server test infra and we re seeing a puzzling assembly load error i ve never seen this before and neither has the internet it looks like from some quick searches it appears to be some sort of race in the assembly loader we ve had hits over the past days in the same period we ran this command million times so a hit rate of this is the error unhandled exception system io fileloadexception could not load file or assembly system threading tasks version culture neutral publickeytoken the object already exists file name system threading tasks version culture neutral publickeytoken at vfs common commandline commandparser executeandhandle string programname string args icommand commands at vfs driver program main string args we re going to try updating to to see if that solves it and report back but wanted to get this logged since i didn t see a fixed bug relating to this given the low occurrence rate i don t think we can expect a repro but if there are logging tracing we can turn on to collect info we can do so | 1 |
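The "object already exists" failure reported above is consistent with a check-then-insert race in a loader cache: two threads both see the module as missing and both try to register it. As a generic illustration only (not the CLR's actual loader), guarding the cache with a lock makes the load once-only:

```python
import threading

class ModuleCache:
    """Load each named module exactly once, even under concurrent access.
    An unguarded check-then-insert would risk double registration, which is
    the kind of race the issue above suspects."""
    def __init__(self, loader):
        self._loader = loader
        self._lock = threading.Lock()
        self._modules = {}

    def get(self, name):
        with self._lock:
            # Check and insert happen under the same lock, so only one
            # thread ever runs the loader for a given name.
            if name not in self._modules:
                self._modules[name] = self._loader(name)
            return self._modules[name]

loads = []
cache = ModuleCache(lambda name: loads.append(name) or f"<module {name}>")

threads = [threading.Thread(target=cache.get, args=("System.Threading.Tasks",))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```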
397 | 7,294,595,389 | IssuesEvent | 2018-02-26 01:09:03 | dotnet/project-system | https://api.github.com/repos/dotnet/project-system | reopened | Visual studio 15.6 preview 3 crashes on solution load | Area-New-Project-System Bug Feature - Up-to-date Resolution-No Repro Tenet-Reliability Urgency-Soon | 15.6 crashes on load of this solution
https://github.com/btrepp/vspreviewcrash
Repro steps
Clone repo
paket install
Open solution in 15.6 preview 3
Expected behavior
Solution loads like in 15.5.6
Actual behavior
A popup with "a non fatal error has occured" appears, but then VS locks up
Known workarounds
None
Related information
Operating system is windows 10, Machine had brand new windows installed and then Visual studio installed.
Original Issue: __https://github.com/Microsoft/visualfsharp/issues/4300__
I guess this is CPS or Project System, at least looking at the stacks involved.
| True | Visual studio 15.6 preview 3 crashes on solution load - 15.6 crashes on load of this solution
https://github.com/btrepp/vspreviewcrash
Repro steps
Clone repo
paket install
Open solution in 15.6 preview 3
Expected behavior
Solution loads like in 15.5.6
Actual behavior
A popup with "a non fatal error has occured" appears, but then VS locks up
Known workarounds
None
Related information
Operating system is windows 10, Machine had brand new windows installed and then Visual studio installed.
Original Issue: __https://github.com/Microsoft/visualfsharp/issues/4300__
I guess this is CPS or Project System, at least looking at the stacks involved.
| reli | visual studio preview crashes on solution load crashes on load of this solution repro steps clone repo paket install open solution in preview expected behavior solution loads like in actual behavior a popup with a non fatal error has occured appears but then vs locks up known workarounds none related information operating system is windows machine had brand new windows installed and then visual studio installed original issue i guess this is cps or project system at least looking at the stacks involved | 1 |
586,461 | 17,578,155,596 | IssuesEvent | 2021-08-16 00:53:58 | pterodactyl/panel | https://api.github.com/repos/pterodactyl/panel | closed | Re-installing a server bypasses the "file_denylist" | bug high priority | ### Is there an existing issue for this?
- [X] I have searched the existing issues before opening this issue.
### Current Behavior
Files added in the denylist can be edited, deleted and renamed by the user after a server reinstall
### Expected Behavior
The files should not be able to be modified by anyone, even after a reinstall
### Steps to Reproduce
Step 1 Add some files to the `file_denylist` of the egg
Step 2 Create a server with that egg
Step 3 Re-install the server from settings section
Step 4 Try to edit the file added in the deny list
### Panel Version
1.5.1
### Wings Version
1.4.7
### Error Logs
```bash
https://ptero.co/fikuxavojo
http://bin.ptdl.co/8bcqt
```
| 1.0 | Re-installing a server bypasses the "file_denylist" - ### Is there an existing issue for this?
- [X] I have searched the existing issues before opening this issue.
### Current Behavior
Files added in the denylist can be edited, deleted and renamed by the user after a server reinstall
### Expected Behavior
The files should not be able to be modified by anyone, even after a reinstall
### Steps to Reproduce
Step 1 Add some files to the `file_denylist` of the egg
Step 2 Create a server with that egg
Step 3 Re-install the server from settings section
Step 4 Try to edit the file added in the deny list
### Panel Version
1.5.1
### Wings Version
1.4.7
### Error Logs
```bash
https://ptero.co/fikuxavojo
http://bin.ptdl.co/8bcqt
```
| non_reli | re installing a server bypasses the file denylist is there an existing issue for this i have searched the existing issues before opening this issue current behavior files added in the denylist can be edited deleted and renamed by the user after a server reinstall expected behavior the files should not be able to be modified by anyone even after a reinstall steps to reproduce step add some files to the file denylist of the egg step create a server with that egg step re install the server from settings section step try to edit the file added in the deny list panel version wings version error logs bash | 0 |
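The expected behaviour in the record above, a deny list that keeps holding through any server lifecycle event, amounts to consulting the egg's patterns on every file request rather than caching a decision at install time. A minimal path-matching sketch (illustrative only; `is_denied` is not part of Wings):

```python
from fnmatch import fnmatch
from pathlib import PurePosixPath

def is_denied(path, denylist):
    """Return True if `path`, or any of its parent directories, matches a
    denylist pattern. Enforcing this on every request means a reinstall
    cannot reset it."""
    parts = PurePosixPath(path.lstrip("/"))
    # Check the full path plus each ancestor directory against the patterns.
    candidates = [str(parts)] + [str(p) for p in parts.parents if str(p) != "."]
    return any(fnmatch(c, pattern) for c in candidates for pattern in denylist)
```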
152,817 | 12,127,164,804 | IssuesEvent | 2020-04-22 18:15:16 | aces/Loris | https://api.github.com/repos/aces/Loris | closed | [MRI Violations] Modifications to mri_protocol table are not saved | 23.0.0-testing Bug | **Describe the bug**
I access the MRI violations module as a user that has the permission to edit the MRI protocol. I make a valid edit of the MRI protocol table and the system asks if I am sure I want to proceed. I confirm that I want to and my modification seems to be saved but if I access another page (say, Access Profile) and go back to the MRI protocol edit page afterwards I see that it was not.
I think this happens no matter what column of the `mri_protocol` table you try to edit. | 1.0 | [MRI Violations] Modifications to mri_protocol table are not saved - **Describe the bug**
I access the MRI violations module as a user that has the permission to edit the MRI protocol. I make a valid edit of the MRI protocol table and the system asks if I am sure I want to proceed. I confirm that I want to and my modification seems to be saved but if I access another page (say, Access Profile) and go back to the MRI protocol edit page afterwards I see that it was not.
I think this happens no matter what column of the `mri_protocol` table you try to edit. | non_reli | modifications to mri protocol table are not saved describe the bug i access the mri violations module as a user that has the permission to edit the mri protocol i make a valid edit of the mri protocol table and the system asks if i am sure i want to proceed i confirm that i want to and my modification seems to be saved but if i access another page say access profile and go back to the mri protocol edit page afterwards i see that it was not i think this happens no matter what column of the mri protocol table you try to edit | 0 |
24,792 | 6,575,278,389 | IssuesEvent | 2017-09-11 15:34:33 | LukasKalbertodt/luten | https://api.github.com/repos/LukasKalbertodt/luten | closed | Preparation overview page & general settings | K-new-feature P-preparation-state S-preparation V-student W-code W-database W-design W-web | Students can:
- tick "Random partner" or specify a specific partner
- choose their preferred language (En or De)
The page also needs to show explanation what the student has to do. Additionally a "my status" box would be nice: it shows whether or not the student has done everything they have to.
These settings are then stored in the database. | 1.0 | Preparation overview page & general settings - Students can:
- tick "Random partner" or specify a specific partner
- choose their preferred language (En or De)
The page also needs to show explanation what the student has to do. Additionally a "my status" box would be nice: it shows whether or not the student has done everything they have to.
These settings are then stored in the database. | non_reli | preparation overview page general settings students can tick random partner or specify a specific partner choose their preferred language en or de the page also needs to show explanation what the student has to do additionally a my status box would be nice it shows whether or not the student has done everything they have to these settings are then stored in the database | 0 |
1,248 | 14,289,399,860 | IssuesEvent | 2020-11-23 19:11:53 | argoproj/argo | https://api.github.com/repos/argoproj/argo | closed | Better malformed resource handling | enhancement epic/reliability | # Summary
Currently, if you load an malformed resource (e.g. using `kubectl apply`) we have the following behaviour:
* The controller will ignore them malformed resource, just logging the error.
* The UI and CLI will error if you try to list the namespace the resource is in, or if you try to get the resource.
It is hard for the user to understand that they created it invalid as it is not clearly surfaced.
# Use Cases
How do we let the user know there is a problem?
* When you use your [Cluster]WorkflowTemplates - it will error with clear message (already today).
* Create a Kubernetes event (cheap).
* Add status information to the resource.
What do we do about the problem?
* Best effort un-marshall (dropping invalid fields), cheap too.
* Validating web-hook (needs 100% uptime).
* Get OpenAPI schema validation working.
---
<!-- Issue Author: Don't delete this message to encourage other users to support your issue! -->
**Message from the maintainers**:
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍. | True | Better malformed resource handling - # Summary
Currently, if you load an malformed resource (e.g. using `kubectl apply`) we have the following behaviour:
* The controller will ignore them malformed resource, just logging the error.
* The UI and CLI will error if you try to list the namespace the resource is in, or if you try to get the resource.
It is hard for the user to understand that they created it invalid as it is not clearly surfaced.
# Use Cases
How do we let the user know there is a problem?
* When you use your [Cluster]WorkflowTemplates - it will error with clear message (already today).
* Create a Kubernetes event (cheap).
* Add status information to the resource.
What do we do about the problem?
* Best effort un-marshall (dropping invalid fields), cheap too.
* Validating web-hook (needs 100% uptime).
* Get OpenAPI schema validation working.
---
<!-- Issue Author: Don't delete this message to encourage other users to support your issue! -->
**Message from the maintainers**:
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍. | reli | better malformed resource handling summary currently if you load an malformed resource e g using kubectl apply we have the following behaviour the controller will ignore them malformed resource just logging the error the ui and cli will error if you try to list the namespace the resource is in or if you try to get the resource it is hard for the user to understand that they created it invalid as it is not clearly surfaced use cases how do we let the user know there is a problem when you use your workflowtemplates it will error with clear message already today create a kubernetes event cheap add status information to the resource what do we do about the problem best effort un marshall dropping invalid fields cheap too validating web hook needs uptime get openapi schema validation working message from the maintainers impacted by this bug give it a 👍 we prioritise the issues with the most 👍 | 1 |
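Among the options listed in the issue above, "best effort un-marshall (dropping invalid fields)" can be sketched against a toy schema. The field names and types below are hypothetical and are not Argo's actual CRD schema:

```python
def best_effort_unmarshal(data, schema):
    """Keep only fields whose values match the expected type; collect the
    rest as error strings so they can be surfaced to the user (e.g. via a
    Kubernetes event) instead of silently dropping the whole resource."""
    kept, errors = {}, []
    for key, value in data.items():
        expected = schema.get(key)
        if expected is None:
            errors.append(f"unknown field: {key}")
        elif not isinstance(value, expected):
            errors.append(f"{key}: expected {expected.__name__}, got {type(value).__name__}")
        else:
            kept[key] = value
    return kept, errors

# Hypothetical schema: an entrypoint name and a parallelism limit.
schema = {"entrypoint": str, "parallelism": int}
kept, errors = best_effort_unmarshal(
    {"entrypoint": "main", "parallelism": "two", "tyop": True}, schema)
```

The returned `errors` list is what would feed the "let the user know" half of the proposal: the valid part of the resource still loads, and the problems are reported rather than only logged.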
85 | 2,504,543,906 | IssuesEvent | 2015-01-10 09:59:35 | bitcoin/bitcoin | https://api.github.com/repos/bitcoin/bitcoin | closed | configure error on OSX: No working boost sleep implementation found | Bug Build system Priority Low | Trying to compile git head on OSX 10.7.5 with ```./configure --without-qt --disable-tests --disable-debug```, it fails with:
checking whether the Boost::Chrono library is available... yes
checking for exit in -lboost_chrono-mt... yes
configure: error: No working boost sleep implementation found
I have boost installed with MacPorts:
> port installed | grep boost
boost @1.54.0_0+no_single+no_static+python27 (active) | 1.0 | configure error on OSX: No working boost sleep implementation found - Trying to compile git head on OSX 10.7.5 with ```./configure --without-qt --disable-tests --disable-debug```, it fails with:
checking whether the Boost::Chrono library is available... yes
checking for exit in -lboost_chrono-mt... yes
configure: error: No working boost sleep implementation found
I have boost installed with MacPorts:
> port installed | grep boost
boost @1.54.0_0+no_single+no_static+python27 (active) | non_reli | configure error on osx no working boost sleep implementation found trying to compile git head on osx with configure without qt disable tests disable debug it fails with checking whether the boost chrono library is available yes checking for exit in lboost chrono mt yes configure error no working boost sleep implementation found i have boost installed with macports port installed grep boost boost no single no static active | 0 |
301 | 6,199,823,117 | IssuesEvent | 2017-07-05 22:40:09 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | SigHelp crashes on conditional indexer | Area-IDE Bug Tenet-Reliability | ```C#
using System.Collections.Generic;
class C
{
void Main()
{
List<int> x;
var y = x?[$$].Count
}
}
```
Crashes invoking sighelp at $$
| True | SigHelp crashes on conditional indexer - ```C#
using System.Collections.Generic;
class C
{
void Main()
{
List<int> x;
var y = x?[$$].Count
}
}
```
Crashes invoking sighelp at $$
| reli | sighelp crashes on conditional indexer c using system collections generic class c void main list x var y x count crashes invoking sighelp at | 1 |
1,422 | 16,015,651,943 | IssuesEvent | 2021-04-20 15:41:54 | Azure/azure-sdk-for-java | https://api.github.com/repos/Azure/azure-sdk-for-java | opened | Event Hubs: Investigate Stress test results | Azure.Core Event Hubs amqp tenet-reliability | Investigate findings in the stress test results/create bugs for them if they require more investment. Dependent on #20784 | True | Event Hubs: Investigate Stress test results - Investigate findings in the stress test results/create bugs for them if they require more investment. Dependent on #20784 | reli | event hubs investigate stress test results investigate findings in the stress test results create bugs for them if they require more investment dependent on | 1 |
739 | 10,256,364,457 | IssuesEvent | 2019-08-21 17:29:39 | lookit/lookit-api | https://api.github.com/repos/lookit/lookit-api | closed | Allow families to see their videos, & add some additional details to past studies | Participant planned recruitment/engagement reliability of data collection size: 10 | *Pain Point*: Once parents participate in a study, there is no immediate confirmation that it "worked" and Lookit has their data, or any later confirmation that it has been used to do cool science. They also do not have automatic access to their own data, although they often want to see their videos; instead, a researcher has to provide it if desired, which is labor-intensive and introduces unnecessary possibilities for human error (e.g. sending the wrong child's video).
*Acceptance Criteria*:
- Parents can download any video clips from a session from the "Past Studies" view when logged in. (Nice but not required if they can actually just see the clips embedded in the browser.)
- Parents can see some basic information about past study sessions, including which child participated, status of consent coding (yes/no/pending/no video(?)), research group PI/contact.
- Parents should *not* see sessions with completedConsentFrame=false displayed
- Parents can see the "status" of a study that they participated in, as provided by the researchers - e.g., data collection, analysis, writing up results, here's a link to the paper/media article/etc.
*Implementation notes/Suggestions*:
- We've discussed adding a freeform string field to the study model, editable by researchers in the study edit view, representing the current state of the study.
- Eventually (not as part of this ticket) parents might also be able to (going forward) view any study debriefing text from "Past Studies," since this may contain individual results or condition info. Having set up an "exit survey" frame type to support withdrawal of video will make this easier, in that we'd be able to see if there's a frame of type exit-survey in the expData for this response, and if so, fetch its debriefing text.
- Eventually (not as part of this ticket) parents might also be able to view/download the consent form text. This is now stored as part of the consent form in expData. | True | Allow families to see their videos, & add some additional details to past studies - *Pain Point*: Once parents participate in a study, there is no immediate confirmation that it "worked" and Lookit has their data, or any later confirmation that it has been used to do cool science. They also do not have automatic access to their own data, although they often want to see their videos; instead, a researcher has to provide it if desired, which is labor-intensive and introduces unnecessary possibilities for human error (e.g. sending the wrong child's video).
*Acceptance Criteria*:
- Parents can download any video clips from a session from the "Past Studies" view when logged in. (Nice but not required if they can actually just see the clips embedded in the browser.)
- Parents can see some basic information about past study sessions, including which child participated, status of consent coding (yes/no/pending/no video(?)), research group PI/contact.
- Parents should *not* see sessions with completedConsentFrame=false displayed
- Parents can see the "status" of a study that they participated in, as provided by the researchers - e.g., data collection, analysis, writing up results, here's a link to the paper/media article/etc.
*Implementation notes/Suggestions*:
- We've discussed adding a freeform string field to the study model, editable by researchers in the study edit view, representing the current state of the study.
- Eventually (not as part of this ticket) parents might also be able to (going forward) view any study debriefing text from "Past Studies," since this may contain individual results or condition info. Having set up an "exit survey" frame type to support withdrawal of video will make this easier, in that we'd be able to see if there's a frame of type exit-survey in the expData for this response, and if so, fetch its debriefing text.
- Eventually (not as part of this ticket) parents might also be able to view/download the consent form text. This is now stored as part of the consent form in expData. | reli | allow families to see their videos add some additional details to past studies pain point once parents participate in a study there is no immediate confirmation that it worked and lookit has their data or any later confirmation that it has been used to do cool science they also do not have automatic access to their own data although they often want to see their videos instead a researcher has to provide it if desired which is labor intensive and introduces unnecessary possibilities for human error e g sending the wrong child s video acceptance criteria parents can download any video clips from a session from the past studies view when logged in nice but not required if they can actually just see the clips embedded in the browser parents can see some basic information about past study sessions including which child participated status of consent coding yes no pending no video research group pi contact parents should not see sessions with completedconsentframe false displayed parents can see the status of a study that they participated in as provided by the researchers e g data collection analysis writing up results here s a link to the paper media article etc implementation notes suggestions we ve discussed adding a freeform string field to the study model editable by researchers in the study edit view representing the current state of the study eventually not as part of this ticket parents might also be able to going forward view any study debriefing text from past studies since this may contain individual results or condition info having set up an exit survey frame type to support withdrawal of video will make this easier in that we d be able to see if there s a frame of type exit survey in the expdata for this response and if so fetch its debriefing text eventually not as part of this 
ticket parents might also be able to view download the consent form text this is now stored as part of the consent form in expdata | 1 |
15,655 | 5,164,736,280 | IssuesEvent | 2017-01-17 11:27:36 | punker76/gong-wpf-dragdrop | https://api.github.com/repos/punker76/gong-wpf-dragdrop | closed | Selecting an item and the mouse quickly changes selection before drag is started | bug imported from google code | _Original author: gro...@gmail.com (November 07, 2009 19:14:39)_
When clicking an item and moving the mouse quickly, rather than a drag on the
item that was clicked on being dragged, sometimes the selection changes
before the drag is started. This is particularly noticeable if clicking the
item causes the UI to lock up for a short time.
_Original issue: http://code.google.com/p/gong-wpf-dragdrop/issues/detail?id=1_
| 1.0 | Selecting an item and the mouse quickly changes selection before drag is started - _Original author: gro...@gmail.com (November 07, 2009 19:14:39)_
When clicking an item and moving the mouse quickly, rather than a drag on the
item that was clicked on being dragged, sometimes the selection changes
before the drag is started. This is particularly noticeable if clicking the
item causes the UI to lock up for a short time.
_Original issue: http://code.google.com/p/gong-wpf-dragdrop/issues/detail?id=1_
| non_reli | selecting an item and the mouse quickly changes selection before drag is started original author gro gmail com november when clicking an item and moving the mouse quickly rather than a drag on the item that was clicked on being dragged sometimes the selection changes before the drag is started this is particularly noticeable if clicking the item causes the ui to lock up for a short time original issue | 0 |
566,106 | 16,796,141,777 | IssuesEvent | 2021-06-16 04:02:02 | woocommerce/google-listings-and-ads | https://api.github.com/repos/woocommerce/google-listings-and-ads | opened | Global Offers | priority: high type: enhancement type: epic | Currently GLA [syncs all product all target countries ](https://github.com/woocommerce/google-listings-and-ads/pull/319)
This approach works but creates a lot of overhead/resource consumption such as API requests, product counts & a real risk we'll keep hitting our quotas quickly (a small number of merchants can use up the quota if they have a large number of products and select to target all countries).
The alternative approach is to move to "Global Offers" - Global Offers makes it easy to list products in multiple countries without needing to upload the products to each country (this was beta when initially discussing the project and there was some disconnect/confusion about using this as the go-to/default approach when working on https://github.com/woocommerce/google-listings-and-ads/pull/399).
The change we need to make is essentially
* We no longer need to submit products for each country
* We only need to submit the list of shipping countries (an enhancement to https://github.com/woocommerce/google-listings-and-ads/pull/399)
* The product will be displayed in all of the Shipping countries regardless of the target country that it is submitted to
At face value - it seems simple enough - but I know we are doing a lot of juggling under the hood.
Some of the questions that came to mind when discussing internally for reference
> Will the target country still be relevant after these changes? I mean what is the difference of a product submitted in all countries vs. a product that’s submitted in one country but SHIPS to all countries?
Based on discussions this morning my understanding is we can actually think of the "target country" more as the country of sale with shipping to multiple countries.
In the UI target country will be mapped to "Country of Sale" then in the Program and Status columns will see multiple rows for each shipping country.
<img width="1242" alt="Markup 2021-06-16 at 13 04 00" src="https://user-images.githubusercontent.com/355014/122154142-74d46600-cea3-11eb-98ab-b3846de0a5df.png">
> I assume that after this change we will only submit a product once for the shop’s current country (set in Woo settings) and then set the shipping based on the target country settings. This will mean that if their API doesn’t change, we will have only one ID and one synced product to deal with.
Correct, the Google team confirmed it would make sense to use the store location for the "target country" product attribute then set shipping based all the "target countries" the merchant selects during onboarding.
> We could store the target countries as a separate meta and assume that they all have the same ID.
> Or we can just store the same ID for each target country and continue using the same structure that we have now.
> I would go with the first method to separate the concepts of target countries and shipping countries.
At the moment we have `_wc_gla_google_ids` meta which is a serialized array of Ids e.g. for a single target country it looks like `a:1:{s:2:"AU";s:19:"online:en:AU:gla_85";}`
"We could store the target countries as a separate meta" - might not even need an additional meta as we'll already have `gla_target_audience` option the store country in options as well.
Un-educated thought - we could probably move to something simple like `_wc_gla_google_id` (singular) and `online:en:AU:gla_85` - so yes same/single ID for the product being uploaded (we won't have multiple IDs to track anymore).
**Notes**
* variations are still handled individually - that hasn't changed.
* I am assuming there are going to flow on changes for the product feed summary, issues, and table as a result of this - but this might help reduce some of the performance impacts cc @layoutd
* impact on existing offers - we might need to look at running a migration job to clean up products that have been pushed up - but I'll wait to hear the teams thoughts.
**Reference pull requests**
* [Sync products for all target countries](https://github.com/woocommerce/google-listings-and-ads/pull/319)
* [Set Product Shipping Information Based on Target Country](https://github.com/woocommerce/google-listings-and-ads/pull/399) | 1.0 | Global Offers - Currently GLA [syncs all product all target countries ](https://github.com/woocommerce/google-listings-and-ads/pull/319)
This approach works but creates a lot of overhead/resource consumption such as API requests, product counts & a real risk we'll keep hitting our quotas quickly (a small number of merchants can use up the quota if they have a large number of products and select to target all countries).
The alternative approach is to move to "Global Offers" - Global Offers makes it easy to list products in multiple countries without needing to upload the products to each country (this was beta when initially discussing the project and there was some disconnect/confusion about using this as the go-to/default approach when working on https://github.com/woocommerce/google-listings-and-ads/pull/399).
The change we need to make is essentially
* We no longer need to submit products for each country
* We only need to submit the list of shipping countries (an enhancement to https://github.com/woocommerce/google-listings-and-ads/pull/399)
* The product will be displayed in all of the Shipping countries regardless of the target country that it is submitted to
At face value - it seems simple enough - but I know we are doing are a lot of juggling under the hood.
Some of the questions that came to mind when discussing internally for reference
> Will the target country still be relevant after these changes? I mean what is the difference of a product submitted in all countries vs. a product that’s submitted in one country but SHIPS to all countries?
Based on discussions this morning my understanding is we can actually think of the "target country" more as the country of sale with shipping to multiple countries.
In the UI target country will be mapped to "Country of Sale" then in the Program and Status columns will see multiple rows for each shipping country.
<img width="1242" alt="Markup 2021-06-16 at 13 04 00" src="https://user-images.githubusercontent.com/355014/122154142-74d46600-cea3-11eb-98ab-b3846de0a5df.png">
> I assume that after this change we will only submit a product once for the shop’s current country (set in Woo settings) and then set the shipping based on the target country settings. This will mean that if their API doesn’t change, we will have only one ID and one synced product to deal with.
Correct, the Google team confirmed it would make sense to use the store location for the "target country" product attribute then set shipping based all the "target countries" the merchant selects during onboarding.
> We could store the target countries as a separate meta and assume that they all have the same ID.
> Or we can just store the same ID for each target country and continue using the same structure that we have now.
> I would go with the first method to separate the concepts of target countries and shipping countries.
At the moment we have `_wc_gla_google_ids` meta which is a serialized array of Ids e.g. for a single target country it looks like `a:1:{s:2:"AU";s:19:"online:en:AU:gla_85";}`
"We could store the target countries as a separate meta" - might not even need an additional meta as we'll already have `gla_target_audience` option the store country in options as well.
Un-educated thought - we could probably move to something simple like `_wc_gla_google_id` (singular) and `online:en:AU:gla_85` - so yes same/single ID for the product being uploaded (we won't have multiple IDs to track anymore).
**Notes**
* variations are still handled individually - that hasn't changed.
* I am assuming there are going to flow on changes for the product feed summary, issues, and table as a result of this - but this might help reduce some of the performance impacts cc @layoutd
* impact on existing offers - we might need to look at running a migration job to clean up products that have been pushed up - but I'll wait to hear the teams thoughts.
**Reference pull requests**
* [Sync products for all target countries](https://github.com/woocommerce/google-listings-and-ads/pull/319)
* [Set Product Shipping Information Based on Target Country](https://github.com/woocommerce/google-listings-and-ads/pull/399) | non_reli | global offers currently gla this approach works but creates a lot of overhead resource consumption such as api requests product counts a real risk we ll keep hitting our quotas quickly a small number of merchants can use up the quota if they have a large number of products and select to target all countries the alternative approach is to move to global offers global offers makes it easy to list products in multiple countries without needing to upload the products to each country this was beta when initially discussing the project and there was some disconnect confusion about using this as the go to default approach when working on the change we need to make is essentially we no longer need to submit products for each country we only need to submit the list of shipping countries an enhancement to the product will be displayed in all of the shipping countries regardless of the target country that it is submitted to at face value it seems simple enough but i know we are doing are a lot of juggling under the hood some of the questions that came to mind when discussing internally for reference will the target country still be relevant after these changes i mean what is the difference of a product submitted in all countries vs a product that’s submitted in one country but ships to all countries based on discussions this morning my understanding is we can actually think of the target country more as the country of sale with shipping to multiple countries in the ui target country will be mapped to country of sale then in the program and status columns will see multiple rows for each shipping country img width alt markup at src i assume that after this change we will only submit a product once for the shop’s current country set in woo settings and then set the shipping based on the target country settings this will mean that if their api 
doesn’t change we will have only one id and one synced product to deal with correct the google team confirmed it would make sense to use the store location for the target country product attribute then set shipping based all the target countries the merchant selects during onboarding we could store the target countries as a separate meta and assume that they all have the same id or we can just store the same id for each target country and continue using the same structure that we have now i would go with the first method to separate the concepts of target countries and shipping countries at the moment we have wc gla google ids meta which is a serialized array of ids e g for a single target country it looks like a s au s online en au gla we could store the target countries as a separate meta might not even need an additional meta as we ll already have gla target audience option the store country in options as well un educated thought we could probably move to something simple like wc gla google id singular and online en au gla so yes same single id for the product being uploaded we won t have multiple ids to track anymore notes variations are still handled individually that hasn t changed i am assuming there are going to flow on changes for the product feed summary issues and table as a result of this but this might help reduce some of the performance impacts cc layoutd impact on existing offers we might need to look at running a migration job to clean up products that have been pushed up but i ll wait to hear the teams thoughts reference pull requests | 0 |
66,169 | 16,552,721,908 | IssuesEvent | 2021-05-28 10:25:18 | apache/shardingsphere | https://api.github.com/repos/apache/shardingsphere | closed | Use different ports between test cases to support Maven parallel execution | good first issue in: test type: build | ## Feature Request
### Is your feature request related to a problem?
When executing the Maven command in parallel, test cases fail.
```bash
mvn clean install -T1C
```
```
[ERROR] 2021-05-26 19:39:11.162 [Thread-1] o.a.c.test.TestingZooKeeperServer - From testing server (random state: false) for instance: InstanceSpec{dataDirectory=target/test_zk_data/121959016302313, port=3181, electionPort=64198, quorumPort=64199, deleteDataDirectoryOnClose=true, serverId=1, tickTime=-1, maxClientCnxns=-1, customProperties={}, hostname=127.0.0.1} org.apache.curator.test.InstanceSpec@59c3c1ea
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:438)
at sun.nio.ch.Net.bind(Net.java:430)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:225)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:676)
at org.apache.zookeeper.server.ServerCnxnFactory.configure(ServerCnxnFactory.java:109)
at org.apache.zookeeper.server.ServerCnxnFactory.configure(ServerCnxnFactory.java:105)
at org.apache.curator.test.TestingZooKeeperMain.internalRunFromConfig(TestingZooKeeperMain.java:248)
at org.apache.curator.test.TestingZooKeeperMain.runFromConfig(TestingZooKeeperMain.java:132)
at org.apache.curator.test.TestingZooKeeperServer$1.run(TestingZooKeeperServer.java:158)
at java.lang.Thread.run(Thread.java:748)
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.5.22.RELEASE)
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.5.22.RELEASE)
line 1:7 no viable alternative at input 'CREATESHARDING'
line 1:7 no viable alternative at input 'CREATESHARDING'
line 1:15 no viable alternative at input 'CREATESHARDING'
line 1:15 no viable alternative at input 'CREATESHARDING'
line 1:6 no viable alternative at input 'SELECT'
line 1:6 no viable alternative at input 'SELECT'
line 1:0 no viable alternative at input 'SELECT'
line 1:0 no viable alternative at input 'SELECT'
19:39:53.157 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutorCallback
19:39:53.236 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.infra.parser.ShardingSphereSQLParserEngine
19:39:53.302 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask
19:39:53.284 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutorCallback
19:39:53.360 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.infra.parser.ShardingSphereSQLParserEngine
19:39:53.398 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask
19:39:53.814 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutorCallback
19:39:53.814 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.infra.parser.ShardingSphereSQLParserEngine
19:39:53.814 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask
19:39:53.940 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutorCallback
19:39:53.940 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.infra.parser.ShardingSphereSQLParserEngine
19:39:53.940 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask
19:39:56.288 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutorCallback
19:39:56.289 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.infra.parser.ShardingSphereSQLParserEngine
19:39:56.289 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask
19:39:56.294 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.infra.executor.sql.execute.engine.driver.jdbc.JDBCExecutorCallback
19:39:56.294 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.infra.parser.ShardingSphereSQLParserEngine
19:39:56.294 [main] INFO org.apache.shardingsphere.agent.plugin.tracing.AgentRunner - It is successful to enhance the class org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask
May 26, 2021 7:39:58 PM zipkin2.reporter.AsyncReporter$BoundedAsyncReporter close
WARNING: Timed out waiting for in-flight spans to send
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.147 s <<< FAILURE! - in org.apache.shardingsphere.spring.namespace.governance.GovernanceEncryptNamespaceTest
[ERROR] org.apache.shardingsphere.spring.namespace.governance.GovernanceEncryptNamespaceTest Time elapsed: 51.147 s <<< ERROR!
java.lang.RuntimeException: org.apache.curator.test.FailedServerStartException: Timed out waiting for server startup
at org.apache.shardingsphere.spring.namespace.governance.GovernanceEncryptNamespaceTest.init(GovernanceEncryptNamespaceTest.java:48)
Caused by: org.apache.curator.test.FailedServerStartException: Timed out waiting for server startup
at org.apache.shardingsphere.spring.namespace.governance.GovernanceEncryptNamespaceTest.init(GovernanceEncryptNamespaceTest.java:48)
[ERROR] Errors:
[ERROR] GovernanceEncryptNamespaceTest.init:48 » Runtime org.apache.curator.test.Faile...
[ERROR] Tests run: 18, Failures: 0, Errors: 1, Skipped: 1
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.22.0:test (default-test) on project shardingsphere-jdbc-governance-spring-namespace: There are test failures.
[ERROR]
[ERROR] Please refer to /Users/wuweijie/IdeaProjects/shardingsphere/shardingsphere-jdbc/shardingsphere-jdbc-spring/shardingsphere-jdbc-governance-spring/shardingsphere-jdbc-governance-spring-namespace/target/surefire-reports for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date]-jvmRun[N].dump, [date].dumpstream and [date]-jvmRun[N].dumpstream.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <args> -rf :shardingsphere-jdbc-governance-spring-namespace
Process finished with exit code 1
```
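The root cause in the trace above - `java.net.BindException: Address already in use` - is two embedded ZooKeeper servers racing for the same fixed port (3181) when modules build in parallel. The clash is easy to reproduce with plain JDK sockets, independent of ZooKeeper (a minimal sketch; the class name is illustrative):

```java
import java.net.BindException;
import java.net.ServerSocket;

public class BindClash {
    public static void main(String[] args) throws Exception {
        // First server grabs an OS-assigned port and keeps it bound...
        try (ServerSocket first = new ServerSocket(0)) {
            int port = first.getLocalPort();
            // ...so a second bind to the very same port fails, just like the
            // second TestingZooKeeperServer does in the parallel build.
            try (ServerSocket second = new ServerSocket(port)) {
                System.out.println("unexpected: second bind succeeded");
            } catch (BindException expected) {
                System.out.println("Address already in use on port " + port);
            }
        }
    }
}
```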
### Describe the feature you would like.
Use different ports in different test cases when using the embedded ZooKeeper server.
### How?
Make sure ports are different:
* org.apache.shardingsphere.scaling.core.fixture.EmbedTestingServer
* org.apache.shardingsphere.spring.namespace.governance.util.EmbedTestingServer
* org.apache.shardingsphere.spring.boot.governance.util.EmbedTestingServer
1,835 | 20,257,477,710 | IssuesEvent | 2022-02-15 01:39:23 | Azure/azure-sdk-for-java | https://api.github.com/repos/Azure/azure-sdk-for-java | closed | [BUG] OutOfMemoryError only get events from Azure EventHub | question Event Hubs Client customer-reported pillar-reliability needs-author-feedback no-recent-activity | My application consumes memory up to the limit, even just getting the events from the hub and taking no action.
java version: 16
azure-messaging-eventhubs:5.10.1
azure-messaging-eventhubs-checkpointstore-blob:1.10.0
I have two consumers as per the following code:


The image below shows the memory usage after the program has been running for a few minutes:

| True | [BUG] OutOfMemoryError only get events from Azure EventHub - My application consumes memory up to the limit, even just getting the events from the hub and taking no action.
java version: 16
azure-messaging-eventhubs:5.10.1
azure-messaging-eventhubs-checkpointstore-blob:1.10.0
I have two consumers as per the following code:


The image below shows the memory usage after the program has been running for a few minutes:

| reli | outofmemoryerror only get events from azure eventhub my application consumes memory up to the limit even just getting the events from the hub and taking no action java version azure messaging eventhubs azure messaging eventhubs checkpointstore blob i have two consumers as per the following code the image below is the use of memory after some minutes running the program | 1 |
231,098 | 7,623,669,061 | IssuesEvent | 2018-05-03 15:37:15 | quipucords/quipucords | https://api.github.com/repos/quipucords/quipucords | closed | Investigate Process.terminate issue in docker container | bug priority - high | ## Specify type:
- Bug
### Priority:
- High
___
## Description:
This is a possible bug. QE logs have shown instances where the pause/cancel task doesn't stop processing even though the Python Process.terminate() method has been called. This issue is to investigate whether pause/cancel behave differently when run in a Docker container.
___
## Acceptance Criteria:
- [ ] Verify that pause/cancel work in a docker container
| 1.0 | Investigate Process.terminate issue in docker container - ## Specify type:
- Bug
### Priority:
- High
___
## Description:
This is a possible bug. QE logs have shown instances where the pause/cancel task doesn't stop processing even though the Python Process.terminate() method has been called. This issue is to investigate whether pause/cancel behave differently when run in a Docker container.
___
## Acceptance Criteria:
- [ ] Verify that pause/cancel work in a docker container
| non_reli | investigate process terminate issue in docker container specify type bug priority high description this is a possible bug qe logs have shown instances where the pause cancel task doesn t stop processing even though the python process terminate method has been called this issue is to investigate whether the pause cancel work differently when run in a docker container acceptance criteria verify that pause cancel work in a docker container | 0 |
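The terminate/join interaction this issue describes can be reproduced with a minimal sketch (plain Python `multiprocessing` run outside any container; the names here are illustrative, not quipucords code):

```python
import multiprocessing
import time


def _worker():
    # Stand-in for a long-running scan task that never exits on its own.
    while True:
        time.sleep(0.1)


def terminate_worker(timeout=5.0):
    """Start a worker, terminate it, and report whether it survived."""
    proc = multiprocessing.Process(target=_worker)
    proc.start()
    proc.terminate()      # sends SIGTERM to the child on POSIX
    proc.join(timeout)    # reap the child; skipping join() can leave a zombie
    return proc.is_alive()


if __name__ == "__main__":
    print(terminate_worker())  # expected: False when SIGTERM is delivered normally
```

One hypothesis worth checking during the investigation: inside a container the process may run as PID 1, and PID 1 does not get the kernel's default signal dispositions, so a SIGTERM that kills the child on a bare host can be silently ignored in Docker.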
210 | 5,378,548,295 | IssuesEvent | 2017-02-23 15:13:29 | LeastAuthority/leastauthority.com | https://api.github.com/repos/LeastAuthority/leastauthority.com | closed | implement DKIM, to further reduce perceived spamminess of email we send | back burner enhancement reliability signup | Implementing DKIM requires signing emails (without any of the end-to-end value of signing) and is generally much more involved than SPF (issue #126), so I propose to punt on it unless and until we have evidence it's needed.
| True | implement DKIM, to further reduce perceived spamminess of email we send - Implementing DKIM requires signing emails (without any of the end-to-end value of signing) and is generally much more involved than SPF (issue #126), so I propose to punt on it unless and until we have evidence it's needed.
| reli | implement dkim to further reduce perceived spamminess of email we send implementing dkim requires signing emails without any of the end to end value of signing and is generally much more involved than spf issue so i propose to punt on it unless and until we have evidence it s needed | 1 |
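For context, publishing DKIM means adding a public-key TXT record under a selector in DNS, alongside the SPF record from #126. A hypothetical zone-file entry (selector name and key material are placeholders, not real values):

```
; hypothetical DKIM selector record -- selector and key are placeholders
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."
```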
224,429 | 24,773,349,343 | IssuesEvent | 2022-10-23 12:28:58 | sast-automation-dev/easybuggy4django-41 | https://api.github.com/repos/sast-automation-dev/easybuggy4django-41 | opened | Pillow-5.1.0.tar.gz: 21 vulnerabilities (highest severity is: 9.8) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (Pillow version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-25289](https://www.mend.io/vulnerability-database/CVE-2021-25289) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | Pillow-5.1.0.tar.gz | Direct | 8.1.1 | ✅ |
| [CVE-2020-5311](https://www.mend.io/vulnerability-database/CVE-2020-5311) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | Pillow-5.1.0.tar.gz | Direct | 6.2.2 | ✅ |
| [CVE-2020-5312](https://www.mend.io/vulnerability-database/CVE-2020-5312) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | Pillow-5.1.0.tar.gz | Direct | 6.2.2 | ✅ |
| [CVE-2020-5310](https://www.mend.io/vulnerability-database/CVE-2020-5310) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.8 | Pillow-5.1.0.tar.gz | Direct | 6.2.2 | ✅ |
| [CVE-2020-11538](https://www.mend.io/vulnerability-database/CVE-2020-11538) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.1 | Pillow-5.1.0.tar.gz | Direct | 7.1.0 | ✅ |
| [CVE-2020-10379](https://www.mend.io/vulnerability-database/CVE-2020-10379) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.8 | Pillow-5.1.0.tar.gz | Direct | 7.1.0 | ✅ |
| [CVE-2019-19911](https://www.mend.io/vulnerability-database/CVE-2019-19911) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 6.2.2 | ✅ |
| [CVE-2021-27923](https://www.mend.io/vulnerability-database/CVE-2021-27923) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.2 | ✅ |
| [CVE-2019-16865](https://www.mend.io/vulnerability-database/CVE-2019-16865) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 6.2.0 | ✅ |
| [CVE-2021-25290](https://www.mend.io/vulnerability-database/CVE-2021-25290) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.1 | ✅ |
| [CVE-2021-25291](https://www.mend.io/vulnerability-database/CVE-2021-25291) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.1 | ✅ |
| [CVE-2021-25293](https://www.mend.io/vulnerability-database/CVE-2021-25293) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.1 | ✅ |
| [CVE-2021-27921](https://www.mend.io/vulnerability-database/CVE-2021-27921) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.2 | ✅ |
| [CVE-2021-27922](https://www.mend.io/vulnerability-database/CVE-2021-27922) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.2 | ✅ |
| [CVE-2020-35653](https://www.mend.io/vulnerability-database/CVE-2020-35653) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.1 | Pillow-5.1.0.tar.gz | Direct | 8.1.0 | ✅ |
| [CVE-2020-5313](https://www.mend.io/vulnerability-database/CVE-2020-5313) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.1 | Pillow-5.1.0.tar.gz | Direct | 6.2.2 | ✅ |
| [CVE-2021-25292](https://www.mend.io/vulnerability-database/CVE-2021-25292) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.1 | ✅ |
| [CVE-2020-10994](https://www.mend.io/vulnerability-database/CVE-2020-10994) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | Pillow-5.1.0.tar.gz | Direct | 7.1.0 | ✅ |
| [CVE-2020-10378](https://www.mend.io/vulnerability-database/CVE-2020-10378) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | Pillow-5.1.0.tar.gz | Direct | 7.1.0 | ✅ |
| [CVE-2020-10177](https://www.mend.io/vulnerability-database/CVE-2020-10177) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | Pillow-5.1.0.tar.gz | Direct | 7.1.0 | ✅ |
| [CVE-2020-35655](https://www.mend.io/vulnerability-database/CVE-2020-35655) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.4 | Pillow-5.1.0.tar.gz | Direct | 8.1.0 | ✅ |
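All of the fixes in the table above converge on Pillow 8.1.2 or later. A quick way to check an installed version against that floor is a plain tuple comparison (the `8.1.2` floor is taken from the table; the helper names are ours, and this naive scheme does not handle pre-release suffixes):

```python
def _as_tuple(version):
    """Turn a dotted version string like '8.1.2' into a comparable int tuple."""
    return tuple(int(part) for part in version.split("."))


def is_patched(installed, fixed="8.1.2"):
    """True when the installed Pillow version includes every fix listed above."""
    return _as_tuple(installed) >= _as_tuple(fixed)


print(is_patched("5.1.0"))  # the pinned version in this repo -> False
print(is_patched("8.1.2"))  # -> True
```

Equivalently, pinning `Pillow>=8.1.2` in the requirements file remediates every CVE in the table at once.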
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-25289</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in Pillow before 8.1.1. TiffDecode has a heap-based buffer overflow when decoding crafted YCbCr files because of certain interpretation conflicts with LibTIFF in RGBA mode. NOTE: this issue exists because of an incomplete fix for CVE-2020-35654.
<p>Publish Date: Mar 19, 2021 4:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-25289>CVE-2021-25289</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: Mar 19, 2021 4:15:00 AM</p>
<p>Fix Resolution: 8.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-5311</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
libImaging/SgiRleDecode.c in Pillow before 6.2.2 has an SGI buffer overflow.
<p>Publish Date: Jan 3, 2020 1:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-5311>CVE-2020-5311</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5311">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5311</a></p>
<p>Release Date: Jul 10, 2020 5:06:00 PM</p>
<p>Fix Resolution: 6.2.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-5312</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
libImaging/PcxDecode.c in Pillow before 6.2.2 has a PCX P mode buffer overflow.
<p>Publish Date: Jan 3, 2020 1:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-5312>CVE-2020-5312</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5312">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5312</a></p>
<p>Release Date: Jul 10, 2020 5:09:00 PM</p>
<p>Fix Resolution: 6.2.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-5310</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
libImaging/TiffDecode.c in Pillow before 6.2.2 has a TIFF decoding integer overflow, related to realloc.
<p>Publish Date: Jan 3, 2020 1:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-5310>CVE-2020-5310</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5310">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5310</a></p>
<p>Release Date: Jan 31, 2020 4:15:00 AM</p>
<p>Fix Resolution: 6.2.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-11538</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In libImaging/SgiRleDecode.c in Pillow through 7.0.0, a number of out-of-bounds reads exist in the parsing of SGI image files, a different issue than CVE-2020-5311.
<p>Publish Date: Jun 25, 2020 7:15:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11538>CVE-2020-11538</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: Jul 27, 2020 7:15:00 PM</p>
<p>Fix Resolution: 7.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-10379</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Pillow before 7.1.0, there are two Buffer Overflows in libImaging/TiffDecode.c.
<p>Publish Date: Jun 25, 2020 7:15:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10379>CVE-2020-10379</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: Jul 27, 2020 7:15:00 PM</p>
<p>Fix Resolution: 7.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-19911</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
There is a DoS vulnerability in Pillow before 6.2.2 caused by FpxImagePlugin.py calling the range function on an unvalidated 32-bit integer if the number of bands is large. On Windows running 32-bit Python, this results in an OverflowError or MemoryError due to the 2 GB limit. However, on Linux running 64-bit Python this results in the process being terminated by the OOM killer.
<p>Publish Date: Jan 5, 2020 10:15:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19911>CVE-2019-19911</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: Aug 24, 2020 5:37:00 PM</p>
<p>Fix Resolution: 6.2.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-27923</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Pillow before 8.1.1 allows attackers to cause a denial of service (memory consumption) because the reported size of a contained image is not properly checked for an ICO container, and thus an attempted memory allocation can be very large.
<p>Publish Date: Mar 3, 2021 9:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-27923>CVE-2021-27923</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.2.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.2.html</a></p>
<p>Release Date: Mar 3, 2021 9:15:00 AM</p>
<p>Fix Resolution: 8.1.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-16865</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in Pillow before 6.2.0. When reading specially crafted invalid image files, the library can either allocate very large amounts of memory or take an extremely long period of time to process the image.
<p>Publish Date: Oct 4, 2019 10:15:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-16865>CVE-2019-16865</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16865">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16865</a></p>
<p>Release Date: Feb 18, 2020 4:15:00 PM</p>
<p>Fix Resolution: 6.2.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-25290</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in Pillow before 8.1.1. In TiffDecode.c, there is a negative-offset memcpy with an invalid size.
<p>Publish Date: Mar 19, 2021 4:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-25290>CVE-2021-25290</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: Mar 19, 2021 4:15:00 AM</p>
<p>Fix Resolution: 8.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-25291</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in Pillow before 8.1.1. In TiffDecode.c, there is an out-of-bounds read in TiffreadRGBATile via invalid tile boundaries.
<p>Publish Date: Mar 19, 2021 4:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-25291>CVE-2021-25291</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: Mar 19, 2021 4:15:00 AM</p>
<p>Fix Resolution: 8.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-25293</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in Pillow before 8.1.1. There is an out-of-bounds read in SGIRleDecode.c.
<p>Publish Date: Mar 19, 2021 4:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-25293>CVE-2021-25293</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: Mar 19, 2021 4:15:00 AM</p>
<p>Fix Resolution: 8.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-27921</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Pillow before 8.1.1 allows attackers to cause a denial of service (memory consumption) because the reported size of a contained image is not properly checked for a BLP container, and thus an attempted memory allocation can be very large.
WhiteSource Note: After conducting further research, WhiteSource has determined that all versions of Pillow up to version 8.1.1 are vulnerable to CVE-2021-27921.
<p>Publish Date: Mar 3, 2021 9:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-27921>CVE-2021-27921</a></p>
</p>
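The flaw class this CVE describes (trusting a size field read from the file when sizing a decode buffer) can be illustrated with a self-contained toy parser. The header layout, field name, and 64 MiB ceiling below are illustrative assumptions, not Pillow's actual BLP format or fix code:

```python
import struct

MAX_DECODED_BYTES = 64 * 1024 * 1024  # illustrative sanity ceiling (64 MiB)

def read_declared_size(header: bytes) -> int:
    """Parse a little-endian uint32 'decoded size' field from a toy header.

    The vulnerability class: an allocator that trusts this declared value
    can be driven to attempt an arbitrarily large allocation. Bounding the
    value before allocating is, per the CVE description, the kind of check
    the fixed release adds for BLP containers.
    """
    (declared,) = struct.unpack("<I", header[:4])
    if declared > MAX_DECODED_BYTES:
        raise ValueError(f"declared size {declared} exceeds sanity limit")
    return declared

# A sane header passes; a header claiming ~4 GiB is rejected up front.
read_declared_size(struct.pack("<I", 1024))  # returns 1024
```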
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.2.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.2.html</a></p>
<p>Release Date: Mar 3, 2021 9:15:00 AM</p>
<p>Fix Resolution: 8.1.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-27922</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Pillow before 8.1.1 allows attackers to cause a denial of service (memory consumption) because the reported size of a contained image is not properly checked for an ICNS container, and thus an attempted memory allocation can be very large.
<p>Publish Date: Mar 3, 2021 9:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-27922>CVE-2021-27922</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.2.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.2.html</a></p>
<p>Release Date: Mar 3, 2021 9:15:00 AM</p>
<p>Fix Resolution: 8.1.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-35653</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Pillow before 8.1.0, PcxDecode has a buffer over-read when decoding a crafted PCX file because the user-supplied stride value is trusted for buffer calculations.
<p>Publish Date: Jan 12, 2021 9:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-35653>CVE-2020-35653</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35653">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35653</a></p>
<p>Release Date: Jan 12, 2021 9:15:00 AM</p>
<p>Fix Resolution: 8.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-5313</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
libImaging/FliDecode.c in Pillow before 6.2.2 has an FLI buffer overflow.
<p>Publish Date: Jan 3, 2020 1:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-5313>CVE-2020-5313</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5313">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5313</a></p>
<p>Release Date: Feb 18, 2020 4:15:00 PM</p>
<p>Fix Resolution: 6.2.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-25292</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in Pillow before 8.1.1. The PDF parser allows a regular expression DoS (ReDoS) attack via a crafted PDF file because of a catastrophic backtracking regex.
<p>Publish Date: Mar 19, 2021 4:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-25292>CVE-2021-25292</a></p>
</p>
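Catastrophic backtracking, the mechanism named in this CVE, is easy to reproduce with a toy pattern. The regex below is a classic pathological example, not Pillow's actual PDF-parsing expression:

```python
import re

# Nested quantifiers let the engine try roughly 2^n ways to split the run
# of 'a' characters before concluding no match exists, so runtime doubles
# with each extra 'a' in the input.
evil = re.compile(r"^(a+)+$")

def probe(n: int):
    # The trailing 'b' guarantees failure, forcing the full backtracking search.
    return evil.match("a" * n + "b")

print(probe(18))  # None, but only after an exponential search
```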
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: Mar 19, 2021 4:15:00 AM</p>
<p>Fix Resolution: 8.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-10994</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In libImaging/Jpeg2KDecode.c in Pillow before 7.1.0, there are multiple out-of-bounds reads via a crafted JP2 file.
<p>Publish Date: Jun 25, 2020 7:15:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10994>CVE-2020-10994</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: Jul 27, 2020 7:15:00 PM</p>
<p>Fix Resolution: 7.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-10378</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In libImaging/PcxDecode.c in Pillow before 7.1.0, an out-of-bounds read can occur when reading PCX files where state->shuffle is instructed to read beyond state->buffer.
<p>Publish Date: Jun 25, 2020 7:15:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10378>CVE-2020-10378</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: Jul 27, 2020 7:15:00 PM</p>
<p>Fix Resolution: 7.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-10177</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Pillow before 7.1.0 has multiple out-of-bounds reads in libImaging/FliDecode.c.
<p>Publish Date: Jun 25, 2020 7:15:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10177>CVE-2020-10177</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: Aug 8, 2020 8:15:00 PM</p>
<p>Fix Resolution: 7.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-35655</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Pillow before 8.1.0, SGIRleDecode has a 4-byte buffer over-read when decoding crafted SGI RLE image files because offsets and length tables are mishandled.
<p>Publish Date: Jan 12, 2021 9:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-35655>CVE-2020-35655</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.4</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35655">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35655</a></p>
<p>Release Date: Jan 12, 2021 9:15:00 AM</p>
<p>Fix Resolution: 8.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>

# Pillow-5.1.0.tar.gz: 21 vulnerabilities (highest severity is: 9.8)

<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (Pillow version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-25289](https://www.mend.io/vulnerability-database/CVE-2021-25289) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | Pillow-5.1.0.tar.gz | Direct | 8.1.1 | ✅ |
| [CVE-2020-5311](https://www.mend.io/vulnerability-database/CVE-2020-5311) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | Pillow-5.1.0.tar.gz | Direct | 6.2.2 | ✅ |
| [CVE-2020-5312](https://www.mend.io/vulnerability-database/CVE-2020-5312) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | Pillow-5.1.0.tar.gz | Direct | 6.2.2 | ✅ |
| [CVE-2020-5310](https://www.mend.io/vulnerability-database/CVE-2020-5310) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.8 | Pillow-5.1.0.tar.gz | Direct | 6.2.2 | ✅ |
| [CVE-2020-11538](https://www.mend.io/vulnerability-database/CVE-2020-11538) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.1 | Pillow-5.1.0.tar.gz | Direct | 7.1.0 | ✅ |
| [CVE-2020-10379](https://www.mend.io/vulnerability-database/CVE-2020-10379) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.8 | Pillow-5.1.0.tar.gz | Direct | 7.1.0 | ✅ |
| [CVE-2019-19911](https://www.mend.io/vulnerability-database/CVE-2019-19911) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 6.2.2 | ✅ |
| [CVE-2021-27923](https://www.mend.io/vulnerability-database/CVE-2021-27923) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.2 | ✅ |
| [CVE-2019-16865](https://www.mend.io/vulnerability-database/CVE-2019-16865) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 6.2.0 | ✅ |
| [CVE-2021-25290](https://www.mend.io/vulnerability-database/CVE-2021-25290) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.1 | ✅ |
| [CVE-2021-25291](https://www.mend.io/vulnerability-database/CVE-2021-25291) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.1 | ✅ |
| [CVE-2021-25293](https://www.mend.io/vulnerability-database/CVE-2021-25293) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.1 | ✅ |
| [CVE-2021-27921](https://www.mend.io/vulnerability-database/CVE-2021-27921) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.2 | ✅ |
| [CVE-2021-27922](https://www.mend.io/vulnerability-database/CVE-2021-27922) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.2 | ✅ |
| [CVE-2020-35653](https://www.mend.io/vulnerability-database/CVE-2020-35653) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.1 | Pillow-5.1.0.tar.gz | Direct | 8.1.0 | ✅ |
| [CVE-2020-5313](https://www.mend.io/vulnerability-database/CVE-2020-5313) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.1 | Pillow-5.1.0.tar.gz | Direct | 6.2.2 | ✅ |
| [CVE-2021-25292](https://www.mend.io/vulnerability-database/CVE-2021-25292) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | Pillow-5.1.0.tar.gz | Direct | 8.1.1 | ✅ |
| [CVE-2020-10994](https://www.mend.io/vulnerability-database/CVE-2020-10994) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | Pillow-5.1.0.tar.gz | Direct | 7.1.0 | ✅ |
| [CVE-2020-10378](https://www.mend.io/vulnerability-database/CVE-2020-10378) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | Pillow-5.1.0.tar.gz | Direct | 7.1.0 | ✅ |
| [CVE-2020-10177](https://www.mend.io/vulnerability-database/CVE-2020-10177) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | Pillow-5.1.0.tar.gz | Direct | 7.1.0 | ✅ |
| [CVE-2020-35655](https://www.mend.io/vulnerability-database/CVE-2020-35655) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.4 | Pillow-5.1.0.tar.gz | Direct | 8.1.0 | ✅ |
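Every fix resolution in the table above is satisfied by the highest listed version, 8.1.2, so remediation reduces to pinning `Pillow>=8.1.2` in the project's requirements. A minimal version-gate sketch (the comparison helper is illustrative; real projects should prefer `packaging.version.parse`):

```python
HIGHEST_FIX = "8.1.2"  # largest "Fixed in" version listed in the table

def is_fixed(installed: str, fixed: str = HIGHEST_FIX) -> bool:
    """Naive dotted-version comparison; assumes purely numeric release parts."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(fixed)

# The pinned 5.1.0 fails the gate; anything from 8.1.2 upward passes.
print(is_fixed("5.1.0"), is_fixed("8.1.2"))  # False True
```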
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-25289</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in Pillow before 8.1.1. TiffDecode has a heap-based buffer overflow when decoding crafted YCbCr files because of certain interpretation conflicts with LibTIFF in RGBA mode. NOTE: this issue exists because of an incomplete fix for CVE-2020-35654.
<p>Publish Date: Mar 19, 2021 4:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-25289>CVE-2021-25289</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: Mar 19, 2021 4:15:00 AM</p>
<p>Fix Resolution: 8.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-5311</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
libImaging/SgiRleDecode.c in Pillow before 6.2.2 has an SGI buffer overflow.
<p>Publish Date: Jan 3, 2020 1:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-5311>CVE-2020-5311</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5311">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5311</a></p>
<p>Release Date: Jul 10, 2020 5:06:00 PM</p>
<p>Fix Resolution: 6.2.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-5312</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
libImaging/PcxDecode.c in Pillow before 6.2.2 has a PCX P mode buffer overflow.
<p>Publish Date: Jan 3, 2020 1:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-5312>CVE-2020-5312</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5312">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5312</a></p>
<p>Release Date: Jul 10, 2020 5:09:00 PM</p>
<p>Fix Resolution: 6.2.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-5310</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
libImaging/TiffDecode.c in Pillow before 6.2.2 has a TIFF decoding integer overflow, related to realloc.
<p>Publish Date: Jan 3, 2020 1:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-5310>CVE-2020-5310</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5310">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5310</a></p>
<p>Release Date: Jan 31, 2020 4:15:00 AM</p>
<p>Fix Resolution: 6.2.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-11538</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In libImaging/SgiRleDecode.c in Pillow through 7.0.0, a number of out-of-bounds reads exist in the parsing of SGI image files, a different issue than CVE-2020-5311.
<p>Publish Date: Jun 25, 2020 7:15:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11538>CVE-2020-11538</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: Jul 27, 2020 7:15:00 PM</p>
<p>Fix Resolution: 7.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-10379</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Pillow before 7.1.0, there are two Buffer Overflows in libImaging/TiffDecode.c.
<p>Publish Date: Jun 25, 2020 7:15:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10379>CVE-2020-10379</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: Jul 27, 2020 7:15:00 PM</p>
<p>Fix Resolution: 7.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-19911</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
There is a DoS vulnerability in Pillow before 6.2.2 caused by FpxImagePlugin.py calling the range function on an unvalidated 32-bit integer if the number of bands is large. On Windows running 32-bit Python, this results in an OverflowError or MemoryError due to the 2 GB limit. However, on Linux running 64-bit Python this results in the process being terminated by the OOM killer.
<p>Publish Date: Jan 5, 2020 10:15:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19911>CVE-2019-19911</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: Aug 24, 2020 5:37:00 PM</p>
<p>Fix Resolution: 6.2.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-27923</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Pillow before 8.1.1 allows attackers to cause a denial of service (memory consumption) because the reported size of a contained image is not properly checked for an ICO container, and thus an attempted memory allocation can be very large.
<p>Publish Date: Mar 3, 2021 9:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-27923>CVE-2021-27923</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.2.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.2.html</a></p>
<p>Release Date: Mar 3, 2021 9:15:00 AM</p>
<p>Fix Resolution: 8.1.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-16865</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in Pillow before 6.2.0. When reading specially crafted invalid image files, the library can either allocate very large amounts of memory or take an extremely long period of time to process the image.
<p>Publish Date: Oct 4, 2019 10:15:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-16865>CVE-2019-16865</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16865">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16865</a></p>
<p>Release Date: Feb 18, 2020 4:15:00 PM</p>
<p>Fix Resolution: 6.2.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
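Besides upgrading, Pillow ships its own guard against images that decode to very large allocations: the `Image.MAX_IMAGE_PIXELS` setting, which makes `Image.open` warn past the limit and raise a `DecompressionBombError` past twice the limit. A minimal sketch of tightening it; the 20-megapixel cap is an illustrative value chosen here, not a Pillow default:

```python
def set_pixel_guard(limit: int = 20_000_000) -> bool:
    """Lower Pillow's decoded-pixel cap as defense in depth.

    Pillow warns when an image exceeds MAX_IMAGE_PIXELS and raises
    DecompressionBombError above twice that value. Returns False if
    Pillow is not importable in this environment.
    """
    try:
        from PIL import Image
    except ImportError:
        return False
    Image.MAX_IMAGE_PIXELS = limit  # default is roughly 178 million pixels
    return True

applied = set_pixel_guard()
```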
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-25290</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in Pillow before 8.1.1. In TiffDecode.c, there is a negative-offset memcpy with an invalid size.
<p>Publish Date: Mar 19, 2021 4:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-25290>CVE-2021-25290</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: Mar 19, 2021 4:15:00 AM</p>
<p>Fix Resolution: 8.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-25291</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in Pillow before 8.1.1. In TiffDecode.c, there is an out-of-bounds read in TiffreadRGBATile via invalid tile boundaries.
<p>Publish Date: Mar 19, 2021 4:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-25291>CVE-2021-25291</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: Mar 19, 2021 4:15:00 AM</p>
<p>Fix Resolution: 8.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-25293</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in Pillow before 8.1.1. There is an out-of-bounds read in SGIRleDecode.c.
<p>Publish Date: Mar 19, 2021 4:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-25293>CVE-2021-25293</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: Mar 19, 2021 4:15:00 AM</p>
<p>Fix Resolution: 8.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-27921</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Pillow before 8.1.1 allows attackers to cause a denial of service (memory consumption) because the reported size of a contained image is not properly checked for a BLP container, and thus an attempted memory allocation can be very large.
WhiteSource Note: After conducting further research, WhiteSource has determined that all versions of Pillow up to version 8.1.1 are vulnerable to CVE-2021-27921.
<p>Publish Date: Mar 3, 2021 9:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-27921>CVE-2021-27921</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.2.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.2.html</a></p>
<p>Release Date: Mar 3, 2021 9:15:00 AM</p>
<p>Fix Resolution: 8.1.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
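Taken together, the fix resolutions listed here top out at 8.1.2 (the ICO, ICNS, and BLP allocation issues), so a single version floor covers all of them at once. A minimal `requirements.txt` pin, assuming a pip-managed project:

```
# requirements.txt - floor at the highest fix resolution in this report
Pillow>=8.1.2
```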
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-27922</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Pillow before 8.1.1 allows attackers to cause a denial of service (memory consumption) because the reported size of a contained image is not properly checked for an ICNS container, and thus an attempted memory allocation can be very large.
<p>Publish Date: Mar 3, 2021 9:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-27922>CVE-2021-27922</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.2.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.2.html</a></p>
<p>Release Date: Mar 3, 2021 9:15:00 AM</p>
<p>Fix Resolution: 8.1.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-35653</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Pillow before 8.1.0, PcxDecode has a buffer over-read when decoding a crafted PCX file because the user-supplied stride value is trusted for buffer calculations.
<p>Publish Date: Jan 12, 2021 9:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-35653>CVE-2020-35653</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35653">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35653</a></p>
<p>Release Date: Jan 12, 2021 9:15:00 AM</p>
<p>Fix Resolution: 8.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-5313</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
libImaging/FliDecode.c in Pillow before 6.2.2 has an FLI buffer overflow.
<p>Publish Date: Jan 3, 2020 1:15:00 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-5313>CVE-2020-5313</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5313">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5313</a></p>
<p>Release Date: Feb 18, 2020 4:15:00 PM</p>
<p>Fix Resolution: 6.2.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-25292</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in Pillow before 8.1.1. The PDF parser allows a regular expression DoS (ReDoS) attack via a crafted PDF file because of a catastrophic backtracking regex.
<p>Publish Date: Mar 19, 2021 4:15:00 AM
<p>URL: <a href="https://www.mend.io/vulnerability-database/CVE-2021-25292">CVE-2021-25292</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: Mar 19, 2021 4:15:00 AM</p>
<p>Fix Resolution: 8.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
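CVE-2021-25292 above is a catastrophic-backtracking (ReDoS) bug. As a general illustration of that failure mode — this is NOT the actual regex from Pillow's PDF parser — a nested-quantifier pattern and its linear-time rewrite behave like this:

```python
import re

# Illustrative only: a classic catastrophic-backtracking shape,
# not the pattern that was in Pillow.
EVIL = re.compile(r"^(a+)+b$")   # nested quantifiers -> exponential backtracking
SAFE = re.compile(r"^a+b$")      # matches the same strings in linear time

text_ok = "a" * 10 + "b"
text_bad = "a" * 15              # no trailing "b": forces EVIL to backtrack

# Both patterns agree on the accepting input:
assert EVIL.match(text_ok) is not None
assert SAFE.match(text_ok) is not None

# On the failing input, EVIL explores on the order of 2**15 paths before
# giving up; at "a" * 40 it would effectively never return -- the DoS.
assert EVIL.match(text_bad) is None
assert SAFE.match(text_bad) is None
```

The 8.1.1 fix referenced below replaced the backtracking-prone pattern, which is why upgrading (rather than input filtering) is the suggested remediation.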
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-10994</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In libImaging/Jpeg2KDecode.c in Pillow before 7.1.0, there are multiple out-of-bounds reads via a crafted JP2 file.
<p>Publish Date: Jun 25, 2020 7:15:00 PM
<p>URL: <a href="https://www.mend.io/vulnerability-database/CVE-2020-10994">CVE-2020-10994</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: Jul 27, 2020 7:15:00 PM</p>
<p>Fix Resolution: 7.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-10378</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In libImaging/PcxDecode.c in Pillow before 7.1.0, an out-of-bounds read can occur when reading PCX files where state->shuffle is instructed to read beyond state->buffer.
<p>Publish Date: Jun 25, 2020 7:15:00 PM
<p>URL: <a href="https://www.mend.io/vulnerability-database/CVE-2020-10378">CVE-2020-10378</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: Jul 27, 2020 7:15:00 PM</p>
<p>Fix Resolution: 7.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-10177</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Pillow before 7.1.0 has multiple out-of-bounds reads in libImaging/FliDecode.c.
<p>Publish Date: Jun 25, 2020 7:15:00 PM
<p>URL: <a href="https://www.mend.io/vulnerability-database/CVE-2020-10177">CVE-2020-10177</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: Aug 8, 2020 8:15:00 PM</p>
<p>Fix Resolution: 7.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-35655</summary>
### Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /easybuggy4django-41</p>
<p>Path to vulnerable library: /easybuggy4django-41</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/easybuggy4django-41/commit/38a3155da23d81cc9375f9627133f9556f58a9ad">38a3155da23d81cc9375f9627133f9556f58a9ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Pillow before 8.1.0, SGIRleDecode has a 4-byte buffer over-read when decoding crafted SGI RLE image files because offsets and length tables are mishandled.
<p>Publish Date: Jan 12, 2021 9:15:00 AM
<p>URL: <a href="https://www.mend.io/vulnerability-database/CVE-2020-35655">CVE-2020-35655</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.4</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35655">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35655</a></p>
<p>Release Date: Jan 12, 2021 9:15:00 AM</p>
<p>Fix Resolution: 8.1.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | non_reli | 0 |
398 | 7,296,143,068 | IssuesEvent | 2018-02-26 09:47:31 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | Pod will be created again and again when we have not enough cpu. | area/reliability lifecycle/stale sig/node | ## Pods are created again and again when there is not enough CPU, and are never deleted.
#### version
```
[root@iZbp14tmy66i2l0ln0vwreZ ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
```
```
[root@iZbp14tmy66i2l0ln0vwreZ ~]# kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
```
```
[root@iZbp14tmy66i2l0ln0vwreZ ~]# docker version
Client:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
[root@iZbp14tmy66i2l0ln0vwreZ ~]# docker info
Containers: 16
Running: 14
Paused: 0
Stopped: 2
Images: 16
Server Version: 1.12.3
Storage Driver: devicemapper
Pool Name: docker-253:1-405252-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 2.742 GB
Data Space Total: 107.4 GB
Data Space Available: 29.22 GB
Metadata Space Used: 3.6 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.144 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2016-06-09)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge overlay host null
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-327.22.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 3.703 GiB
Name: iZbp14tmy66i2l0ln0vwreZ
ID: G3CE:GXII:N7FQ:CA27:AIZT:5MRI:GD2M:T4WU:MVAQ:E4VP:SFMX:5R6E
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
127.0.0.0/8
```
#### see output
```
[root@iZbp14tmy66i2l0ln0vwreZ ~]# kubectl --namespace=kube-system get po
NAME READY STATUS RESTARTS AGE
dummy-2340867639-qmggs 1/1 Running 0 55s
etcd-izbp14tmy66i2l0ln0vwrez 1/1 Running 0 1m
kube-apiserver-izbp14tmy66i2l0ln0vwrez 1/1 Running 0 1m
kube-controller-manager-izbp14tmy66i2l0ln0vwrez 1/1 Running 0 1m
kube-discovery-2798764060-4r96g 1/1 Running 0 52s
kube-dns-3611717927-038mq 0/4 OutOfcpu 0 21s
kube-dns-3611717927-14vn6 0/4 OutOfcpu 0 3s
kube-dns-3611717927-18xhh 0/4 OutOfcpu 0 27s
kube-dns-3611717927-1th2f 0/4 OutOfcpu 0 2s
kube-dns-3611717927-21mvg 0/4 OutOfcpu 0 28s
kube-dns-3611717927-2c1ln 0/4 OutOfcpu 0 23s
kube-dns-3611717927-2m25g 0/4 OutOfcpu 0 1s
kube-dns-3611717927-302st 0/4 OutOfcpu 0 27s
kube-dns-3611717927-34fc2 0/4 OutOfcpu 0 5s
kube-dns-3611717927-3kl3k 0/4 OutOfcpu 0 40s
kube-dns-3611717927-3lvzk 0/4 OutOfcpu 0 7s
kube-dns-3611717927-4cpjq 0/4 OutOfcpu 0 19s
kube-dns-3611717927-501ms 0/4 OutOfcpu 0 15s
kube-dns-3611717927-59t74 0/4 OutOfcpu 0 17s
kube-dns-3611717927-5jcgc 0/4 OutOfcpu 0 50s
kube-dns-3611717927-5sl2p 0/4 OutOfcpu 0 5s
kube-dns-3611717927-66l75 0/4 OutOfcpu 0 29s
kube-dns-3611717927-697wm 0/4 OutOfcpu 0 32s
kube-dns-3611717927-6xxmd 0/4 OutOfcpu 0 18s
kube-dns-3611717927-7pw27 0/4 OutOfcpu 0 25s
kube-dns-3611717927-7xhvx 0/4 OutOfcpu 0 41s
kube-dns-3611717927-8nvc4 0/4 OutOfcpu 0 36s
kube-dns-3611717927-8p2t3 0/4 OutOfcpu 0 26s
kube-dns-3611717927-8wxwv 0/4 OutOfcpu 0 50s
kube-dns-3611717927-978x6 0/4 OutOfcpu 0 11s
kube-dns-3611717927-9f71g 0/4 OutOfcpu 0 34s
kube-dns-3611717927-9mr86 0/4 OutOfcpu 0 21s
kube-dns-3611717927-9mthz 0/4 OutOfcpu 0 24s
kube-dns-3611717927-bjwrz 0/4 OutOfcpu 0 11s
kube-dns-3611717927-bw062 0/4 OutOfcpu 0 28s
kube-dns-3611717927-c7038 0/4 OutOfcpu 0 47s
kube-dns-3611717927-cgwdk 0/4 OutOfcpu 0 44s
kube-dns-3611717927-dkdn9 0/4 OutOfcpu 0 33s
kube-dns-3611717927-dtg6n 0/4 OutOfcpu 0 43s
kube-dns-3611717927-dz7dd 0/4 Pending 0 0s
kube-dns-3611717927-f33bg 0/4 OutOfcpu 0 7s
kube-dns-3611717927-f753b 0/4 OutOfcpu 0 44s
kube-dns-3611717927-ff2cs 0/4 OutOfcpu 0 31s
kube-dns-3611717927-fvbn9 0/4 OutOfcpu 0 19s
kube-dns-3611717927-j9323 0/4 OutOfcpu 0 31s
kube-dns-3611717927-j991n 0/4 OutOfcpu 0 29s
kube-dns-3611717927-jgzbp 0/4 OutOfcpu 0 13s
kube-dns-3611717927-l2g0t 0/4 OutOfcpu 0 34s
kube-dns-3611717927-l4t6t 0/4 OutOfcpu 0 10s
kube-dns-3611717927-lf63x 0/4 OutOfcpu 0 8s
kube-dns-3611717927-lhj3n 0/4 OutOfcpu 0 6s
kube-dns-3611717927-lw0rk 0/4 OutOfcpu 0 25s
kube-dns-3611717927-pbk96 0/4 OutOfcpu 0 38s
kube-dns-3611717927-pj8d7 0/4 OutOfcpu 0 42s
kube-dns-3611717927-pmmmt 0/4 OutOfcpu 0 35s
kube-dns-3611717927-pvxvw 0/4 OutOfcpu 0 9s
kube-dns-3611717927-pxm54 0/4 OutOfcpu 0 16s
kube-dns-3611717927-qhq5p 0/4 OutOfcpu 0 30s
kube-dns-3611717927-rwl3l 0/4 OutOfcpu 0 12s
kube-dns-3611717927-s1b8g 0/4 OutOfcpu 0 33s
kube-dns-3611717927-s7wmp 0/4 OutOfcpu 0 20s
kube-dns-3611717927-s9127 0/4 OutOfcpu 0 50s
kube-dns-3611717927-sffb4 0/4 OutOfcpu 0 42s
kube-dns-3611717927-sfsg5 0/4 OutOfcpu 0 18s
kube-dns-3611717927-sqwfw 0/4 OutOfcpu 0 10s
kube-dns-3611717927-stk1f 0/4 OutOfcpu 0 14s
kube-dns-3611717927-t1fdw 0/4 OutOfcpu 0 45s
kube-dns-3611717927-tcg2r 0/4 OutOfcpu 0 37s
kube-dns-3611717927-tf7w9 0/4 OutOfcpu 0 39s
kube-dns-3611717927-tk2zf 0/4 OutOfcpu 0 22s
kube-dns-3611717927-tkxjl 0/4 OutOfcpu 0 39s
kube-dns-3611717927-v4c5w 0/4 OutOfcpu 0 17s
kube-dns-3611717927-vhft0 0/4 OutOfcpu 0 49s
kube-dns-3611717927-wdgrd 0/4 OutOfcpu 0 9s
kube-dns-3611717927-x6hqb 0/4 OutOfcpu 0 36s
kube-dns-3611717927-xgnm3 0/4 OutOfcpu 0 49s
kube-dns-3611717927-xmh0l 0/4 OutOfcpu 0 15s
kube-dns-3611717927-xmp5c 0/4 OutOfcpu 0 4s
kube-dns-3611717927-xrlvb 0/4 OutOfcpu 0 23s
kube-proxy-q8xf9 1/1 Running 0 50s
kubernetes-dashboard-3761021483-6xl56 0/1 ContainerCreating 0 49s
``` | True | Pod will be created again and again when we have not enough cpu. - ## Pod will be created again and again when we have not enough cpu and will not be deleted.
#### version
```
[root@iZbp14tmy66i2l0ln0vwreZ ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
```
```
[root@iZbp14tmy66i2l0ln0vwreZ ~]# kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
```
```
[root@iZbp14tmy66i2l0ln0vwreZ ~]# docker version
Client:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
[root@iZbp14tmy66i2l0ln0vwreZ ~]# docker info
Containers: 16
Running: 14
Paused: 0
Stopped: 2
Images: 16
Server Version: 1.12.3
Storage Driver: devicemapper
Pool Name: docker-253:1-405252-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 2.742 GB
Data Space Total: 107.4 GB
Data Space Available: 29.22 GB
Metadata Space Used: 3.6 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.144 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2016-06-09)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge overlay host null
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-327.22.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 3.703 GiB
Name: iZbp14tmy66i2l0ln0vwreZ
ID: G3CE:GXII:N7FQ:CA27:AIZT:5MRI:GD2M:T4WU:MVAQ:E4VP:SFMX:5R6E
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
127.0.0.0/8
```
#### see output
```
[root@iZbp14tmy66i2l0ln0vwreZ ~]# kubectl --namespace=kube-system get po
NAME READY STATUS RESTARTS AGE
dummy-2340867639-qmggs 1/1 Running 0 55s
etcd-izbp14tmy66i2l0ln0vwrez 1/1 Running 0 1m
kube-apiserver-izbp14tmy66i2l0ln0vwrez 1/1 Running 0 1m
kube-controller-manager-izbp14tmy66i2l0ln0vwrez 1/1 Running 0 1m
kube-discovery-2798764060-4r96g 1/1 Running 0 52s
kube-dns-3611717927-038mq 0/4 OutOfcpu 0 21s
kube-dns-3611717927-14vn6 0/4 OutOfcpu 0 3s
kube-dns-3611717927-18xhh 0/4 OutOfcpu 0 27s
kube-dns-3611717927-1th2f 0/4 OutOfcpu 0 2s
kube-dns-3611717927-21mvg 0/4 OutOfcpu 0 28s
kube-dns-3611717927-2c1ln 0/4 OutOfcpu 0 23s
kube-dns-3611717927-2m25g 0/4 OutOfcpu 0 1s
kube-dns-3611717927-302st 0/4 OutOfcpu 0 27s
kube-dns-3611717927-34fc2 0/4 OutOfcpu 0 5s
kube-dns-3611717927-3kl3k 0/4 OutOfcpu 0 40s
kube-dns-3611717927-3lvzk 0/4 OutOfcpu 0 7s
kube-dns-3611717927-4cpjq 0/4 OutOfcpu 0 19s
kube-dns-3611717927-501ms 0/4 OutOfcpu 0 15s
kube-dns-3611717927-59t74 0/4 OutOfcpu 0 17s
kube-dns-3611717927-5jcgc 0/4 OutOfcpu 0 50s
kube-dns-3611717927-5sl2p 0/4 OutOfcpu 0 5s
kube-dns-3611717927-66l75 0/4 OutOfcpu 0 29s
kube-dns-3611717927-697wm 0/4 OutOfcpu 0 32s
kube-dns-3611717927-6xxmd 0/4 OutOfcpu 0 18s
kube-dns-3611717927-7pw27 0/4 OutOfcpu 0 25s
kube-dns-3611717927-7xhvx 0/4 OutOfcpu 0 41s
kube-dns-3611717927-8nvc4 0/4 OutOfcpu 0 36s
kube-dns-3611717927-8p2t3 0/4 OutOfcpu 0 26s
kube-dns-3611717927-8wxwv 0/4 OutOfcpu 0 50s
kube-dns-3611717927-978x6 0/4 OutOfcpu 0 11s
kube-dns-3611717927-9f71g 0/4 OutOfcpu 0 34s
kube-dns-3611717927-9mr86 0/4 OutOfcpu 0 21s
kube-dns-3611717927-9mthz 0/4 OutOfcpu 0 24s
kube-dns-3611717927-bjwrz 0/4 OutOfcpu 0 11s
kube-dns-3611717927-bw062 0/4 OutOfcpu 0 28s
kube-dns-3611717927-c7038 0/4 OutOfcpu 0 47s
kube-dns-3611717927-cgwdk 0/4 OutOfcpu 0 44s
kube-dns-3611717927-dkdn9 0/4 OutOfcpu 0 33s
kube-dns-3611717927-dtg6n 0/4 OutOfcpu 0 43s
kube-dns-3611717927-dz7dd 0/4 Pending 0 0s
kube-dns-3611717927-f33bg 0/4 OutOfcpu 0 7s
kube-dns-3611717927-f753b 0/4 OutOfcpu 0 44s
kube-dns-3611717927-ff2cs 0/4 OutOfcpu 0 31s
kube-dns-3611717927-fvbn9 0/4 OutOfcpu 0 19s
kube-dns-3611717927-j9323 0/4 OutOfcpu 0 31s
kube-dns-3611717927-j991n 0/4 OutOfcpu 0 29s
kube-dns-3611717927-jgzbp 0/4 OutOfcpu 0 13s
kube-dns-3611717927-l2g0t 0/4 OutOfcpu 0 34s
kube-dns-3611717927-l4t6t 0/4 OutOfcpu 0 10s
kube-dns-3611717927-lf63x 0/4 OutOfcpu 0 8s
kube-dns-3611717927-lhj3n 0/4 OutOfcpu 0 6s
kube-dns-3611717927-lw0rk 0/4 OutOfcpu 0 25s
kube-dns-3611717927-pbk96 0/4 OutOfcpu 0 38s
kube-dns-3611717927-pj8d7 0/4 OutOfcpu 0 42s
kube-dns-3611717927-pmmmt 0/4 OutOfcpu 0 35s
kube-dns-3611717927-pvxvw 0/4 OutOfcpu 0 9s
kube-dns-3611717927-pxm54 0/4 OutOfcpu 0 16s
kube-dns-3611717927-qhq5p 0/4 OutOfcpu 0 30s
kube-dns-3611717927-rwl3l 0/4 OutOfcpu 0 12s
kube-dns-3611717927-s1b8g 0/4 OutOfcpu 0 33s
kube-dns-3611717927-s7wmp 0/4 OutOfcpu 0 20s
kube-dns-3611717927-s9127 0/4 OutOfcpu 0 50s
kube-dns-3611717927-sffb4 0/4 OutOfcpu 0 42s
kube-dns-3611717927-sfsg5 0/4 OutOfcpu 0 18s
kube-dns-3611717927-sqwfw 0/4 OutOfcpu 0 10s
kube-dns-3611717927-stk1f 0/4 OutOfcpu 0 14s
kube-dns-3611717927-t1fdw 0/4 OutOfcpu 0 45s
kube-dns-3611717927-tcg2r 0/4 OutOfcpu 0 37s
kube-dns-3611717927-tf7w9 0/4 OutOfcpu 0 39s
kube-dns-3611717927-tk2zf 0/4 OutOfcpu 0 22s
kube-dns-3611717927-tkxjl 0/4 OutOfcpu 0 39s
kube-dns-3611717927-v4c5w 0/4 OutOfcpu 0 17s
kube-dns-3611717927-vhft0 0/4 OutOfcpu 0 49s
kube-dns-3611717927-wdgrd 0/4 OutOfcpu 0 9s
kube-dns-3611717927-x6hqb 0/4 OutOfcpu 0 36s
kube-dns-3611717927-xgnm3 0/4 OutOfcpu 0 49s
kube-dns-3611717927-xmh0l 0/4 OutOfcpu 0 15s
kube-dns-3611717927-xmp5c 0/4 OutOfcpu 0 4s
kube-dns-3611717927-xrlvb 0/4 OutOfcpu 0 23s
kube-proxy-q8xf9 1/1 Running 0 50s
kubernetes-dashboard-3761021483-6xl56 0/1 ContainerCreating 0 49s
``` | reli | pod will be created again and again when we have not enough cpu pod will be created again and again when we have not enough cpu and will not be delete version kubectl version client version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux server version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux kubeadm version kubeadm version version info major minor gitversion alpha gitcommit gittreestate clean builddate goversion compiler gc platform linux docker version client version api version go version git commit built os arch linux server version api version go version git commit built os arch linux docker info containers running paused stopped images server version storage driver devicemapper pool name docker pool pool blocksize kb base device size gb backing filesystem xfs data file dev metadata file dev data space used gb data space total gb data space available gb metadata space used mb metadata space total gb metadata space available gb thin pool minimum free space gb udev sync supported true deferred removal enabled false deferred deletion enabled false deferred deleted device count data loop file var lib docker devicemapper devicemapper data warning usage of loopback devices is strongly discouraged for production use use storage opt dm thinpooldev to specify a custom block storage device metadata loop file var lib docker devicemapper devicemapper metadata library version logging driver json file cgroup driver cgroupfs plugins volume local network bridge overlay host null swarm inactive runtimes runc default runtime runc security options seccomp kernel version operating system centos linux core ostype linux architecture cpus total memory gib name id gxii aizt mvaq sfmx docker root dir var lib docker debug mode client false debug mode server false registry warning bridge nf call is disabled insecure registries see output 
kubectl namespace kube system get po name ready status restarts age dummy qmggs running etcd running kube apiserver running kube controller manager running kube discovery running kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns bjwrz outofcpu kube dns outofcpu kube dns outofcpu kube dns cgwdk outofcpu kube dns outofcpu kube dns outofcpu kube dns pending kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns jgzbp outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns pmmmt outofcpu kube dns pvxvw outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns sqwfw outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns tkxjl outofcpu kube dns outofcpu kube dns outofcpu kube dns wdgrd outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns outofcpu kube dns xrlvb outofcpu kube proxy running kubernetes dashboard containercreating | 1 |
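The OutOfcpu flood in the record above is a symptom of a controller recreating every rejected pod immediately, with no delay between attempts. A minimal sketch of the capped exponential backoff that dampens this kind of churn (illustrative only; this is not the actual Kubernetes controller code, and the function name and defaults are assumptions):

```python
def backoff_delays(base=1.0, cap=30.0, attempts=6):
    """Yield capped exponential backoff delays (seconds) for retry attempts."""
    delay = base
    for _ in range(attempts):
        yield min(delay, cap)  # never wait longer than the cap
        delay *= 2             # double the wait after each failure

# With base=1s and cap=30s the retry delays grow 1, 2, 4, 8, 16, then cap at 30:
print(list(backoff_delays()))  # → [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

Spacing out recreation attempts like this keeps a single unschedulable replica from producing dozens of OutOfcpu pod objects per minute.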
580 | 8,701,780,659 | IssuesEvent | 2018-12-05 12:37:37 | Cha-OS/colabo | https://api.github.com/repos/Cha-OS/colabo | opened | data preserving and reusage - bandwidth issue | IMPORTANT enhancement performance refactoring reliability | as I've done in
`colabo/src/frontend/dev_puzzles/rima/aaa/rima-aaa.service.ts:: getUsersInActiveMap`
we can use previously loaded data again for different components | True | data preserving and reusage - bandwidth issue - as I've done in
`colabo/src/frontend/dev_puzzles/rima/aaa/rima-aaa.service.ts:: getUsersInActiveMap`
we can use previously loaded data again for different components | reli | data preserving and reusage bandwidth issue as i ve done in colabo src frontend dev puzzles rima aaa rima aaa service ts getusersinactivemap we can use previously loaded data again for different components | 1 |
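The reuse pattern described in the record above, keeping data already fetched for one component and serving it to later callers instead of re-requesting it, can be sketched language-agnostically (the Colabo code itself is TypeScript; this Python sketch and all its names are illustrative assumptions):

```python
class CachedLoader:
    """Serve previously loaded data to later callers instead of re-fetching."""

    def __init__(self, fetch):
        self._fetch = fetch          # the expensive load (e.g. an HTTP request)
        self._cache = {}             # key -> previously loaded result

    def get(self, key):
        if key not in self._cache:   # only the first caller pays the cost
            self._cache[key] = self._fetch(key)
        return self._cache[key]

calls = []
def fetch_users(map_id):
    calls.append(map_id)             # track how many real requests happen
    return [f"user-of-{map_id}"]

loader = CachedLoader(fetch_users)
loader.get("map-1")                  # first component triggers the fetch
loader.get("map-1")                  # second component reuses the cached data
print(len(calls))                    # → 1
```

The bandwidth saving comes directly from the second `get` never reaching the backend.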
2,942 | 30,476,993,949 | IssuesEvent | 2023-07-17 17:15:48 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Issue in analyzer callbacks for nested actions in symbol start analyzers | Bug Area-Analyzers Tenet-Reliability untriaged | Issue found by the unit test skipped in https://github.com/dotnet/roslyn/pull/68385/commits/2e0f6b90a568b1e0033f9ed785ff22c8a8fe96af.
Core issue is as follows:
1. Create a symbol start analyzer that registers nested operation actions for code within named types
2. Register symbol end action for the outer type, but not for the nested type
3. Race condition: Symbol end action callback for the outer type gets invoked before the nested operation callback for executable code within the nested type.
Note that the above does not repro if the analyzer registers a symbol end action also for the nested type. | True | Issue in analyzer callbacks for nested actions in symbol start analyzers - Issue found by the unit test skipped in https://github.com/dotnet/roslyn/pull/68385/commits/2e0f6b90a568b1e0033f9ed785ff22c8a8fe96af.
Core issue is as follows:
1. Create a symbol start analyzer that registers nested operation actions for code within named types
2. Register symbol end action for the outer type, but not for the nested type
3. Race condition: Symbol end action callback for the outer type gets invoked before the nested operation callback for executable code within the nested type.
Note that the above does not repro if the analyzer registers a symbol end action also for the nested type. | reli | issue in analyzer callbacks for nested actions in symbol start analyzers issue found by the unit test skipped in core issue is as follows create a symbol start analyzer that registers nested operation actions for code within named types register symbol end action for the outer type but not for the nested type race condition symbol end action callback for the outer type gets invoked before the nested operation callback for executable code within the nested type note that the above does not repro if the analyzer registers a symbol end action also for the nested type | 1 |
775 | 10,476,332,464 | IssuesEvent | 2019-09-23 18:21:21 | microsoft/BotFramework-DirectLineJS | https://api.github.com/repos/microsoft/BotFramework-DirectLineJS | opened | Unhappy path: resume interruption, after token is renewed | 0 Reliability 0 Streaming Extensions | > This can be folded into the previous "expired token" test, but must be clearly separated for their test expectations.
1. Start a conversation
1. Kill the connection by letting the token expire
1. Renew the token
1. Resume the conversation with same conversation ID and renewed token
Make sure:
- The bot and client can communicate with each other both ways. | True | Unhappy path: resume interruption, after token is renewed - > This can be folded into the previous "expired token" test, but must be clearly separated for their test expectations.
1. Start a conversation
1. Kill the connection by letting the token expire
1. Renew the token
1. Resume the conversation with same conversation ID and renewed token
Make sure:
- The bot and client can communicate with each other both ways. | reli | unhappy path resume interruption after token is renewed this can be folded into the previous expired token test but must be clearly separated for their test expectations start a conversation kill the connection by letting the token expire renew the token resume the conversation with same conversation id and renewed token make sure the bot and client can communicate with each other both ways | 1 |
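The client-side behaviour that test exercises, detecting the expired token, renewing it, and resuming the same conversation ID, can be sketched with a toy stand-in service (the class and method names below are made up for illustration; they are not the DirectLineJS API):

```python
class FakeDirectLine:
    """Toy stand-in for a Direct Line service that rejects expired tokens."""

    def __init__(self):
        self.valid_tokens = {"token-1"}

    def post(self, conversation_id, token, text):
        if token not in self.valid_tokens:
            return 403                      # token expired
        return 200

    def renew(self, old_token):
        new_token = old_token + "-renewed"  # same conversation, fresh token
        self.valid_tokens.add(new_token)
        return new_token

def send_with_renew(service, conversation_id, token, text):
    """Post an activity; on 403, renew the token once and resume."""
    status = service.post(conversation_id, token, text)
    if status == 403:
        token = service.renew(token)
        status = service.post(conversation_id, token, text)
    return status, token

svc = FakeDirectLine()
svc.valid_tokens.clear()                    # simulate the token expiring
status, token = send_with_renew(svc, "conv-42", "token-1", "hello")
print(status, token)                        # → 200 token-1-renewed
```

The key expectation matches the issue: the conversation ID never changes, only the token does.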
8,890 | 12,391,646,360 | IssuesEvent | 2020-05-20 12:50:39 | loic-lopez/UMVC | https://api.github.com/repos/loic-lopez/UMVC | closed | Improve code coverage on UMVC.Editor | Requirement UMVC Editor enhancement | Improve code coverage by adding tests to:
- [x] CreateSettingsWindow
- [x] CreateMVCWindow
- [x] Singleton.UMVC
- [x] UMVC.Editor.Extensions namespace
- [x] Asset.cs
- [x] WindowsManager.cs
- [x] Abstracts/Window.cs
See uncovered files at: [codecov.io/loic-lopez/UMVC/Editor](https://codecov.io/gh/loic-lopez/UMVC/tree/master/Assets/UMVC/Editor) | 1.0 | Improve code coverage on UMVC.Editor - Improve code coverage by adding tests to:
- [x] CreateSettingsWindow
- [x] CreateMVCWindow
- [x] Singleton.UMVC
- [x] UMVC.Editor.Extensions namespace
- [x] Asset.cs
- [x] WindowsManager.cs
- [x] Abstracts/Window.cs
See uncovered files at: [codecov.io/loic-lopez/UMVC/Editor](https://codecov.io/gh/loic-lopez/UMVC/tree/master/Assets/UMVC/Editor) | non_reli | improve code coverage on umvc editor improve code coverage by adding tests to createsettingswindow createmvcwindow singleton umvc umvc editor extensions namespace asset cs windowsmanager cs abstracts window cs see uncovered files at | 0 |
212,072 | 7,228,125,457 | IssuesEvent | 2018-02-11 05:16:25 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | How to make CNI work with redhat/openshift-ovs-multitenant? | component/networking kind/question lifecycle/stale priority/P3 | I'm trying to configure a cluster manually (not using Ansible) as I described in https://github.com/openshift/origin/issues/13959. But I couldn't find how to configure the network for communication between master and nodes.
Can anyone help me about how to configure a network plugin in Origin?
##### Version
openshift v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4
etcd 3.1.0
CentOS 7
Docker 17.03.1-ce
| 1.0 | How to make CNI work with redhat/openshift-ovs-multitenant? - I'm trying to configure a cluster manually (not using Ansible) as I described in https://github.com/openshift/origin/issues/13959. But I couldn't find how to configure the network for communication between master and nodes.
Can anyone help me about how to configure a network plugin in Origin?
##### Version
openshift v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4
etcd 3.1.0
CentOS 7
Docker 17.03.1-ce
| non_reli | how to make cni work with redhat openshift ovs multitenant i m trying to configure a cluster manually not using ansible as i described in but i didn t found how to configure the network for communication between master and nodes can anyone help me about how to configure a network plugin in origin version openshift kubernetes etcd centos docker ce | 0 |
641,277 | 20,823,094,221 | IssuesEvent | 2022-03-18 17:23:52 | googleapis/nodejs-logging-winston | https://api.github.com/repos/googleapis/nodejs-logging-winston | closed | Add support to print structured logging to STDOUT | priority: p2 type: feature request api: logging | **Is your feature request related to a problem? Please describe.**
There are problems reported by users about inability to flush logging data in serverless environments like Cloud Functions reported in [598](https://github.com/googleapis/nodejs-logging-winston/issues/598).
**Describe the solution you'd like**
Add support to print structured logging to STDOUT.
**Describe alternatives you've considered**
There are no great alternatives - given that serverless execution is considered short-lived, there is a higher possibility of failing to send logs during function/process termination. Integrating with Google Cloud Agents through STDOUT could reduce logging loss | 1.0 | Add support to print structured logging to STDOUT - **Is your feature request related to a problem? Please describe.**
There are problems reported by users about inability to flush logging data in serverless environments like Cloud Functions reported in [598](https://github.com/googleapis/nodejs-logging-winston/issues/598).
**Describe the solution you'd like**
Add support to print structured logging to STDOUT.
**Describe alternatives you've considered**
There are no great alternatives - given that serverless execution is considered short-lived, there is a higher possibility of failing to send logs during function/process termination. Integrating with Google Cloud Agents through STDOUT could reduce logging loss | non_reli | add support to print structured logging to stdout is your feature request related to a problem please describe there are problems reported by users about inability to flush logging data in serverless environments like cloud functions reported in describe the solution you d like add support to print structured logging to stdout describe alternatives you ve considered there are no great alternatives giving a fact that serverless execution considered as short living there are more possibility for failures to send logs during function process termination integrating with google cloud agents through stdout could reduce logging loss | 0 |
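The pattern this feature request asks for, emitting one structured JSON entry per line on STDOUT so the logging agent ingests it without any flush-on-exit race, is language-agnostic. A minimal Python sketch (the record's library is Node.js; the `severity`/`message` field names follow the common structured-logging convention, but treat the exact schema here as an assumption):

```python
import json
import sys

def log_struct(severity, message, **fields):
    """Write one structured log entry as a single JSON line on stdout."""
    entry = {"severity": severity, "message": message, **fields}
    line = json.dumps(entry)
    sys.stdout.write(line + "\n")  # the agent ingests stdout line by line
    return line                    # returned so callers/tests can inspect it

log_struct("ERROR", "upload failed", requestId="abc-123")
```

Because the entry is flushed to STDOUT synchronously, nothing is lost when the serverless process is frozen or terminated right after the call.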
2,459 | 25,525,379,070 | IssuesEvent | 2022-11-29 01:28:45 | NVIDIA/spark-rapids | https://api.github.com/repos/NVIDIA/spark-rapids | closed | [FEA] scans should log what file(s) caused an exception in cuDF | good first issue reliability | When cuDF fails to process a file (or files in the coalescing/cloud readers) we don't always know what file we need to look into easily.
Our readers should try/catch calls to cuDF and log the original exception from cuDF but also add what file(s) were being read and any other pertinent metadata (I'd like to see file size for example, or specifics of the file format that are available at the time of the cuDF call). | True | [FEA] scans should log what file(s) caused an exception in cuDF - When cuDF fails to process a file (or files in the coalescing/cloud readers) we don't always know what file we need to look into easily.
Our readers should try/catch calls to cuDF and log the original exception from cuDF but also add what file(s) were being read and any other pertinent metadata (I'd like to see file size for example, or specifics of the file format that are available at the time of the cuDF call). | reli | scans should log what file s caused an exception in cudf when cudf fails to process a file or files in the coalescing cloud readers we don t always know what file we need to look into easily our readers should try catch calls to cudf and log the original exception from cudf but also add what file s were being read and any other pertinent metadata i d like to see file size for example or specifics of the file format that are available at the time of the cudf call | 1 |
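The fix that issue asks for can be sketched generically: wrap the low-level read and re-raise with the file path and size attached, so the failing file is obvious from the log. Python is used here for brevity; the real change would live in the plugin's Scala readers, and `read_file` is a hypothetical stand-in for the cuDF call:

```python
def read_with_context(path, read_file, file_size):
    """Call the underlying reader; on failure, attach file metadata."""
    try:
        return read_file(path)
    except Exception as e:
        # Preserve the original exception as the cause, but add the
        # file-identifying metadata the issue asks for.
        raise RuntimeError(
            f"Error while reading {path} (size={file_size} bytes): {e}"
        ) from e

def broken_reader(path):
    raise ValueError("corrupt footer")   # stand-in for a cuDF failure

try:
    read_with_context("/data/part-0001.parquet", broken_reader, 4096)
except RuntimeError as err:
    print(err)  # → Error while reading /data/part-0001.parquet (size=4096 bytes): corrupt footer
```

Chaining with `from e` keeps the original cuDF stack trace while the wrapper message names the offending file.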
128 | 4,122,253,252 | IssuesEvent | 2016-06-09 00:57:21 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | closed | SIGSEGV_libcoreclr.so!AllocateObject | blocking-release bug GC reliability | **The notes in this bug refer to the Ubuntu.14.04 dump [wj3wzs5j](https://rapreqs.blob.core.windows.net/bryanar/BodyPart_4fa8883e-f6d9-47b1-9c02-2d1ce6156ddb?sv=2015-04-05&sr=b&sig=UZCJEECgbzT09rj4horT5AyCQWb%2FtHO87O0J7euBYbg%3D&st=2016-05-20T21%3A11%3A44Z&se=2017-05-20T21%3A11%3A44Z&sp=r). Other dumps are available if needed. To repro this issue you will likely have to disable the OOM killer and disable memory overcommit using the following directions**
Append these lines to /etc/sysctl.conf:
vm.oom-kill = 0
vm.overcommit_memory = 2
**The issue here appears to be rooted in libcoreclr.so`WKS::GCHeap::Alloc. It seems that GCHeap::Alloc is failing due to an OOM; however, it does not return NULL after failing to allocate. Instead it appears to return a bogus address which is not actually in the address space of the process. This in turn causes the calling code to SIGSEGV**
**Looking at the dump we can see we are in an EH path which is trying to allocate an exception. This calls into libcoreclr.so!AllocateObject, which fails due to a SIGSEGV**
STOP_REASON:
SIGSEGV
FAULT_SYMBOL:
libcoreclr.so!AllocateObject
FAILURE_HASH:
SIGSEGV_libcoreclr.so!AllocateObject
FAULT_STACK:
libcoreclr.so!AllocateObject(MethodTable*)
libcoreclr.so!EEException::CreateThrowable()
libcoreclr.so!CreateCOMPlusExceptionObject(Thread*, _EXCEPTION_RECORD*, int)
libcoreclr.so!ExceptionTracker::GetOrCreateTracker(unsigned long, StackFrame, _EXCEPTION_RECORD*, _CONTEXT*, int, bool, ExceptionTracker::StackTraceState*)
libcoreclr.so!ProcessCLRException
libcoreclr.so!UnwindManagedExceptionPass1(PAL_SEHException&, _CONTEXT*)
libcoreclr.so!DispatchManagedException(PAL_SEHException&)
libcoreclr.so!HandleHardwareException(PAL_SEHException*)
libcoreclr.so!SEHProcessException(_EXCEPTION_POINTERS*)
libcoreclr.so!common_signal_handler(int, siginfo_t*, void*, int, ...)
libcoreclr.so!sigsegv_handler(int, siginfo_t*, void*)
libclrjit.so!sigsegv_handler(int, siginfo_t*, void*)
libpthread.so.0!__lll_unlock_wake
mscorlib.ni.dll!System.Reflection.Emit.DynamicMethod.GetMethodDescriptor()
mscorlib.ni.dll!System.Reflection.Emit.DynamicMethod.CreateDelegate(System.Type, System.Object)
System.Linq.Expressions.dll!System.Linq.Expressions.Compiler.LambdaCompiler.Compile(System.Linq.Expressions.LambdaExpression)
System.Linq.Expressions.dll!System.Linq.Expressions.Expression`1[[System.__Canon, mscorlib]].Compile(Boolean)
System.Linq.Queryable.dll!System.Linq.EnumerableExecutor`1[[System.Int32, mscorlib]].Execute()
System.Linq.Queryable.dll!DomainBoundILStubClass.IL_STUB_InstantiatingStub(System.Linq.Expressions.Expression)
System.Linq.Queryable.dll!System.Linq.Queryable.Count[[System.__Canon, mscorlib]](System.Linq.IQueryable`1<System.__Canon>)
System.Linq.Queryable.Tests.dll!System.Linq.Tests.GroupByTests.GroupBy3()
24117-00_0003.exe!stress.generated.UnitTests.UT5C()
stress.execution.dll!stress.execution.UnitTest.Execute()
stress.execution.dll!stress.execution.DedicatedThreadWorkerStrategy.RunWorker(stress.execution.ITestPattern, System.Threading.CancellationToken)
stress.execution.dll!stress.execution.DedicatedThreadWorkerStrategy+<>c__DisplayClass1_0.<SpawnWorker>b__0()
mscorlib.ni.dll!System.Threading.Tasks.Task.Execute()
mscorlib.ni.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
mscorlib.ni.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
mscorlib.ni.dll!System.Threading.Tasks.Task.ExecuteEntry(Boolean)
mscorlib.ni.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
FAULT_THREAD:
thread #1: tid = 9146, 0x00007ff7fe103318 libcoreclr.so`AllocateObject(MethodTable*) + 184, name = 'corerun', stop reason = signal SIGSEGV
LAST_EXCEPTION:
There is no current managed exception on this thread
**Disassembling libcoreclr.so!AllocateObject shows that we are calling into libcoreclr.so!WKS::GCHeap::Alloc, which actually returns a non-NULL address (stored in r15). However, the address doesn't appear to be in the address space of the process, so when we dereference it we SIGSEGV.**
(lldb) disassemble -F intel
libcoreclr.so`AllocateObject:
0x7ff7fe103260 <+0>: push rbp
0x7ff7fe103261 <+1>: mov rbp, rsp
0x7ff7fe103264 <+4>: push r15
0x7ff7fe103266 <+6>: push r14
0x7ff7fe103268 <+8>: push r13
0x7ff7fe10326a <+10>: push r12
0x7ff7fe10326c <+12>: push rbx
0x7ff7fe10326d <+13>: sub rsp, 0x48
0x7ff7fe103271 <+17>: mov r14, rdi
0x7ff7fe103274 <+20>: mov rax, qword ptr fs:[0x28]
0x7ff7fe10327d <+29>: mov qword ptr [rbp - 0x30], rax
0x7ff7fe103281 <+33>: lea rax, [rip + 0x7012a8] ; g_IBCLogger
0x7ff7fe103288 <+40>: cmp dword ptr [rax], 0x0
0x7ff7fe10328b <+43>: je 0x7ff7fe103295 ; <+53>
0x7ff7fe10328d <+45>: mov rdi, r14
0x7ff7fe103290 <+48>: call 0x7ff7fe0262f0 ; IBCLogger::LogMethodTableAccessStatic(void const*)
0x7ff7fe103295 <+53>: lea rdi, [rip + 0x6f0be3] ; + 296
0x7ff7fe10329d <+61>: call 0x7ff7fdfe56e0 ; symbol stub for: __tls_get_addr
0x7ff7fe1032a5 <+69>: mov rax, qword ptr [rax]
0x7ff7fe1032a8 <+72>: mov qword ptr [rax + 0x98], r14
0x7ff7fe1032af <+79>: mov ebx, dword ptr [r14]
0x7ff7fe1032b2 <+82>: test ebx, 0x8000000
0x7ff7fe1032b8 <+88>: je 0x7ff7fe1032c7 ; <+103>
0x7ff7fe1032ba <+90>: xor esi, esi
0x7ff7fe1032bc <+92>: mov rdi, r14
0x7ff7fe1032bf <+95>: call 0x7ff7fe0ea280 ; PrepareCriticalFinalizerObject(MethodTable*, Module*)
0x7ff7fe1032c4 <+100>: mov ebx, dword ptr [r14]
0x7ff7fe1032c7 <+103>: mov r12d, dword ptr [r14 + 0x4]
0x7ff7fe1032cb <+107>: lea rax, [rip + 0x6fbab6] ; g_pGCHeap
0x7ff7fe1032d2 <+114>: mov r15, qword ptr [rax]
0x7ff7fe1032d5 <+117>: mov eax, ebx
0x7ff7fe1032d7 <+119>: shr eax, 0x17
0x7ff7fe1032da <+122>: and eax, 0x2
0x7ff7fe1032dd <+125>: shr ebx, 0x14
0x7ff7fe1032e0 <+128>: and ebx, 0x1
0x7ff7fe1032e3 <+131>: or ebx, eax
0x7ff7fe1032e5 <+133>: mov r13, qword ptr [r15]
0x7ff7fe1032e8 <+136>: lea rdi, [rip + 0x6f0b90] ; + 296
0x7ff7fe1032f0 <+144>: call 0x7ff7fdfe56e0 ; symbol stub for: __tls_get_addr
0x7ff7fe1032f8 <+152>: mov rsi, qword ptr [rax]
0x7ff7fe1032fb <+155>: add rsi, 0x60
0x7ff7fe1032ff <+159>: mov rdi, r15
0x7ff7fe103302 <+162>: mov rdx, r12
0x7ff7fe103305 <+165>: mov ecx, ebx
0x7ff7fe103307 <+167>: call qword ptr [r13 + 0x98] //CALL TO libcoreclr.so`WKS::GCHeap::Alloc
0x7ff7fe10330e <+174>: mov r15, rax //Return from libcoreclr.so`WKS::GCHeap::Alloc is stored in r15
0x7ff7fe103311 <+177>: cmp r12, 0x14c08
-> 0x7ff7fe103318 <+184>: mov qword ptr [r15], r14 //SIGSEGV when dereferencing r15
(lldb) register read r15
r15 = 0x00007ff768d38068
(lldb) memory read -f x -s 8 -c 8 0x00007ff768d38068
error: core file does not contain 0x7ff768d38068 | True | SIGSEGV_libcoreclr.so!AllocateObject - **The notes in this bug refer to the Ubuntu.14.04 dump [wj3wzs5j](https://rapreqs.blob.core.windows.net/bryanar/BodyPart_4fa8883e-f6d9-47b1-9c02-2d1ce6156ddb?sv=2015-04-05&sr=b&sig=UZCJEECgbzT09rj4horT5AyCQWb%2FtHO87O0J7euBYbg%3D&st=2016-05-20T21%3A11%3A44Z&se=2017-05-20T21%3A11%3A44Z&sp=r). Other dumps are available if needed. To repro this issue you will likely have to disable the OOM killer and disable memory overcommit using the following directions**
Append these lines to /etc/sysctl.conf:
vm.oom-kill = 0
vm.overcommit_memory = 2
**The issue here appears to be rooted in libcoreclr.so`WKS::GCHeap::Alloc. It seems that GCHeap::Alloc is failing due to an OOM; however, it does not return NULL after failing to allocate. Instead it appears to return a bogus address that is not actually in the address space of the process, which in turn causes the calling code to SIGSEGV.**
**Looking at the dump, we can see we are in an EH path that is trying to allocate an exception object. This calls into libcoreclr.so!AllocateObject, which itself fails with a SIGSEGV.**
STOP_REASON:
SIGSEGV
FAULT_SYMBOL:
libcoreclr.so!AllocateObject
FAILURE_HASH:
SIGSEGV_libcoreclr.so!AllocateObject
FAULT_STACK:
libcoreclr.so!AllocateObject(MethodTable*)
libcoreclr.so!EEException::CreateThrowable()
libcoreclr.so!CreateCOMPlusExceptionObject(Thread*, _EXCEPTION_RECORD*, int)
libcoreclr.so!ExceptionTracker::GetOrCreateTracker(unsigned long, StackFrame, _EXCEPTION_RECORD*, _CONTEXT*, int, bool, ExceptionTracker::StackTraceState*)
libcoreclr.so!ProcessCLRException
libcoreclr.so!UnwindManagedExceptionPass1(PAL_SEHException&, _CONTEXT*)
libcoreclr.so!DispatchManagedException(PAL_SEHException&)
libcoreclr.so!HandleHardwareException(PAL_SEHException*)
libcoreclr.so!SEHProcessException(_EXCEPTION_POINTERS*)
libcoreclr.so!common_signal_handler(int, siginfo_t*, void*, int, ...)
libcoreclr.so!sigsegv_handler(int, siginfo_t*, void*)
libclrjit.so!sigsegv_handler(int, siginfo_t*, void*)
libpthread.so.0!__lll_unlock_wake
mscorlib.ni.dll!System.Reflection.Emit.DynamicMethod.GetMethodDescriptor()
mscorlib.ni.dll!System.Reflection.Emit.DynamicMethod.CreateDelegate(System.Type, System.Object)
System.Linq.Expressions.dll!System.Linq.Expressions.Compiler.LambdaCompiler.Compile(System.Linq.Expressions.LambdaExpression)
System.Linq.Expressions.dll!System.Linq.Expressions.Expression`1[[System.__Canon, mscorlib]].Compile(Boolean)
System.Linq.Queryable.dll!System.Linq.EnumerableExecutor`1[[System.Int32, mscorlib]].Execute()
System.Linq.Queryable.dll!DomainBoundILStubClass.IL_STUB_InstantiatingStub(System.Linq.Expressions.Expression)
System.Linq.Queryable.dll!System.Linq.Queryable.Count[[System.__Canon, mscorlib]](System.Linq.IQueryable`1<System.__Canon>)
System.Linq.Queryable.Tests.dll!System.Linq.Tests.GroupByTests.GroupBy3()
24117-00_0003.exe!stress.generated.UnitTests.UT5C()
stress.execution.dll!stress.execution.UnitTest.Execute()
stress.execution.dll!stress.execution.DedicatedThreadWorkerStrategy.RunWorker(stress.execution.ITestPattern, System.Threading.CancellationToken)
stress.execution.dll!stress.execution.DedicatedThreadWorkerStrategy+<>c__DisplayClass1_0.<SpawnWorker>b__0()
mscorlib.ni.dll!System.Threading.Tasks.Task.Execute()
mscorlib.ni.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
mscorlib.ni.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
mscorlib.ni.dll!System.Threading.Tasks.Task.ExecuteEntry(Boolean)
mscorlib.ni.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
FAULT_THREAD:
thread #1: tid = 9146, 0x00007ff7fe103318 libcoreclr.so`AllocateObject(MethodTable*) + 184, name = 'corerun', stop reason = signal SIGSEGV
LAST_EXCEPTION:
There is no current managed exception on this thread
**Disassembling libcoreclr.so!AllocateObject shows that we are calling into libcoreclr.so!WKS::GCHeap::Alloc, which actually returns a non-NULL address (stored in r15). However, the address doesn't appear to be in the address space of the process, so when we dereference it we SIGSEGV.**
(lldb) disassemble -F intel
libcoreclr.so`AllocateObject:
0x7ff7fe103260 <+0>: push rbp
0x7ff7fe103261 <+1>: mov rbp, rsp
0x7ff7fe103264 <+4>: push r15
0x7ff7fe103266 <+6>: push r14
0x7ff7fe103268 <+8>: push r13
0x7ff7fe10326a <+10>: push r12
0x7ff7fe10326c <+12>: push rbx
0x7ff7fe10326d <+13>: sub rsp, 0x48
0x7ff7fe103271 <+17>: mov r14, rdi
0x7ff7fe103274 <+20>: mov rax, qword ptr fs:[0x28]
0x7ff7fe10327d <+29>: mov qword ptr [rbp - 0x30], rax
0x7ff7fe103281 <+33>: lea rax, [rip + 0x7012a8] ; g_IBCLogger
0x7ff7fe103288 <+40>: cmp dword ptr [rax], 0x0
0x7ff7fe10328b <+43>: je 0x7ff7fe103295 ; <+53>
0x7ff7fe10328d <+45>: mov rdi, r14
0x7ff7fe103290 <+48>: call 0x7ff7fe0262f0 ; IBCLogger::LogMethodTableAccessStatic(void const*)
0x7ff7fe103295 <+53>: lea rdi, [rip + 0x6f0be3] ; + 296
0x7ff7fe10329d <+61>: call 0x7ff7fdfe56e0 ; symbol stub for: __tls_get_addr
0x7ff7fe1032a5 <+69>: mov rax, qword ptr [rax]
0x7ff7fe1032a8 <+72>: mov qword ptr [rax + 0x98], r14
0x7ff7fe1032af <+79>: mov ebx, dword ptr [r14]
0x7ff7fe1032b2 <+82>: test ebx, 0x8000000
0x7ff7fe1032b8 <+88>: je 0x7ff7fe1032c7 ; <+103>
0x7ff7fe1032ba <+90>: xor esi, esi
0x7ff7fe1032bc <+92>: mov rdi, r14
0x7ff7fe1032bf <+95>: call 0x7ff7fe0ea280 ; PrepareCriticalFinalizerObject(MethodTable*, Module*)
0x7ff7fe1032c4 <+100>: mov ebx, dword ptr [r14]
0x7ff7fe1032c7 <+103>: mov r12d, dword ptr [r14 + 0x4]
0x7ff7fe1032cb <+107>: lea rax, [rip + 0x6fbab6] ; g_pGCHeap
0x7ff7fe1032d2 <+114>: mov r15, qword ptr [rax]
0x7ff7fe1032d5 <+117>: mov eax, ebx
0x7ff7fe1032d7 <+119>: shr eax, 0x17
0x7ff7fe1032da <+122>: and eax, 0x2
0x7ff7fe1032dd <+125>: shr ebx, 0x14
0x7ff7fe1032e0 <+128>: and ebx, 0x1
0x7ff7fe1032e3 <+131>: or ebx, eax
0x7ff7fe1032e5 <+133>: mov r13, qword ptr [r15]
0x7ff7fe1032e8 <+136>: lea rdi, [rip + 0x6f0b90] ; + 296
0x7ff7fe1032f0 <+144>: call 0x7ff7fdfe56e0 ; symbol stub for: __tls_get_addr
0x7ff7fe1032f8 <+152>: mov rsi, qword ptr [rax]
0x7ff7fe1032fb <+155>: add rsi, 0x60
0x7ff7fe1032ff <+159>: mov rdi, r15
0x7ff7fe103302 <+162>: mov rdx, r12
0x7ff7fe103305 <+165>: mov ecx, ebx
0x7ff7fe103307 <+167>: call qword ptr [r13 + 0x98] //CALL TO libcoreclr.so`WKS::GCHeap::Alloc
0x7ff7fe10330e <+174>: mov r15, rax //Return from libcoreclr.so`WKS::GCHeap::Alloc is stored in r15
0x7ff7fe103311 <+177>: cmp r12, 0x14c08
-> 0x7ff7fe103318 <+184>: mov qword ptr [r15], r14 //SIGSEGV when dereferencing r15
(lldb) register read r15
r15 = 0x00007ff768d38068
(lldb) memory read -f x -s 8 -c 8 0x00007ff768d38068
error: core file does not contain 0x7ff768d38068 | reli | sigsegv libcoreclr so allocateobject the notes in this bug refer to the ubuntu dump other dumps are available if needed to repro this issue you will likely have to disaable the oom killer and dissable memory overcommit using the following directions append these lines to etc sysctl conf vm oom kill vm overcommit memory the issue here appears to be actually rooted in libcoreclr so wks gcheap alloc it seems that gcheap alloc is failing due to an oom however it does not return null after failing to allocate instead it appears to return a bogus address which is not actually in the address space of the process this in turn causes calling code to segsiv looking at the dump we can see we are in an eh path which is trying to allocate an exception this calls into libcoreclr so allocateobject which fails due to a segsiv stop reason sigsegv fault symbol libcoreclr so allocateobject failure hash sigsegv libcoreclr so allocateobject fault stack libcoreclr so allocateobject methodtable libcoreclr so eeexception createthrowable libcoreclr so createcomplusexceptionobject thread exception record int libcoreclr so exceptiontracker getorcreatetracker unsigned long stackframe exception record context int bool exceptiontracker stacktracestate libcoreclr so processclrexception libcoreclr so pal sehexception context libcoreclr so dispatchmanagedexception pal sehexception libcoreclr so handlehardwareexception pal sehexception libcoreclr so sehprocessexception exception pointers libcoreclr so common signal handler int siginfo t void int libcoreclr so sigsegv handler int siginfo t void libclrjit so sigsegv handler int siginfo t void libpthread so lll unlock wake mscorlib ni dll system reflection emit dynamicmethod getmethoddescriptor mscorlib ni dll system reflection emit dynamicmethod createdelegate system type system object system linq expressions dll system linq expressions compiler lambdacompiler compile system linq expressions 
lambdaexpression system linq expressions dll system linq expressions expression compile boolean system linq queryable dll system linq enumerableexecutor execute system linq queryable dll domainboundilstubclass il stub instantiatingstub system linq expressions expression system linq queryable dll system linq queryable count system linq iqueryable system linq queryable tests dll system linq tests groupbytests exe stress generated unittests stress execution dll stress execution unittest execute stress execution dll stress execution dedicatedthreadworkerstrategy runworker stress execution itestpattern system threading cancellationtoken stress execution dll stress execution dedicatedthreadworkerstrategy c b mscorlib ni dll system threading tasks task execute mscorlib ni dll system threading executioncontext run system threading executioncontext system threading contextcallback system object mscorlib ni dll system threading tasks task executewiththreadlocal system threading tasks task byref mscorlib ni dll system threading tasks task executeentry boolean mscorlib ni dll system threading executioncontext run system threading executioncontext system threading contextcallback system object fault thread thread tid libcoreclr so allocateobject methodtable name corerun stop reason signal sigsegv last exception there is no current managed exception on this thread dissasembling libcoreclr so allocateobject shows that we are calling into libcoreclr so wks gcheap alloc which actually returns a non null address in stored in however the address doesn t appear to be in the address space of the process so when we derefernce it we segsiv lldb disassemble f intel libcoreclr so allocateobject push rbp mov rbp rsp push push push push push rbx sub rsp mov rdi mov rax qword ptr fs mov qword ptr rax lea rax g ibclogger cmp dword ptr je mov rdi call ibclogger logmethodtableaccessstatic void const lea rdi call symbol stub for tls get addr mov rax qword ptr mov qword ptr mov ebx dword ptr test 
ebx je xor esi esi mov rdi call preparecriticalfinalizerobject methodtable module mov ebx dword ptr mov dword ptr lea rax g pgcheap mov qword ptr mov eax ebx shr eax and eax shr ebx and ebx or ebx eax mov qword ptr lea rdi call symbol stub for tls get addr mov rsi qword ptr add rsi mov rdi mov rdx mov ecx ebx call qword ptr call to libcoreclr so wks gcheap alloc mov rax return from libcoreclr so wks gcheap alloc is stored in cmp mov qword ptr segsiv when dereferencing lldb register read lldb memory read f x s c error core file does not contain | 1 |
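The allocator contract this record's analysis hinges on (return usable memory, or signal failure explicitly; never hand back a bogus non-NULL address) can be sketched in a few lines. This is purely illustrative Python, not CoreCLR code, and every name in it is hypothetical:

```python
class OutOfMemory(Exception):
    """Raised when the allocator cannot satisfy a request."""

def alloc(size, heap_remaining):
    # A well-behaved allocator either returns usable storage or signals
    # failure explicitly. The bug described above is the moral equivalent
    # of returning a non-None value that is not real, committed storage.
    if size > heap_remaining:
        raise OutOfMemory(size)      # explicit failure path
    return bytearray(size)           # real, zero-initialized storage

def caller(size, heap_remaining):
    # Mirrors AllocateObject's position: it may only write through the
    # result if the allocator actually reported success.
    try:
        buf = alloc(size, heap_remaining)
    except OutOfMemory:
        return None                  # handled OOM: nothing is dereferenced
    buf[0] = 0xFF                    # safe: buf is genuine storage
    return buf
```

Under the reported conditions (overcommit disabled, OOM killer off), WKS::GCHeap::Alloc apparently skips the explicit-failure path, so the store at `mov qword ptr [r15], r14` faults instead of surfacing an OutOfMemoryException.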
154,358 | 12,201,440,931 | IssuesEvent | 2020-04-30 07:14:03 | plazi/BLR-website | https://api.github.com/repos/plazi/BLR-website | reopened | BLR website testing: access to data: API | testing BLR UI website | Late at night, what is obvious, and what we forgot, is the real strength: we have one, maybe two APIs, Zenodeo and Zenodo.
We really must provide a link for API users on the front page, or at least a level below, and we must make a strong statement that this website is just scratching the surface of what could be done by the ingenious users of the data.
The goal of BLR is inspiring users to make use of the data, and with that wanting to add more data to BLR to make it even more powerful.

| 1.0 | BLR website testing: access to data: API - Late at night, what is obvious, and what we forgot, is the real strength: we have one, maybe two APIs, Zenodeo and Zenodo.
We really must provide a link for API users on the front page, or at least a level below, and we must make a strong statement that this website is just scratching the surface of what could be done by the ingenious users of the data.
The goal of BLR is inspiring users to make use of the data, and with that wanting to add more data to BLR to make it even more powerful.

| non_reli | blr website testing access to data api late at night what is obvious what we forgot and is the real strenght is that we have one may be two apis zenodeo and and zenodo we really must provide a link for api users on the front page or at least a level below and we must make a strong statement that this website is just scratching the surface of what could be done by the ingenious users of the data the goal of blr is inspiring users to make use of the data and with that wanting to add more data to blr to make it even more powerful | 0 |
473,460 | 13,642,672,400 | IssuesEvent | 2020-09-25 15:52:34 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | opened | [studio] Update delete to become a flag on the record of a publishable operation | enhancement priority: high | Deleting an item or a package will become a flag or similar on the item record.
Publishing of delete is no longer immediate and can be treated like any create/update event.
The publisher will execute the delete as a command much like create/update by deleting the file in `published`.
Ping me to finalize the design. | 1.0 | [studio] Update delete to become a flag on the record of a publishable operation - Deleting an item or a package will become a flag or similar on the item record.
Publishing of delete is no longer immediate and can be treated like any create/update event.
The publisher will execute the delete as a command much like create/update by deleting the file in `published`.
Ping me to finalize the design. | non_reli | update delete to become a flag on the record of a publishable operation deleting an item or a package will become a flag or similar on the item record publishing of delete is no longer immediate and can be treated like any create update event the publisher will execute the delete as a command much like create update by deleting the file in published ping me to finalize the design | 0 |
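A rough sketch of the mechanism described in the record above: delete becomes a recorded operation on the item, and the publisher replays it against the `published` tree exactly like create/update. Illustrative Python only; `Op`, `PackageItem`, and `publish` are hypothetical names, not Crafter CMS APIs:

```python
import enum
import os
from dataclasses import dataclass

class Op(enum.Enum):
    CREATE = "create"
    UPDATE = "update"
    DELETE = "delete"   # no longer immediate: just another queued command

@dataclass
class PackageItem:
    path: str   # site-relative path, e.g. "/site/website/index.xml"
    op: Op

def publish(items, published_root, contents):
    # The publisher replays each recorded operation against the
    # 'published' tree; DELETE removes the file, CREATE/UPDATE write it.
    for item in items:
        target = os.path.join(published_root, item.path.lstrip("/"))
        if item.op is Op.DELETE:
            if os.path.exists(target):
                os.remove(target)
        else:
            os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
            with open(target, "w", encoding="utf-8") as f:
                f.write(contents[item.path])
```

The point of the design is that the publisher needs no special immediate path for deletes: they queue, order, and retry like any other publishable change.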
905 | 11,580,116,530 | IssuesEvent | 2020-02-21 19:26:10 | microsoft/azuredatastudio | https://api.github.com/repos/microsoft/azuredatastudio | closed | Potential listener leak at new View needs to be investigated | Area - Reliability Bug Impact: Stress Triage: Done | <!-- Please search existing issues to avoid creating duplicates. -->
- Azure Data Studio Version:
Master
Steps to Reproduce:
1. SQL Notebook test in stress mode: after around 32 iterations, the following potential leak is reported:
[8860:0531/092052.107:INFO:CONSOLE(388)] " at new View (file:///E:/mssqltoolsagentworkspace/79/s/out/vs/editor/browser/view/viewImpl.js:22:41)
at StandaloneCodeEditor._createView (file:///E:/mssqltoolsagentworkspace/79/s/out/vs/editor/browser/widget/codeEditorWidget.js:1066:26)
at StandaloneCodeEditor._attachModel (file:///E:/mssqltoolsagentworkspace/79/s/out/vs/editor/browser/widget/codeEditorWidget.js:969:46)
at StandaloneCodeEditor.setModel (file:///E:/mssqltoolsagentworkspace/79/s/out/vs/editor/browser/widget/codeEditorWidget.js:241:18)
at input.resolve.then.then.editorModel (file:///E:/mssqltoolsagentworkspace/79/s/out/sql/workbench/electron-browser/modelComponents/queryTextEditor.js:62:56)
at ZoneDelegate.invoke (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:388:26)
at Object.onInvoke (e:\mssqltoolsagentworkspace\79\s\node_modules\@angular\core\bundles\core.umd.js:4156:37)
at ZoneDelegate.invoke (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:387:32)
at Zone.run (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:138:43)
at E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:872:34
at ZoneDelegate.invokeTask (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:421:31)
at Object.onInvokeTask (e:\mssqltoolsagentworkspace\79\s\node_modules\@angular\core\bundles\core.umd.js:4147:37)
at ZoneDelegate.invokeTask (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:420:36)
at Zone.runTask (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:188:47)
at drainMicroTaskQueue (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:595:35)", source: E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js (388)
Stress run and logs:
https://mssqltools.visualstudio.com/CrossPlatBuildScripts/_build/results?buildId=32767
[StressRun_log_22_32767.zip](https://github.com/microsoft/azuredatastudio/files/3249055/StressRun_log_22_32767.zip)
| True | Potential listener leak at new View needs to be investigated - <!-- Please search existing issues to avoid creating duplicates. -->
- Azure Data Studio Version:
Master
Steps to Reproduce:
1. SQL Notebook test in stress mode: after around 32 iterations, the following potential leak is reported:
[8860:0531/092052.107:INFO:CONSOLE(388)] " at new View (file:///E:/mssqltoolsagentworkspace/79/s/out/vs/editor/browser/view/viewImpl.js:22:41)
at StandaloneCodeEditor._createView (file:///E:/mssqltoolsagentworkspace/79/s/out/vs/editor/browser/widget/codeEditorWidget.js:1066:26)
at StandaloneCodeEditor._attachModel (file:///E:/mssqltoolsagentworkspace/79/s/out/vs/editor/browser/widget/codeEditorWidget.js:969:46)
at StandaloneCodeEditor.setModel (file:///E:/mssqltoolsagentworkspace/79/s/out/vs/editor/browser/widget/codeEditorWidget.js:241:18)
at input.resolve.then.then.editorModel (file:///E:/mssqltoolsagentworkspace/79/s/out/sql/workbench/electron-browser/modelComponents/queryTextEditor.js:62:56)
at ZoneDelegate.invoke (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:388:26)
at Object.onInvoke (e:\mssqltoolsagentworkspace\79\s\node_modules\@angular\core\bundles\core.umd.js:4156:37)
at ZoneDelegate.invoke (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:387:32)
at Zone.run (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:138:43)
at E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:872:34
at ZoneDelegate.invokeTask (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:421:31)
at Object.onInvokeTask (e:\mssqltoolsagentworkspace\79\s\node_modules\@angular\core\bundles\core.umd.js:4147:37)
at ZoneDelegate.invokeTask (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:420:36)
at Zone.runTask (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:188:47)
at drainMicroTaskQueue (E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js:595:35)", source: E:\mssqltoolsagentworkspace\79\s\node_modules\zone.js\dist\zone-node.js (388)
Stress run and logs:
https://mssqltools.visualstudio.com/CrossPlatBuildScripts/_build/results?buildId=32767
[StressRun_log_22_32767.zip](https://github.com/microsoft/azuredatastudio/files/3249055/StressRun_log_22_32767.zip)
| reli | potential listener leak at new view needs to be investigated report issue to prefill these azure data studio version master steps to reproduce sql nb test stress mode after around iterations pops the following potential leak at new view file e mssqltoolsagentworkspace s out vs editor browser view viewimpl js at standalonecodeeditor createview file e mssqltoolsagentworkspace s out vs editor browser widget codeeditorwidget js at standalonecodeeditor attachmodel file e mssqltoolsagentworkspace s out vs editor browser widget codeeditorwidget js at standalonecodeeditor setmodel file e mssqltoolsagentworkspace s out vs editor browser widget codeeditorwidget js at input resolve then then editormodel file e mssqltoolsagentworkspace s out sql workbench electron browser modelcomponents querytexteditor js at zonedelegate invoke e mssqltoolsagentworkspace s node modules zone js dist zone node js at object oninvoke e mssqltoolsagentworkspace s node modules angular core bundles core umd js at zonedelegate invoke e mssqltoolsagentworkspace s node modules zone js dist zone node js at zone run e mssqltoolsagentworkspace s node modules zone js dist zone node js at e mssqltoolsagentworkspace s node modules zone js dist zone node js at zonedelegate invoketask e mssqltoolsagentworkspace s node modules zone js dist zone node js at object oninvoketask e mssqltoolsagentworkspace s node modules angular core bundles core umd js at zonedelegate invoketask e mssqltoolsagentworkspace s node modules zone js dist zone node js at zone runtask e mssqltoolsagentworkspace s node modules zone js dist zone node js at drainmicrotaskqueue e mssqltoolsagentworkspace s node modules zone js dist zone node js source e mssqltoolsagentworkspace s node modules zone js dist zone node js stress run and logs | 1 |
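The console spew in the record above comes from a listener-count leak heuristic: each `setModel` call builds a new `View` that subscribes listeners in its constructor, and if superseded views are never disposed the subscription count grows every iteration until a threshold trips. A minimal sketch of that detect-and-dispose pattern (illustrative Python; the threshold value and all names are assumptions, not the actual VS Code implementation):

```python
import warnings

LEAK_WARN_THRESHOLD = 30    # assumed value; the real detector's threshold differs

class Emitter:
    def __init__(self):
        self._listeners = []

    def subscribe(self, fn):
        self._listeners.append(fn)
        if len(self._listeners) > LEAK_WARN_THRESHOLD:
            # Equivalent of the "potential listener leak" console warning:
            # somebody keeps subscribing without ever disposing.
            warnings.warn(f"potential listener leak: {len(self._listeners)} listeners")
        return lambda: self._listeners.remove(fn)   # the disposable

class View:
    # Stand-in for the editor View: subscribing on construction means
    # every setModel-created View must be disposed when replaced.
    def __init__(self, emitter):
        self._unsubscribe = emitter.subscribe(lambda *args: None)

    def dispose(self):
        self._unsubscribe()
```

The fix direction such a warning suggests is to ensure the editor disposes the previous view (and its subscriptions) whenever a model swap replaces it.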
84,726 | 15,728,258,573 | IssuesEvent | 2021-03-29 13:37:37 | ssobue/spring-preauth-session | https://api.github.com/repos/ssobue/spring-preauth-session | closed | CVE-2020-5421 (Medium) detected in spring-web-5.1.10.RELEASE.jar | security vulnerability | ## CVE-2020-5421 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-web-5.1.10.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: spring-preauth-session/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/springframework/spring-web/5.1.10.RELEASE/spring-web-5.1.10.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.9.RELEASE.jar (Root Library)
- :x: **spring-web-5.1.10.RELEASE.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.2.0 - 5.2.8, 5.1.0 - 5.1.17, 5.0.0 - 5.0.18, 4.3.0 - 4.3.28, and older unsupported versions, the protections against RFD attacks from CVE-2015-5211 may be bypassed depending on the browser used through the use of a jsessionid path parameter.
<p>Publish Date: 2020-09-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421>CVE-2020-5421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2020-5421">https://tanzu.vmware.com/security/cve-2020-5421</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: org.springframework:spring-web:5.2.9,org.springframework:spring-web:5.1.18,org.springframework:spring-web:5.0.19,org.springframework:spring-web:4.3.29</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-5421 (Medium) detected in spring-web-5.1.10.RELEASE.jar - ## CVE-2020-5421 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-web-5.1.10.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: spring-preauth-session/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/springframework/spring-web/5.1.10.RELEASE/spring-web-5.1.10.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.9.RELEASE.jar (Root Library)
- :x: **spring-web-5.1.10.RELEASE.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.2.0 - 5.2.8, 5.1.0 - 5.1.17, 5.0.0 - 5.0.18, 4.3.0 - 4.3.28, and older unsupported versions, the protections against RFD attacks from CVE-2015-5211 may be bypassed depending on the browser used through the use of a jsessionid path parameter.
<p>Publish Date: 2020-09-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421>CVE-2020-5421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2020-5421">https://tanzu.vmware.com/security/cve-2020-5421</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: org.springframework:spring-web:5.2.9,org.springframework:spring-web:5.1.18,org.springframework:spring-web:5.0.19,org.springframework:spring-web:4.3.29</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_reli | cve medium detected in spring web release jar cve medium severity vulnerability vulnerable library spring web release jar spring web library home page a href path to dependency file spring preauth session pom xml path to vulnerable library root repository org springframework spring web release spring web release jar dependency hierarchy spring boot starter web release jar root library x spring web release jar vulnerable library vulnerability details in spring framework versions and older unsupported versions the protections against rfd attacks from cve may be bypassed depending on the browser used through the use of a jsessionid path parameter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction required scope changed impact metrics confidentiality impact low integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring web org springframework spring web org springframework spring web org springframework spring web step up your open source security game with whitesource | 0 |
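The bypass in the record above is a Reflected File Download vector: a `;jsessionid=...` path parameter lets an attacker append what some browsers treat as the response's filename, sidestepping the CVE-2015-5211 checks that only inspect the normal path. The patched Spring versions normalize such path parameters away before those checks run; a minimal sketch of that normalization step (illustrative Python, not Spring's actual code):

```python
def strip_path_parameters(path):
    # Drop ';name=value' path parameters (e.g. ';jsessionid=...') from every
    # segment, so later filename/extension checks see the real last segment
    # rather than attacker-chosen text that a browser may use as a download name.
    return "/".join(segment.split(";", 1)[0] for segment in path.split("/"))
```

Upgrading to the fixed spring-web versions listed in the record remains the actual remediation; the sketch only shows the idea behind the fix.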
793 | 10,545,589,528 | IssuesEvent | 2019-10-02 19:29:32 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | System.Net.HttpListenerException : An operation was attempted on a nonexistent network connection with GCStress | area-System.Net bug os-windows tenet-reliability up-for-grabs | Test failed in System.Net.HttpListener.Tests.dll, the test output log as following:
set COMPlus_GCStress=3
set XUNIT_PERFORMANCE_MIN_ITERATION=1
set XUNIT_PERFORMANCE_MAX_ITERATION=1
call F:\repos\corefx\bin\testhost\netcoreapp-Windows_NT-Debug-x64\\dotnet.exe xunit.console.netcore.exe System.Net.HttpListener.Tests.dll -xml testResults.xml -notrait category=nonnetcoreapptests -notrait category=failing -notrait category=nonwindowstests
xUnit.net console test runner (64-bit .NET Core)
Copyright (C) 2014 Outercurve Foundation.
Discovering: System.Net.HttpListener.Tests
Discovered: System.Net.HttpListener.Tests
Starting: System.Net.HttpListener.Tests
**System.Net.Tests.HttpRequestStreamTests.Read_LargeLengthAsynchronous_Success(transferEncodingChunked: True) [FAIL]**
System.Net.HttpListenerException : An operation was attempted on a nonexistent network connection
Stack Trace:
F:\repos\corefx\src\System.Net.HttpListener\src\System\Net\Windows\HttpRequestStream.Windows.cs(318,0): at System.Net.HttpRequestStream.BeginRead(Byte[] buffer, Int32 offset, Int32 size, AsyncCallback callback, Object state)
at System.IO.Stream.<>c.<BeginEndReadAsync>b__43_0(Stream stream, ReadWriteParameters args, AsyncCallback callback, Object state)
at System.Threading.Tasks.TaskFactory`1.FromAsyncTrim[TInstance,TArgs](TInstance thisRef, TArgs args, Func`5 beginMethod, Func`3 endMethod)
at System.IO.Stream.BeginEndReadAsync(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.Stream.ReadAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)
at System.IO.Stream.ReadAsync(Byte[] buffer, Int32 offset, Int32 count)
F:\repos\corefx\src\System.Net.HttpListener\tests\HttpRequestStreamTests.cs(149,0): at System.Net.Tests.HttpRequestStreamTests.<Read_LargeLengthAsynchronous_Success>d__7.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
**System.Net.Tests.HttpRequestStreamTests.Read_LargeLengthAsynchronous_Success(transferEncodingChunked: False) [FAIL]**
System.Net.HttpListenerException : An operation was attempted on a nonexistent network connection
Stack Trace:
F:\repos\corefx\src\System.Net.HttpListener\src\System\Net\Windows\HttpRequestStream.Windows.cs(318,0): at System.Net.HttpRequestStream.BeginRead(Byte[] buffer, Int32 offset, Int32 size, AsyncCallback callback, Object state)
at System.IO.Stream.<>c.<BeginEndReadAsync>b__43_0(Stream stream, ReadWriteParameters args, AsyncCallback callback, Object state)
at System.Threading.Tasks.TaskFactory`1.FromAsyncTrim[TInstance,TArgs](TInstance thisRef, TArgs args, Func`5 beginMethod, Func`3 endMethod)
at System.IO.Stream.BeginEndReadAsync(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.Stream.ReadAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)
at System.IO.Stream.ReadAsync(Byte[] buffer, Int32 offset, Int32 count)
F:\repos\corefx\src\System.Net.HttpListener\tests\HttpRequestStreamTests.cs(149,0): at System.Net.Tests.HttpRequestStreamTests.<Read_LargeLengthAsynchronous_Success>d__7.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
**System.Net.Tests.HttpRequestStreamTests.Read_LargeLengthSynchronous_Success(transferEncodingChunked: True) [FAIL]**
System.Net.HttpListenerException : An operation was attempted on a nonexistent network connection
Stack Trace:
F:\repos\corefx\src\System.Net.HttpListener\src\System\Net\Windows\HttpRequestStream.Windows.cs(195,0): at System.Net.HttpRequestStream.Read(Byte[] buffer, Int32 offset, Int32 size)
F:\repos\corefx\src\System.Net.HttpListener\tests\HttpRequestStreamTests.cs(193,0): at System.Net.Tests.HttpRequestStreamTests.<Read_LargeLengthSynchronous_Success>d__8.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
**System.Net.Tests.HttpRequestStreamTests.Read_LargeLengthSynchronous_Success(transferEncodingChunked: False) [FAIL]**
System.Net.HttpListenerException : An operation was attempted on a nonexistent network connection
Stack Trace:
F:\repos\corefx\src\System.Net.HttpListener\src\System\Net\Windows\HttpRequestStream.Windows.cs(195,0): at System.Net.HttpRequestStream.Read(Byte[] buffer, Int32 offset, Int32 size)
F:\repos\corefx\src\System.Net.HttpListener\tests\HttpRequestStreamTests.cs(193,0): at System.Net.Tests.HttpRequestStreamTests.<Read_LargeLengthSynchronous_Success>d__8.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
**System.Net.Tests.HttpRequestStreamTests.Read_FullLengthSynchronous_Success(transferEncodingChunked: True, text: \"\") [FAIL]**
System.Net.Http.HttpRequestException : An error occurred while sending the request.
---- System.Net.Http.WinHttpException : The server returned an invalid or unrecognized response
Stack Trace:
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
F:\repos\corefx\src\System.Net.Http\src\System\Net\Http\HttpClient.cs(462,0): at System.Net.Http.HttpClient.<FinishSendAsyncBuffered>d__58.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
F:\repos\corefx\src\System.Net.HttpListener\tests\HttpRequestStreamTests.cs(116,0): at System.Net.Tests.HttpRequestStreamTests.<Read_FullLengthSynchronous_Success>d__6.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
----- Inner Stack Trace -----
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
F:\repos\corefx\src\Common\src\System\Threading\Tasks\RendezvousAwaitable.cs(62,0): at System.Threading.Tasks.RendezvousAwaitable`1.GetResult()
F:\repos\corefx\src\System.Net.Http.WinHttpHandler\src\System\Net\Http\WinHttpHandler.cs(863,0): at System.Net.Http.WinHttpHandler.<StartRequest>d__105.MoveNext()
Finished: System.Net.HttpListener.Tests
=== TEST EXECUTION SUMMARY ===
System.Net.HttpListener.Tests Total: 104, Errors: 0, Failed: 5, Skipped: 0, Time: 1055.777s
Finished running tests. End time= 0:33:32.67, Exit code = 1 | True | System.Net.HttpListenerException : An operation was attempted on a nonexistent network connection with GCStress - Test failed in System.Net.HttpListener.Tests.dll, the test output log as following:
set COMPlus_GCStress=3
set XUNIT_PERFORMANCE_MIN_ITERATION=1
set XUNIT_PERFORMANCE_MAX_ITERATION=1
1,382 | 15,704,914,049 | IssuesEvent | 2021-03-26 15:32:14 | FoundationDB/fdb-kubernetes-operator | https://api.github.com/repos/FoundationDB/fdb-kubernetes-operator | closed | Self conflict | question reliability | I'm not entirely sure how this comes about, but the operator manages to self-conflict fairly regularly. <details>
<summary>Here's a log snippet</summary>
```
2020-06-19T03:23:29.742Z INFO controller Retrying reconcilation {"reason": "Waiting for pod fdb-test/fdb-test-cluster/fdb-test-cluster-stateless-9 to be ready"}
2020-06-19T03:24:08.122Z INFO controller Running command {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "path": "/usr/bin/fdb/6.2/fdbcli", "args": ["/usr/bin/fdb/6.2/fdbcli", "--exec", "status json", "-C", "/tmp/286657896", "--log", "--timeout", "30", "--log-dir", "/var/log/fdb"]}
2020-06-19T03:24:14.732Z INFO controller Command completed {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "output": "{\n \"client\" : {\n ..."}
2020-06-19T03:24:17.904Z INFO controller Running command {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "path": "/usr/bin/fdb/6.2/fdbcli", "args": ["/usr/bin/fdb/6.2/fdbcli", "--exec", "status json", "-C", "/tmp/262145242", "--log", "--timeout", "30", "--log-dir", "/var/log/fdb"]}
2020-06-19T03:24:22.624Z INFO controller Command completed {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "output": "{\n \"client\" : {\n ..."}
2020-06-19T03:24:22.639Z INFO controller Running command {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "path": "/usr/bin/fdb/6.2/fdbcli", "args": ["/usr/bin/fdb/6.2/fdbcli", "--exec", "status minimal", "-C", "/tmp/018675996", "--log", "--timeout", "30", "--log-dir", "/var/log/fdb"]}
2020-06-19T03:24:22.931Z INFO controller Command completed {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "output": "The database is avai..."}
2020-06-19T03:24:22.931Z INFO controller Running command {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "path": "/usr/bin/fdb/6.2/fdbcli", "args": ["/usr/bin/fdb/6.2/fdbcli", "--exec", "status json", "-C", "/tmp/018675996", "--log", "--timeout", "30", "--log-dir", "/var/log/fdb"]}
2020-06-19T03:24:23.251Z INFO controller Command completed {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "output": "{\n \"client\" : {\n ..."}
2020-06-19T03:24:23.264Z INFO controller Running command {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "path": "/usr/bin/fdb/6.2/fdbcli", "args": ["/usr/bin/fdb/6.2/fdbcli", "--exec", "status json", "-C", "/tmp/537358283", "--log", "--timeout", "30", "--log-dir", "/var/log/fdb"]}
2020-06-19T03:24:23.550Z INFO controller Command completed {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "output": "{\n \"client\" : {\n ..."}
2020-06-19T03:24:23.602Z INFO controller Reconciliation terminated early {"namespace": "fdb-test", "name": "fdb-test-cluster", "lastAction": "controllers.IncludeInstances"}
2020-06-19T03:24:23.602Z INFO controller Ending reconciliation early because cluster has been updated
2020-06-19T03:24:23.602Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "foundationdbcluster", "request": "fdb-test/fdb-test-cluster"}
2020-06-19T03:24:23.603Z INFO controller Running command {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "path": "/usr/bin/fdb/6.2/fdbcli", "args": ["/usr/bin/fdb/6.2/fdbcli", "--exec", "status json", "-C", "/tmp/341383696", "--log", "--timeout", "30", "--log-dir", "/var/log/fdb"]}
2020-06-19T03:24:23.908Z INFO controller Command completed {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "output": "{\n \"client\" : {\n ..."}
2020-06-19T03:24:24.444Z ERROR controller Error updating cluster status {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "error": "Operation cannot be fulfilled on foundationdbclusters.apps.foundationdb.org \"fdb-test-cluster\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128
github.com/FoundationDB/fdb-kubernetes-operator/controllers.UpdateStatus.Reconcile
/workspace/controllers/update_status.go:256
github.com/FoundationDB/fdb-kubernetes-operator/controllers.(*FoundationDBClusterReconciler).Reconcile
/workspace/controllers/cluster_controller.go:123
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:256
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/go/pkg/mod/k8s.io/apimachinery@v0.17.0/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/go/pkg/mod/k8s.io/apimachinery@v0.17.0/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/go/pkg/mod/k8s.io/apimachinery@v0.17.0/pkg/util/wait/wait.go:88
2020-06-19T03:24:24.444Z INFO controller Reconciliation terminated early {"namespace": "fdb-test", "name": "fdb-test-cluster", "lastAction": "controllers.UpdateStatus"}
2020-06-19T03:24:24.444Z INFO controller Retrying reconcilation {"reason": "Conflict"}
```
</details>
Only one instance of the operator was running at this time :). This is perhaps harmless, but at a minimum it suggests a lack of serialisation somewhere within the operator - e.g. we need to be feeding work into a single goroutine somewhere per cluster rather than updating directly. Or perhaps the basis object used was kept alive too long and separate calls were made to update the cluster object by the operator - I haven't dug through the interacting set of flows yet to make a strong case: but note these two lines that suggest the same object had two reconciliations active at once; something that at a minimum could lead to rather savage bugs.
```
2020-06-19T03:24:23.602Z INFO controller Ending reconciliation early because cluster has been updated
2020-06-19T03:24:23.602Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "foundationdbcluster", "request": "fdb-test/fdb-test-cluster"}
``` | True | Self conflict - I'm not entirely sure how this comes about, but the operator manages to self-conflict fairly regularly. <details>
<summary>Here's a log snippet</summary>
```
2020-06-19T03:23:29.742Z INFO controller Retrying reconcilation {"reason": "Waiting for pod fdb-test/fdb-test-cluster/fdb-test-cluster-stateless-9 to be ready"}
2020-06-19T03:24:08.122Z INFO controller Running command {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "path": "/usr/bin/fdb/6.2/fdbcli", "args": ["/usr/bin/fdb/6.2/fdbcli", "--exec", "status json", "-C", "/tmp/286657896", "--log", "--timeout", "30", "--log-dir", "/var/log/fdb"]}
2020-06-19T03:24:14.732Z INFO controller Command completed {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "output": "{\n \"client\" : {\n ..."}
2020-06-19T03:24:17.904Z INFO controller Running command {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "path": "/usr/bin/fdb/6.2/fdbcli", "args": ["/usr/bin/fdb/6.2/fdbcli", "--exec", "status json", "-C", "/tmp/262145242", "--log", "--timeout", "30", "--log-dir", "/var/log/fdb"]}
2020-06-19T03:24:22.624Z INFO controller Command completed {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "output": "{\n \"client\" : {\n ..."}
2020-06-19T03:24:22.639Z INFO controller Running command {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "path": "/usr/bin/fdb/6.2/fdbcli", "args": ["/usr/bin/fdb/6.2/fdbcli", "--exec", "status minimal", "-C", "/tmp/018675996", "--log", "--timeout", "30", "--log-dir", "/var/log/fdb"]}
2020-06-19T03:24:22.931Z INFO controller Command completed {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "output": "The database is avai..."}
2020-06-19T03:24:22.931Z INFO controller Running command {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "path": "/usr/bin/fdb/6.2/fdbcli", "args": ["/usr/bin/fdb/6.2/fdbcli", "--exec", "status json", "-C", "/tmp/018675996", "--log", "--timeout", "30", "--log-dir", "/var/log/fdb"]}
2020-06-19T03:24:23.251Z INFO controller Command completed {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "output": "{\n \"client\" : {\n ..."}
2020-06-19T03:24:23.264Z INFO controller Running command {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "path": "/usr/bin/fdb/6.2/fdbcli", "args": ["/usr/bin/fdb/6.2/fdbcli", "--exec", "status json", "-C", "/tmp/537358283", "--log", "--timeout", "30", "--log-dir", "/var/log/fdb"]}
2020-06-19T03:24:23.550Z INFO controller Command completed {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "output": "{\n \"client\" : {\n ..."}
2020-06-19T03:24:23.602Z INFO controller Reconciliation terminated early {"namespace": "fdb-test", "name": "fdb-test-cluster", "lastAction": "controllers.IncludeInstances"}
2020-06-19T03:24:23.602Z INFO controller Ending reconciliation early because cluster has been updated
2020-06-19T03:24:23.602Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "foundationdbcluster", "request": "fdb-test/fdb-test-cluster"}
2020-06-19T03:24:23.603Z INFO controller Running command {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "path": "/usr/bin/fdb/6.2/fdbcli", "args": ["/usr/bin/fdb/6.2/fdbcli", "--exec", "status json", "-C", "/tmp/341383696", "--log", "--timeout", "30", "--log-dir", "/var/log/fdb"]}
2020-06-19T03:24:23.908Z INFO controller Command completed {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "output": "{\n \"client\" : {\n ..."}
2020-06-19T03:24:24.444Z ERROR controller Error updating cluster status {"namespace": "fdb-test", "cluster": "fdb-test-cluster", "error": "Operation cannot be fulfilled on foundationdbclusters.apps.foundationdb.org \"fdb-test-cluster\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128
github.com/FoundationDB/fdb-kubernetes-operator/controllers.UpdateStatus.Reconcile
/workspace/controllers/update_status.go:256
github.com/FoundationDB/fdb-kubernetes-operator/controllers.(*FoundationDBClusterReconciler).Reconcile
/workspace/controllers/cluster_controller.go:123
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:256
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/go/pkg/mod/k8s.io/apimachinery@v0.17.0/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/go/pkg/mod/k8s.io/apimachinery@v0.17.0/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/go/pkg/mod/k8s.io/apimachinery@v0.17.0/pkg/util/wait/wait.go:88
2020-06-19T03:24:24.444Z INFO controller Reconciliation terminated early {"namespace": "fdb-test", "name": "fdb-test-cluster", "lastAction": "controllers.UpdateStatus"}
2020-06-19T03:24:24.444Z INFO controller Retrying reconcilation {"reason": "Conflict"}
```
</details>
Only one instance of the operator was running at this time :). This is perhaps harmless, but at a minimum it suggests a lack of serialisation somewhere within the operator - e.g. we need to be feeding work into a single goroutine somewhere per cluster rather than updating directly. Or perhaps the basis object used was kept live too long and separate calls were made to update the cluster object by the operator - I haven't dug through the interacting set of flows yet to make a strong case: but note these two lines that suggest the same object had two reconciliations active at once; something that at a minimum could lead to rather savage bugs.
```
2020-06-19T03:24:23.602Z INFO controller Ending reconciliation early because cluster has been updated
2020-06-19T03:24:23.602Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "foundationdbcluster", "request": "fdb-test/fdb-test-cluster"}
``` | reli | self conflict i m not entirely sure how this comes about but the operator manages to self conflict fairly regularly here s a log snippet info controller retrying reconcilation reason waiting for pod fdb test fdb test cluster fdb test cluster stateless to be ready info controller running command namespace fdb test cluster fdb test cluster path usr bin fdb fdbcli args info controller command completed namespace fdb test cluster fdb test cluster output n client n info controller running command namespace fdb test cluster fdb test cluster path usr bin fdb fdbcli args info controller command completed namespace fdb test cluster fdb test cluster output n client n info controller running command namespace fdb test cluster fdb test cluster path usr bin fdb fdbcli args info controller command completed namespace fdb test cluster fdb test cluster output the database is avai info controller running command namespace fdb test cluster fdb test cluster path usr bin fdb fdbcli args info controller command completed namespace fdb test cluster fdb test cluster output n client n info controller running command namespace fdb test cluster fdb test cluster path usr bin fdb fdbcli args info controller command completed namespace fdb test cluster fdb test cluster output n client n info controller reconciliation terminated early namespace fdb test name fdb test cluster lastaction controllers includeinstances info controller ending reconciliation early because cluster has been updated debug controller runtime controller successfully reconciled controller foundationdbcluster request fdb test fdb test cluster info controller running command namespace fdb test cluster fdb test cluster path usr bin fdb fdbcli args info controller command completed namespace fdb test cluster fdb test cluster output n client n error controller error updating cluster status namespace fdb test cluster fdb test cluster error operation cannot be fulfilled on foundationdbclusters apps foundationdb org 
fdb test cluster the object has been modified please apply your changes to the latest version and try again github com go logr zapr zaplogger error go pkg mod github com go logr zapr zapr go github com foundationdb fdb kubernetes operator controllers updatestatus reconcile workspace controllers update status go github com foundationdb fdb kubernetes operator controllers foundationdbclusterreconciler reconcile workspace controllers cluster controller go sigs io controller runtime pkg internal controller controller reconcilehandler go pkg mod sigs io controller runtime pkg internal controller controller go sigs io controller runtime pkg internal controller controller processnextworkitem go pkg mod sigs io controller runtime pkg internal controller controller go sigs io controller runtime pkg internal controller controller worker go pkg mod sigs io controller runtime pkg internal controller controller go io apimachinery pkg util wait jitteruntil go pkg mod io apimachinery pkg util wait wait go io apimachinery pkg util wait jitteruntil go pkg mod io apimachinery pkg util wait wait go io apimachinery pkg util wait until go pkg mod io apimachinery pkg util wait wait go info controller reconciliation terminated early namespace fdb test name fdb test cluster lastaction controllers updatestatus info controller retrying reconcilation reason conflict only one instance of the operator was running at this time this is perhaps harmless but at a minimum it suggests a lack of serialisation somewhere within the operator e g we need to be feeding work into a single goroutine somewhere per cluster rather than updating directly or perhaps the basis object used was kept live too long and separate calls were made to update the cluster object by the operator i haven t dig through the interacting set of flows yet to make an strong case but note these two lines that suggest the same object had two reconciliations active at once something that at a minimum could lead to rather savage bugs 
info controller ending reconciliation early because cluster has been updated debug controller runtime controller successfully reconciled controller foundationdbcluster request fdb test fdb test cluster | 1 |
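The "Operation cannot be fulfilled ... the object has been modified" error in the record above is Kubernetes' optimistic-concurrency conflict: the update carried a stale resourceVersion. The usual mitigation (what client-go's `retry.RetryOnConflict` does) is to re-read the latest copy and re-apply the mutation. A minimal Python sketch of that pattern against a toy versioned store - not the operator's actual code, just the shape of the fix:

```python
class Conflict(Exception):
    """Stand-in for the API server's 409 'object has been modified' error."""

class VersionedStore:
    # Toy stand-in for the Kubernetes API server: every successful update
    # bumps resourceVersion, and an update carrying a stale version is
    # rejected, just like the Conflict in the log above.
    def __init__(self, obj):
        self.obj = dict(obj, resourceVersion=1)

    def get(self):
        return dict(self.obj)

    def update(self, obj):
        if obj["resourceVersion"] != self.obj["resourceVersion"]:
            raise Conflict("the object has been modified; please apply "
                           "your changes to the latest version and try again")
        self.obj = dict(obj, resourceVersion=self.obj["resourceVersion"] + 1)

def retry_on_conflict(store, mutate, attempts=5):
    # The RetryOnConflict pattern: re-read the latest copy and re-apply
    # the mutation, instead of resubmitting a stale object.
    for _ in range(attempts):
        latest = store.get()
        mutate(latest)
        try:
            store.update(latest)
            return store.get()
        except Conflict:
            continue
    raise Conflict("retries exhausted")

store = VersionedStore({"status": "Pending"})
stale = store.get()                                    # reconciliation A reads...
store.update(dict(store.get(), status="Reconciling"))  # ...and B writes first
stale["status"] = "Healthy"
try:
    store.update(stale)                                # A's stale write: Conflict
except Conflict:
    pass

def set_healthy(obj):
    obj["status"] = "Healthy"

result = retry_on_conflict(store, set_healthy)         # re-read + re-apply wins
print(result["status"], result["resourceVersion"])     # -> Healthy 3
```

Retrying this way only helps when the mutation is recomputed from the fresh object; feeding all writes for one cluster through a single goroutine, as the report suggests, removes the race entirely rather than retrying around it.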
191,303 | 22,215,739,941 | IssuesEvent | 2022-06-08 01:18:45 | Nivaskumark/kernel_v4.1.15 | https://api.github.com/repos/Nivaskumark/kernel_v4.1.15 | reopened | CVE-2019-5489 (Medium) detected in linuxlinux-4.6 | security vulnerability | ## CVE-2019-5489 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.1.15/commit/00db4e8795bcbec692fb60b19160bdd763ad42e3">00db4e8795bcbec692fb60b19160bdd763ad42e3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/mm/mincore.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/mm/mincore.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The mincore() implementation in mm/mincore.c in the Linux kernel through 4.19.13 allowed local attackers to observe page cache access patterns of other processes on the same system, potentially allowing sniffing of secret information. (Fixing this affects the output of the fincore program.) Limited remote exploitation may be possible, as demonstrated by latency differences in accessing public files from an Apache HTTP Server.
<p>Publish Date: 2019-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-5489>CVE-2019-5489</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5489">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5489</a></p>
<p>Release Date: 2020-08-24</p>
<p>Fix Resolution: v5.0-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-5489 (Medium) detected in linuxlinux-4.6 - ## CVE-2019-5489 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.1.15/commit/00db4e8795bcbec692fb60b19160bdd763ad42e3">00db4e8795bcbec692fb60b19160bdd763ad42e3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/mm/mincore.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/mm/mincore.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The mincore() implementation in mm/mincore.c in the Linux kernel through 4.19.13 allowed local attackers to observe page cache access patterns of other processes on the same system, potentially allowing sniffing of secret information. (Fixing this affects the output of the fincore program.) Limited remote exploitation may be possible, as demonstrated by latency differences in accessing public files from an Apache HTTP Server.
<p>Publish Date: 2019-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-5489>CVE-2019-5489</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5489">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5489</a></p>
<p>Release Date: 2020-08-24</p>
<p>Fix Resolution: v5.0-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_reli | cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files mm mincore c mm mincore c vulnerability details the mincore implementation in mm mincore c in the linux kernel through allowed local attackers to observe page cache access patterns of other processes on the same system potentially allowing sniffing of secret information fixing this affects the output of the fincore program limited remote exploitation may be possible as demonstrated by latency differences in accessing public files from an apache http server publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
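The 5.5 base score in the record above follows mechanically from the listed metrics. As a check, here is a small sketch recomputing it with the CVSS 3.0 base-score equations; the metric weights are the published constants for vector AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N, and since Scope is Unchanged the simpler impact branch applies:

```python
import math

def roundup(x):
    # CVSS "round up to one decimal place"
    return math.ceil(x * 10) / 10

# Weights from the CVSS 3.0 specification for this vector:
av, ac, pr, ui = 0.55, 0.77, 0.62, 0.85  # Local / Low / Low (scope unchanged) / None
c, i, a = 0.56, 0.0, 0.0                 # Confidentiality High / Integrity None / Availability None

iss = 1 - (1 - c) * (1 - i) * (1 - a)    # impact sub-score
impact = 6.42 * iss                      # scope-unchanged branch
exploitability = 8.22 * av * ac * pr * ui
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # -> 5.5, matching the report
```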
3,037 | 31,791,106,037 | IssuesEvent | 2023-09-13 03:26:54 | hackforla/ops | https://api.github.com/repos/hackforla/ops | closed | [SPIKE] Set up AWS billing notifications alarms | size: 2pt role: Site Reliability Engineer feature: monitoring | ### Overview
Currently, we have no safeguards in place if an AWS service costs more than anticipated. AWS has measures in place with tools like CloudWatch. Let's investigate how to set up a CloudWatch alarm and see if we can set up notifications.
### Action Items
- [x] Investigate HFLA's incubator AWS setup
- [x] Determine a general implementation and potential costs
- [x] Determine how and who gets notifications
- [ ] Create implementation issue
- [ ] Create CloudWatch documentation issue
### Resources/Instructions
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html
| True | [SPIKE] Set up AWS billing notifications alarms - ### Overview
Currently, we have no safeguards in place if an AWS service costs more than anticipated. AWS has measures in place with tools like CloudWatch. Let's investigate how to set up a CloudWatch alarm and see if we can set up notifications.
### Action Items
- [x] Investigate HFLA's incubator AWS setup
- [x] Determine a general implementation and potential costs
- [x] Determine how and who gets notifications
- [ ] Create implementation issue
- [ ] Create CloudWatch documentation issue
### Resources/Instructions
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html
| reli | set up aws billing notifications alarms overview currently we have no safe guards in place if an aws service costs more than anticipated aws has measures in place with tools like cloudwatch let s investigate how to set up a cloudwatch alarm and see if we can set up notifications action items investigate hfla s incubator aws setup determine a general implementation and potential costs determine how and who gets notifications create implementation issue create cloudwatch documentation issue resources instructions | 1 |
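For the implementation issue proposed above, the standard approach from the linked AWS docs is a CloudWatch alarm on the `AWS/Billing` `EstimatedCharges` metric (published only in us-east-1, and only after billing alerts are enabled), with an SNS topic as the notification target so subscribers get emailed. A hedged sketch that builds the `put_metric_alarm` parameters as plain data - the topic ARN and threshold are placeholders, and the boto3 call itself is left commented out:

```python
def billing_alarm_params(threshold_usd, sns_topic_arn):
    # Parameters for CloudWatch put_metric_alarm. EstimatedCharges is
    # published roughly every 6 hours, hence the 21600-second period.
    return {
        "AlarmName": f"billing-over-{threshold_usd}-usd",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],  # SNS topic decides who gets notified
    }

# Placeholder account id and topic name - substitute the real ones.
params = billing_alarm_params(50, "arn:aws:sns:us-east-1:123456789012:billing-alerts")
# import boto3
# boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(**params)
print(params["AlarmName"])  # -> billing-over-50-usd
```

Who gets notified then reduces to who is subscribed to the SNS topic, which keeps the alarm definition stable while the recipient list changes.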
149 | 4,343,371,636 | IssuesEvent | 2016-07-29 01:21:14 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | opened | Gargantuan source files can be silently treated as being much smaller | Area-Compilers Bug Tenet-Reliability | **Version Used**: 1.3.1.60616 and recent sync to master
**Steps to Reproduce**:
* Run the following script
``` C#
string program = "class P { static void Main() { System.Console.WriteLine(\"hello\"); } }";
string garbage = "@#%@#^@#^!#%#@$%@^";
File.WriteAllText("big.cs", program + garbage, Encoding.ASCII);
using (var s = File.OpenWrite("big.cs"))
{
s.SetLength((long)uint.MaxValue + 1 + program.Length);
}
```
* `csc big.cs`
**Expected Behavior**:
Compilation fails (either with a deliberate diagnostic that the stream is too long or with the correct errors that match the full text).
**Actual Behavior**:
Compilation succeeds as it only reads up to `program.Length` due to unchecked cast of stream length to `int`. | True | Gargantuan source files can be silently treated as being much smaller - **Version Used**: 1.3.1.60616 and recent sync to master
**Steps to Reproduce**:
* Run the following script
``` C#
string program = "class P { static void Main() { System.Console.WriteLine(\"hello\"); } }";
string garbage = "@#%@#^@#^!#%#@$%@^";
File.WriteAllText("big.cs", program + garbage, Encoding.ASCII);
using (var s = File.OpenWrite("big.cs"))
{
s.SetLength((long)uint.MaxValue + 1 + program.Length);
}
```
* `csc big.cs`
**Expected Behavior**:
Compilation fails (either with a deliberate diagnostic that the stream is too long or with the correct errors that match the full text).
**Actual Behavior**:
Compilation succeeds as it only reads up to `program.Length` due to unchecked cast of stream length to `int`. | reli | gargantuan source files can be silently treated as being much smaller version used and recent sync to master steps to reproduce run the following script c string program class p static void main system console writeline hello string garbage file writealltext big cs program garbage encoding ascii using var s file openwrite big cs s setlength long uint maxvalue program length csc big cs expected behavior compilation fails either with a deliberate diagnostic that the stream is too long or with the correct errors that match the full text actual behavior compilation succeeds as it only reads up to program length due to unchecked cast of stream length to int | 1 |
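The root cause described in the record above is an unchecked narrowing of the stream length from `long` to `int`: a length of 2^32 + N wraps around to N, so the compiler silently reads only the leading valid program. A small Python sketch of the same 32-bit wraparound - the 71 here is just an illustrative program length, not a value taken from the repro:

```python
def to_int32(n):
    # Reproduce C#'s unchecked (int) cast: keep the low 32 bits and
    # interpret them as a signed two's-complement value.
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

program_length = 71                        # length of the valid C# prefix (illustrative)
stream_length = 2**32 + program_length     # the gargantuan file from the repro

truncated = to_int32(stream_length)
print(truncated)  # -> 71: the reader stops after the valid program, garbage unseen
```

This is why the expected fix is either a deliberate "file too long" diagnostic or a checked conversion that faults instead of wrapping.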
240,346 | 7,801,122,059 | IssuesEvent | 2018-06-09 17:16:57 | tine20/Tine-2.0-Open-Source-Groupware-and-CRM | https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM | closed | 0013106:
Update Tinebase from 10.16 to 10.24 fails | Bug Mantis Setup high priority | **Reported by ingoratsdorf on 20 May 2017 00:03**
Tinebase introduced a filesystem preview in 10.23; however, during the upgrade from 10.16 to 10.17 there is a call to filesystem stat() that fails, because Tinebase attempts a DB LEFT JOIN on the preview field, which is only added in 10.23.
The stat() call is enclosed in try..catch, but it only catches Exception_NotFound, not DB exceptions, hence the upgrade fails with an uncaught exception.
**Steps to reproduce:** Tinebase/Setup/Update/Release10.php, line 803:
protected function _migrateAclForApplication($application, $type)
{
$path = Tinebase_FileSystem::getInstance()->getApplicationBasePath(
$application->name,
$type
);
try {
$parentNode = Tinebase_FileSystem::getInstance()->stat($path);
} catch (Exception $tenf) {
// changed from Exception_NotFound to Exception as there can be other exceptions happening like db fields not found etc
return;
}
**Additional information:** 51b87 setupuser - 2017-05-19T23:51:25+00:00 ERR (3): Setup_Controller::updateApplication::572 SQLSTATE[42S22]: Column not found: 1054 Unknown column 'tree_filerevisions.preview_count' in 'field list', query was: SELECT `tree_nodes`.*, `tree_fileobjects`.`type`, `tree_fileobjects`.`created_by`, `tree_fileobjects`.`creation_time`, `tree_fileobjects`.`last_modified_by`, `tree_fileobjects`.`last_modified_time`, `tree_fileobjects`.`revision`, `tree_fileobjects`.`contenttype`, `tree_fileobjects`.`revision_size`, `tree_fileobjects`.`indexed_hash`, `tree_fileobjects`.`description`, `tree_filerevisions`.`hash`, `tree_filerevisions`.`size`, `tree_filerevisions`.`preview_count`, GROUP_CONCAT( DISTINCT `tree_filerevisions2`.`revision`) AS `available_revisions` FROM `tine20_tree_nodes` AS `tree_nodes`
LEFT JOIN `tine20_tree_fileobjects` AS `tree_fileobjects` ON `tree_nodes`.`object_id` = `tree_fileobjects`.`id`
LEFT JOIN `tine20_tree_filerevisions` AS `tree_filerevisions` ON `tree_fileobjects`.`id` = `tree_filerevisions`.`id` AND `tree_filerevisions`.`revision` = `tree_fileobjects`.`revision`
LEFT JOIN `tine20_tree_filerevisions` AS `tree_filerevisions2` ON `tree_fileobjects`.`id` = `tree_filerevisions2`.`id` WHERE ((`tree_nodes`.`parent_id` IS NULL)) AND ((`tree_nodes`.`name` LIKE ('0bec96ebadb4a70730f692d8210d25e64c40e22d'))) GROUP BY `tree_nodes`.`object_id`
51b87 setupuser - 2017-05-19T23:51:25+00:00 ERR (3): Setup_Controller::updateApplication::573 #0 /var/wwwroot/tine20-git/tine20/vendor/zendframework/zendframework1/library/Zend/Db/Statement.php(303): Zend_Db_Statement_Pdo->_execute(Array)
#1 /var/wwwroot/tine20-git/tine20/vendor/zendframework/zendframework1/library/Zend/Db/Adapter/Abstract.php(480): Zend_Db_Statement->execute(Array)
#2 /var/wwwroot/tine20-git/tine20/vendor/zendframework/zendframework1/library/Zend/Db/Adapter/Pdo/Abstract.php(238): Zend_Db_Adapter_Abstract->query('SELECT `tree_no...', Array)
#3 /var/wwwroot/tine20-git/tine20/Tinebase/Backend/Sql/Abstract.php(762): Zend_Db_Adapter_Pdo_Abstract->query(Object(Zend_Db_Select))
#4 /var/wwwroot/tine20-git/tine20/Tinebase/Backend/Sql/Abstract.php(542): Tinebase_Backend_Sql_Abstract->_fetch(Object(Zend_Db_Select), 'fetch_all')
#5 /var/wwwroot/tine20-git/tine20/Tinebase/Tree/Node.php(258): Tinebase_Backend_Sql_Abstract->search(Object(Tinebase_Model_Tree_Node_Filter))
#6 /var/wwwroot/tine20-git/tine20/Tinebase/FileSystem.php(1239): Tinebase_Tree_Node->getChild(NULL, '0bec96ebadb4a70...')
#7 /var/wwwroot/tine20-git/tine20/Tinebase/Setup/Update/Release10.php(810): Tinebase_FileSystem->stat('/0bec96ebadb4a7...')
#8 /var/wwwroot/tine20-git/tine20/Tinebase/Setup/Update/Release10.php(792): Tinebase_Setup_Update_Release10->_migrateAclForApplication(Object(Tinebase_Model_Application), 'personal')
#9 /var/wwwroot/tine20-git/tine20/Setup/Controller.php(566): Tinebase_Setup_Update_Release10->update_16()
| 1.0 | 0013106:
Update Tinebase from 10.16 to 10.24 fails - **Reported by ingoratsdorf on 20 May 2017 00:03**
Tinebase introduced a filesystem preview in 10.23; however, during the upgrade from 10.16 to 10.17 there is a call to filesystem stat() that fails, because Tinebase attempts a DB LEFT JOIN on the preview field, which is only added in 10.23.
The stat() call is enclosed in try..catch, but it only catches Exception_NotFound, not DB exceptions, hence the upgrade fails with an uncaught exception.
**Steps to reproduce:** Tinebase/Setup/Update/Release10.php, line 803:
protected function _migrateAclForApplication($application, $type)
{
$path = Tinebase_FileSystem::getInstance()->getApplicationBasePath(
$application->name,
$type
);
try {
$parentNode = Tinebase_FileSystem::getInstance()->stat($path);
} catch (Exception $tenf) {
// changed from Exception_NotFound to Exception as there can be other exceptions happening like db fields not found etc
return;
}
**Additional information:** 51b87 setupuser - 2017-05-19T23:51:25+00:00 ERR (3): Setup_Controller::updateApplication::572 SQLSTATE[42S22]: Column not found: 1054 Unknown column 'tree_filerevisions.preview_count' in 'field list', query was: SELECT `tree_nodes`.*, `tree_fileobjects`.`type`, `tree_fileobjects`.`created_by`, `tree_fileobjects`.`creation_time`, `tree_fileobjects`.`last_modified_by`, `tree_fileobjects`.`last_modified_time`, `tree_fileobjects`.`revision`, `tree_fileobjects`.`contenttype`, `tree_fileobjects`.`revision_size`, `tree_fileobjects`.`indexed_hash`, `tree_fileobjects`.`description`, `tree_filerevisions`.`hash`, `tree_filerevisions`.`size`, `tree_filerevisions`.`preview_count`, GROUP_CONCAT( DISTINCT `tree_filerevisions2`.`revision`) AS `available_revisions` FROM `tine20_tree_nodes` AS `tree_nodes`
LEFT JOIN `tine20_tree_fileobjects` AS `tree_fileobjects` ON `tree_nodes`.`object_id` = `tree_fileobjects`.`id`
LEFT JOIN `tine20_tree_filerevisions` AS `tree_filerevisions` ON `tree_fileobjects`.`id` = `tree_filerevisions`.`id` AND `tree_filerevisions`.`revision` = `tree_fileobjects`.`revision`
LEFT JOIN `tine20_tree_filerevisions` AS `tree_filerevisions2` ON `tree_fileobjects`.`id` = `tree_filerevisions2`.`id` WHERE ((`tree_nodes`.`parent_id` IS NULL)) AND ((`tree_nodes`.`name` LIKE ('0bec96ebadb4a70730f692d8210d25e64c40e22d'))) GROUP BY `tree_nodes`.`object_id`
51b87 setupuser - 2017-05-19T23:51:25+00:00 ERR (3): Setup_Controller::updateApplication::573 #0 /var/wwwroot/tine20-git/tine20/vendor/zendframework/zendframework1/library/Zend/Db/Statement.php(303): Zend_Db_Statement_Pdo->_execute(Array)
#1 /var/wwwroot/tine20-git/tine20/vendor/zendframework/zendframework1/library/Zend/Db/Adapter/Abstract.php(480): Zend_Db_Statement->execute(Array)
#2 /var/wwwroot/tine20-git/tine20/vendor/zendframework/zendframework1/library/Zend/Db/Adapter/Pdo/Abstract.php(238): Zend_Db_Adapter_Abstract->query('SELECT `tree_no...', Array)
#3 /var/wwwroot/tine20-git/tine20/Tinebase/Backend/Sql/Abstract.php(762): Zend_Db_Adapter_Pdo_Abstract->query(Object(Zend_Db_Select))
#4 /var/wwwroot/tine20-git/tine20/Tinebase/Backend/Sql/Abstract.php(542): Tinebase_Backend_Sql_Abstract->_fetch(Object(Zend_Db_Select), 'fetch_all')
#5 /var/wwwroot/tine20-git/tine20/Tinebase/Tree/Node.php(258): Tinebase_Backend_Sql_Abstract->search(Object(Tinebase_Model_Tree_Node_Filter))
#6 /var/wwwroot/tine20-git/tine20/Tinebase/FileSystem.php(1239): Tinebase_Tree_Node->getChild(NULL, '0bec96ebadb4a70...')
#7 /var/wwwroot/tine20-git/tine20/Tinebase/Setup/Update/Release10.php(810): Tinebase_FileSystem->stat('/0bec96ebadb4a7...')
#8 /var/wwwroot/tine20-git/tine20/Tinebase/Setup/Update/Release10.php(792): Tinebase_Setup_Update_Release10->_migrateAclForApplication(Object(Tinebase_Model_Application), 'personal')
#9 /var/wwwroot/tine20-git/tine20/Setup/Controller.php(566): Tinebase_Setup_Update_Release10->update_16()
| non_reli | update tinebase from to fails reported by ingoratsdorf on may tinebase introduced a filesystem preview in however during upgrade from to there is a call for filesystem stat that will fail as tinebase tries a db join left with the preview field and subsequently fails as preview db field will only be added in the stat call in enclosed in try catch but only catches exceptionnotfound not db exceptions hence the upgrade fails with an uncaught exception steps to reproduce tinabase setup update php line protected function migrateaclforapplication application type path tinebase filesystem getinstance gt getapplicationbasepath application gt name type try parentnode tinebase filesystem getinstance gt stat path catch exception tenf changed from exception notfound to exception as there can be other exceptions happening like db fields not found etc return additional information setupuser err setup controller updateapplication sqlstate column not found unknown column tree filerevisions preview count in field list query was select tree nodes tree fileobjects type tree fileobjects created by tree fileobjects creation time tree fileobjects last modified by tree fileobjects last modified time tree fileobjects revision tree fileobjects contenttype tree fileobjects revision size tree fileobjects indexed hash tree fileobjects description tree filerevisions hash tree filerevisions size tree filerevisions preview count group concat distinct tree revision as available revisions from tree nodes as tree nodes left join tree fileobjects as tree fileobjects on tree nodes object id tree fileobjects id left join tree filerevisions as tree filerevisions on tree fileobjects id tree filerevisions id and tree filerevisions revision tree fileobjects revision left join tree filerevisions as tree on tree fileobjects id tree id where tree nodes parent id is null and tree nodes name like group by tree nodes object id setupuser err setup controller updateapplication var wwwroot git vendor 
zendframework library zend db statement php zend db statement pdo gt execute array var wwwroot git vendor zendframework library zend db adapter abstract php zend db statement gt execute array var wwwroot git vendor zendframework library zend db adapter pdo abstract php zend db adapter abstract gt query select tree no array var wwwroot git tinebase backend sql abstract php zend db adapter pdo abstract gt query object zend db select var wwwroot git tinebase backend sql abstract php tinebase backend sql abstract gt fetch object zend db select fetch all var wwwroot git tinebase tree node php tinebase backend sql abstract gt search object tinebase model tree node filter var wwwroot git tinebase filesystem php tinebase tree node gt getchild null var wwwroot git tinebase setup update php tinebase filesystem gt stat var wwwroot git tinebase setup update php tinebase setup update gt migrateaclforapplication object tinebase model application personal var wwwroot git setup controller php tinebase setup update gt update | 0 |
1,575 | 17,153,215,257 | IssuesEvent | 2021-07-14 00:58:00 | ppy/osu | https://api.github.com/repos/ppy/osu | closed | Osu!Lazer doesn't open window | missing details type:reliability | This has been a problem for a while, and I enjoy Lazer over stable because of the mania keystrokes. It's been happening for over 3 months now, with me trying to open lazer and nothing happening. I assume it was because one time it was bugging, and I continued to restart it, and then it wouldn't open from then on. I'm currently on the latest version of Windows 10, 64 bit using an Alienware m17 if that helps at all, I'd just like to be able to use lazer again | True | Osu!Lazer doesn't open window - This has been a problem for a while, and I enjoy Lazer over stable because of the mania keystrokes. It's been happening for over 3 months now, with me trying to open lazer and nothing happening. I assume it was because one time it was bugging, and I continued to restart it, and then it wouldn't open from then on. I'm currently on the latest version of Windows 10, 64 bit using an Alienware m17 if that helps at all, I'd just like to be able to use lazer again | reli | osu lazer doesn t open window this has been a problem for a while and i enjoy lazer over stable because of the mania keystrokes it s been happening for over months now with me trying to open lazer and nothing happening i assume it was because one time it was bugging and i continued to restart it and then it wouldn t open from then on i m currently on the latest version of windows bit using an alienware if that helps at all i d just like to be able to use lazer again | 1 |
1,501 | 16,608,422,852 | IssuesEvent | 2021-06-02 08:00:12 | Azure/azure-sdk-for-java | https://api.github.com/repos/Azure/azure-sdk-for-java | closed | [BUG] Azure-storage-blob Hitting PoolAcquireTimeoutException on getAccountInfo call. | Azure.Core Client HttpClient customer-reported needs-author-feedback no-recent-activity question tenet-reliability | **Describe the bug**
A clear and concise description of what the bug is.
***Exception or Stack Trace***
`reactor.core.Exceptions.ReactiveException: reactor.netty.internal.shaded.reactor.pool.PoolAcquireTimeoutException: Pool#acquire(Duration) has been pending for more than the configured timeout of 45000ms
reactor.netty.internal.shaded.reactor.pool.PoolAcquireTimeoutException: Pool#acquire(Duration) has been pending for more than the configured timeout of 45000ms,`
`Caused by: reactor.core.Exceptions$ReactiveException: reactor.netty.internal.shaded.reactor.pool.PoolAcquireTimeoutException: Pool#acquire(Duration) has been pending for more than the configured timeout of 45000ms
at reactor.core.Exceptions.propagate(Exceptions.java:336) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:91) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
at reactor.core.publisher.Mono.block(Mono.java:1663) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
at
Suppressed: java.lang.Exception: #block terminated with an error
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:93) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
at reactor.core.publisher.Mono.block(Mono.java:1663) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]`
`Caused by: reactor.netty.internal.shaded.reactor.pool.PoolAcquireTimeoutException: Pool#acquire(Duration) has been pending for more than the configured timeout of 45000ms
at reactor.netty.internal.shaded.reactor.pool.AbstractPool$Borrower.run(AbstractPool.java:317) ~[reactor-netty-0.9.0.RELEASE.jar:0.9.0.RELEASE]
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_172]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_172]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_172]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_172]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_172]
... 1 more`
**To Reproduce**
Steps to reproduce the behavior:
`SkuName accountType = storageClient.getAccountInfo().block().getSkuName(); `
The exception is coming in above code. We kill the thread on the exception and create a new thread which retry this and fails again. The issue does not recover till the host is restarted.
***Code Snippet***
`SkuName accountType = storageClient.getAccountInfo().block().getSkuName(); `
**Expected behavior**
A transient error should not cause the host to go in bad state and a new thread should not stuck in this error loop.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Setup (please complete the following information):**
- OS: [e.g. iOS]
- IDE : [e.g. IntelliJ]
- Version of the Library used: azure-storage-blob 12.0.1
**Additional context**
Add any other context about the problem here.
**Information Checklist**
Kindly make sure that you have added all the following information above and checkoff the required fields otherwise we will treat the issuer as an incomplete report
- [x] Bug Description Added
- [x] Repro Steps Added
- [x] Setup information Added
| True | [BUG] Azure-storage-blob Hitting PoolAcquireTimeoutException on getAccountInfo call. - **Describe the bug**
A clear and concise description of what the bug is.
***Exception or Stack Trace***
`reactor.core.Exceptions.ReactiveException: reactor.netty.internal.shaded.reactor.pool.PoolAcquireTimeoutException: Pool#acquire(Duration) has been pending for more than the configured timeout of 45000ms
reactor.netty.internal.shaded.reactor.pool.PoolAcquireTimeoutException: Pool#acquire(Duration) has been pending for more than the configured timeout of 45000ms,`
`Caused by: reactor.core.Exceptions$ReactiveException: reactor.netty.internal.shaded.reactor.pool.PoolAcquireTimeoutException: Pool#acquire(Duration) has been pending for more than the configured timeout of 45000ms
at reactor.core.Exceptions.propagate(Exceptions.java:336) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:91) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
at reactor.core.publisher.Mono.block(Mono.java:1663) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
at
Suppressed: java.lang.Exception: #block terminated with an error
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:93) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
at reactor.core.publisher.Mono.block(Mono.java:1663) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]`
`Caused by: reactor.netty.internal.shaded.reactor.pool.PoolAcquireTimeoutException: Pool#acquire(Duration) has been pending for more than the configured timeout of 45000ms
at reactor.netty.internal.shaded.reactor.pool.AbstractPool$Borrower.run(AbstractPool.java:317) ~[reactor-netty-0.9.0.RELEASE.jar:0.9.0.RELEASE]
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_172]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_172]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_172]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_172]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_172]
... 1 more`
**To Reproduce**
Steps to reproduce the behavior:
`SkuName accountType = storageClient.getAccountInfo().block().getSkuName(); `
The exception is coming in above code. We kill the thread on the exception and create a new thread which retry this and fails again. The issue does not recover till the host is restarted.
***Code Snippet***
`SkuName accountType = storageClient.getAccountInfo().block().getSkuName(); `
**Expected behavior**
A transient error should not cause the host to go in bad state and a new thread should not stuck in this error loop.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Setup (please complete the following information):**
- OS: [e.g. iOS]
- IDE : [e.g. IntelliJ]
- Version of the Library used: azure-storage-blob 12.0.1
**Additional context**
Add any other context about the problem here.
**Information Checklist**
Kindly make sure that you have added all the following information above and checkoff the required fields otherwise we will treat the issuer as an incomplete report
- [x] Bug Description Added
- [x] Repro Steps Added
- [x] Setup information Added
| reli | azure storage blob hitting poolacquiretimeoutexception on getaccountinfo call describe the bug a clear and concise description of what the bug is exception or stack trace reactor core exceptions reactiveexception reactor netty internal shaded reactor pool poolacquiretimeoutexception pool acquire duration has been pending for more than the configured timeout of reactor netty internal shaded reactor pool poolacquiretimeoutexception pool acquire duration has been pending for more than the configured timeout of caused by reactor core exceptions reactiveexception reactor netty internal shaded reactor pool poolacquiretimeoutexception pool acquire duration has been pending for more than the configured timeout of at reactor core exceptions propagate exceptions java at reactor core publisher blockingsinglesubscriber blockingget blockingsinglesubscriber java at reactor core publisher mono block mono java at suppressed java lang exception block terminated with an error at reactor core publisher blockingsinglesubscriber blockingget blockingsinglesubscriber java at reactor core publisher mono block mono java caused by reactor netty internal shaded reactor pool poolacquiretimeoutexception pool acquire duration has been pending for more than the configured timeout of at reactor netty internal shaded reactor pool abstractpool borrower run abstractpool java at reactor core scheduler schedulertask call schedulertask java at reactor core scheduler schedulertask call schedulertask java at java util concurrent futuretask run futuretask java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask access scheduledthreadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java more to reproduce steps to reproduce the behavior skuname 
accounttype storageclient getaccountinfo block getskuname the exception is coming in above code we kill the thread on the exception and create a new thread which retry this and fails again the issue does not recover till the host is restarted code snippet skuname accounttype storageclient getaccountinfo block getskuname expected behavior a transient error should not cause the host to go in bad state and a new thread should not stuck in this error loop screenshots if applicable add screenshots to help explain your problem setup please complete the following information os ide version of the library used azure storage blob additional context add any other context about the problem here information checklist kindly make sure that you have added all the following information above and checkoff the required fields otherwise we will treat the issuer as an incomplete report bug description added repro steps added setup information added | 1 |
342,091 | 30,608,061,435 | IssuesEvent | 2023-07-23 09:01:47 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix raw_ops.test_tensorflow_TruncateDiv | TensorFlow Frontend Sub Task Failing Test | | | |
|---|---|
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5635438832/job/15266431343"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5635438832/job/15266431343"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5635438832/job/15266431343"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5635438832/job/15266431343"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5635438832/job/15266431343"><img src=https://img.shields.io/badge/-success-success></a>
| 1.0 | Fix raw_ops.test_tensorflow_TruncateDiv - | | |
|---|---|
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5635438832/job/15266431343"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5635438832/job/15266431343"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5635438832/job/15266431343"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5635438832/job/15266431343"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5635438832/job/15266431343"><img src=https://img.shields.io/badge/-success-success></a>
| non_reli | fix raw ops test tensorflow truncatediv torch a href src numpy a href src jax a href src tensorflow a href src paddle a href src | 0 |
1,690 | 18,714,787,754 | IssuesEvent | 2021-11-03 02:03:00 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Are significant numbers of OperationCanceledException in RoslynCodeAnalysisService normal? Is SolutionChecksumUpdater being too aggressive? | Bug Area-IDE Need More Info Tenet-Reliability | **Version Used**:
VS 2019, 16.10.3
**Steps to Reproduce**:
Unsure. Issue is sporadic.
------------------
* I have a sporadic issue (on multiple machines) where VS is much slower and less responsive than it should be, especially in the C# text editor.
* I made a trace with WPR and had a look with WPA which showed that _thousands_ of exceptions were being thrown from within `ServiceHub.RoslynCodeAnalysisService.exe` (**9,477** exceptions to be exact).
* I'll be happy to share this trace. It's only ~30 seconds long but is 9.8GB in size...
* 
* I attached a debugger to `RoslynCodeAnalysisService.exe` and observed that...
* the `OperationCanceledException` and `TaskCanceledException` exceptions that were being thrown originated from two places calling `CancelationTokenSource.Cancel()`:
* `JsonRpc.DispatchIncomingRequestAsync` after it received a `$/cancelRequest()` from `devenv.exe`
* The 5-minute AppInsights Telemetry timer loop - I assumed this was a red-herring though.
* I noticed whenever the `$/cancelRequest()` message was received, the `RoslynCodeAnalysisService.exe` process still took between 1 and 5 seconds to process, sometimes the last second-order `Task` would take even longer, 10-15 seconds to appear in the Output window. I don't know if that's normal or not.
* The preceding, and following, JsonRpc messages were:
* `GetDocumentHighlightsAsync()`
* `SynchronizeTextAsync()`
* `CalculateDiagnosticsAsync()`
* `CalculateDiagnosticsAsync()`
* `GetSemanticClassificationsAsync()`
* `$/cancelRequest()` - This is the first message I looked at. It caused 4 cascading `OperationCanceledException` exceptions.
* `$/cancelRequest()` - This occurred shortly afterwards and caused 1 immediate `OperationCanceledException` followed by 27 more exceptions.
* `SynchronizeTextAsync()`
* `GetSemanticClassificationsAsync()`
* `CalculateDiagnosticsAsync()`
* `OnGlobalOperationStartedAsync`
* `OnGlobalOperationStoppedAsync`
* (and more of the same)
* So **potential problem 1**: How long should `RoslynCodeAnalysisService.exe` normally take to respond to a `$/cancelRequest()`? And after a `$/cancelRequest()` are slow responses the cause of secondary/knock-on `$/cancelRequest` messages?
* **Potential problem 2**: If `RoslynCodeAnalysisService.exe` is to blame, then why is the editor itself (in `devenv.exe`) so unresponsive and laggy? Why is the UI thread making blocking waits? Unfortunately I didn't have the time today to look into what exactly the UI thread was blocking on, but the fact this unresponsive editor issue always happens whenever WPA shows thousands of exceptions being thrown within a single minute suggests it's related...
* So assuming that `RoslynCodeAnalysisService` _was_ behaving correctly and the cause is excessive `$/cancelRequest()` that's swamping `RoslynCodeAnalysisService.exe`, then why is devenv.exe sending them?
* So I attached a debugger to `devenv.exe` and set a breakpoint on `JsonRpc.InvokeCoreAsync` to see where the `cancelRequest` messages were coming from, and they were all coming from `Microsoft.CodeAnalysis.Features.dll!Microsoft.CodeAnalysis.SolutionCrawler.GlobalOperationAwareIdleProcessor.OnGlobalOperationStarted(object sender, System.EventArgs e)`, specifically the `Microsoft.CodeAnalysis.Remote.SolutionChecksumUpdater` subclass.
* **Potential problem 3**: Why on earth is `SolutionChecksumUpdater` being invoked _hundreds of times per minute_ simply from typing in the C# editor? And why is it sending so many `$/cancelRequest()` messages when cancelling-and-restarting a JsonRpc call seems to be more expensive than cancelling with a cooldown/backoff strategy?
------------
* I've previously experienced the issue and reported it (and made a WPR trace too), but [it didn't get anywhere and my replies with more detail were ignored](https://developercommunity.visualstudio.com/t/slow-vs-editor-experience-keyboard-input-lag-etc-p/1382922).
* However, that previous time, I only had debug stacks from the `devenv.exe` side, not from the `ServiceHub.RoslynCodeAnalysisService.exe` side. So this time I hope I have enough information.
* But I'm asking here first so I can be sure that this isn't a red-herring and that there isn't another underlying issue that exists elsewhere. | True | Are significant numbers of OperationCanceledException in RoslynCodeAnalysisService normal? Is SolutionChecksumUpdater being too aggressive? - **Version Used**:
VS 2019, 16.10.3
**Steps to Reproduce**:
Unsure. Issue is sporadic.
------------------
* I have a sporadic issue (on multiple machines) where VS is much slower and less responsive than it should be, especially in the C# text editor.
* I made a trace with WPR and had a look with WPA which showed that _thousands_ of exceptions were being thrown from within `ServiceHub.RoslynCodeAnalysisService.exe` (**9,477** exceptions to be exact).
* I'll be happy to share this trace. It's only ~30 seconds long but is 9.8GB in size...
* 
* I attached a debugger to `RoslynCodeAnalysisService.exe` and observed that...
* the `OperationCanceledException` and `TaskCanceledException` exceptions that were being thrown originated from two places calling `CancelationTokenSource.Cancel()`:
* `JsonRpc.DispatchIncomingRequestAsync` after it received a `$/cancelRequest()` from `devenv.exe`
* The 5-minute AppInsights Telemetry timer loop - I assumed this was a red-herring though.
* I noticed whenever the `$/cancelRequest()` message was received, the `RoslynCodeAnalysisService.exe` process still took between 1 and 5 seconds to process, sometimes the last second-order `Task` would take even longer, 10-15 seconds to appear in the Output window. I don't know if that's normal or not.
* The preceding, and following, JsonRpc messages were:
* `GetDocumentHighlightsAsync()`
* `SynchronizeTextAsync()`
* `CalculateDiagnosticsAsync()`
* `CalculateDiagnosticsAsync()`
* `GetSemanticClassificationsAsync()`
* `$/cancelRequest()` - This is the first message I looked at. It caused 4 cascading `OperationCanceledException` exceptions.
* `$/cancelRequest()` - This occurred shortly afterwards and caused 1 immediate `OperationCanceledException` followed by 27 more exceptions.
* `SynchronizeTextAsync()`
* `GetSemanticClassificationsAsync()`
* `CalculateDiagnosticsAsync()`
* `OnGlobalOperationStartedAsync`
* `OnGlobalOperationStoppedAsync`
* (and more of the same)
* So **potential problem 1**: How long should `RoslynCodeAnalysisService.exe` normally take to respond to a `$/cancelRequest()`? And after a `$/cancelRequest()` are slow responses the cause of secondary/knock-on `$/cancelRequest` messages?
* **Potential problem 2**: If `RoslynCodeAnalysisService.exe` is to blame, then why is the editor itself (in `devenv.exe`) so unresponsive and laggy? Why is the UI thread making blocking waits? Unfortunately I didn't have the time today to look into what exactly the UI thread was blocking on, but the fact this unresponsive editor issue always happens whenever WPA shows thousands of exceptions being thrown within a single minute suggests it's related...
* So assuming that `RoslynCodeAnalysisService` _was_ behaving correctly and the cause is excessive `$/cancelRequest()` that's swamping `RoslynCodeAnalysisService.exe`, then why is devenv.exe sending them?
* So I attached a debugger to `devenv.exe` and set a breakpoint on `JsonRpc.InvokeCoreAsync` to see where the `cancelRequest` messages were coming from, and they were all coming from `Microsoft.CodeAnalysis.Features.dll!Microsoft.CodeAnalysis.SolutionCrawler.GlobalOperationAwareIdleProcessor.OnGlobalOperationStarted(object sender, System.EventArgs e)`, specifically the `Microsoft.CodeAnalysis.Remote.SolutionChecksumUpdater` subclass.
* **Potential problem 3**: Why on earth is `SolutionChecksumUpdater` being invoked _hundreds of times per minute_ simply from typing in the C# editor? And why is it sending so many `$/cancelRequest()` messages when cancelling-and-restarting a JsonRpc call seems to be more expensive than cancelling with a cooldown/backoff strategy?
------------
* I've previously experienced the issue and reported it (and made a WPR trace too), but [it didn't get anywhere and my replies with more detail were ignored](https://developercommunity.visualstudio.com/t/slow-vs-editor-experience-keyboard-input-lag-etc-p/1382922).
* However, that previous time, I only had debug stacks from the `devenv.exe` side, not from the `ServiceHub.RoslynCodeAnalysisService.exe` side. So this time I hope I have enough information.
* But I'm asking here first so I can be sure that this isn't a red-herring and that there isn't another underlying issue that exists elsewhere. | reli | are significant numbers of operationcanceledexception in roslyncodeanalysisservice normal is solutionchecksumupdater being too aggressive version used vs steps to reproduce unsure issue is sporadic i have a sporadic issue on multiple machines where vs is much slower and less responsive than it should be especially in the c text editor i made a trace with wpr and had a look with wpa which showed that thousands of exceptions were being thrown from within servicehub roslyncodeanalysisservice exe exceptions to be exact i ll be happy to share this trace it s only seconds long but is in size i attached a debugger to roslyncodeanalysisservice exe and observed that the operationcanceledexception and taskcanceledexception exceptions that were being thrown originated from two places calling cancelationtokensource cancel jsonrpc dispatchincomingrequestasync after it received a cancelrequest from devenv exe the minute appinsights telemetry timer loop i assumed this was a red herring though i noticed whenever the cancelrequest message was received the roslyncodeanalysisservice exe process still took between and seconds to process sometimes the last second order task would take even longer seconds to appear in the output window i don t know if that s normal or not the preceding and following jsonrpc messages were getdocumenthighlightsasync synchronizetextasync calculatediagnosticsasync calculatediagnosticsasync getsemanticclassificationsasync cancelrequest this is the first message i looked at it caused cascading operationcanceledexception exceptions cancelrequest this occurred shortly afterwards and caused immediate operationcanceledexception followed by more exceptions synchronizetextasync getsemanticclassificationsasync calculatediagnosticsasync onglobaloperationstartedasync onglobaloperationstoppedasync and more of the same 
so potential problem how long should roslyncodeanalysisservice exe normally take to respond to a cancelrequest and after a cancelrequest are slow responses the cause of secondary knock on cancelrequest messages potential problem if roslyncodeanalysisservice exe is to blame then why is the editor itself in devenv exe so unresponsive and laggy why is the ui thread making blocking waits unfortunately i didn t have the time today to look into what exactly the ui thread was blocking on but the fact this unresponsive editor issue always happens whenever wpa shows thousands of exceptions being thrown within a single minute suggests it s related so assuming that roslyncodeanalysisservice was behaving correctly and the cause is excessive cancelrequest that s swamping roslyncodeanalysisservice exe then why is devenv exe sending them so i attached a debugger to devenv exe and set a breakpoint on jsonrpc invokecoreasync to see where the cancelrequest messages were coming from and they were all coming from microsoft codeanalysis features dll microsoft codeanalysis solutioncrawler globaloperationawareidleprocessor onglobaloperationstarted object sender system eventargs e specifically the microsoft codeanalysis remote solutionchecksumupdater subclass potential problem why on earth is solutionchecksumupdater being invoked hundreds of times per minute simply from typing in the c editor and why is it sending so many cancelrequest messages when cancelling and restarting a jsonrpc call seems to be more expensive than cancelling with a cooldown backoff strategy i ve previously experienced the issue and reported it and made a wpr trace too but however that previous time i only had debug stacks from the devenv exe side not from the servicehub roslyncodeanalysisservice exe side so this time i hope i have enough information but i m asking here first so i can be sure that this isn t a red herring and that there isn t another underlying issue that exists elsewhere | 1 |
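The rows above all follow the column layout given in the file metadata: per-issue fields (`id`, `created_at`, `repo`, …), the raw `body`, a concatenated `text_combine`, a lowercased cleaned `text`, a string `label` (`reli` / `non_reli`), and a `binary_label` (1 / 0). As an illustrative sketch only — the row dicts below are hand-copied stand-ins for the full file, not part of the dump — the two label columns can be consistency-checked and the positive (reliability) class filtered out like this:

```python
# Illustrative sketch, not part of the dataset: a few hand-copied fields
# stand in for the full file. Column names follow the file metadata above.
rows = [
    {"repo": "ppy/osu",                  "label": "reli",     "binary_label": 1},
    {"repo": "Azure/azure-sdk-for-java", "label": "reli",     "binary_label": 1},
    {"repo": "unifyai/ivy",              "label": "non_reli", "binary_label": 0},
    {"repo": "dotnet/roslyn",            "label": "reli",     "binary_label": 1},
]

def is_consistent(row):
    # The string label and the 0/1 label encode the same thing,
    # so they should always agree.
    return (row["label"] == "reli") == (row["binary_label"] == 1)

assert all(is_consistent(r) for r in rows)

# Filter down to the reliability-labeled issues, i.e. what a classifier's
# positive class would be built from in this dump.
reliability_issues = [r["repo"] for r in rows if r["binary_label"] == 1]
print(reliability_issues)  # ['ppy/osu', 'Azure/azure-sdk-for-java', 'dotnet/roslyn']
```

The same check scales to the full file once it is parsed row-by-row; any row where the two label columns disagree would indicate a labeling error in the dataset.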